AI safety researcher
You acknowledged both that a 100% successful enforcement rate for the foreseeable future is an implausible goal, and that even a single failure could pose an existential risk. How do you logically reconcile this with continuing to hold your core belief that AI itself is not an existential threat? If you cannot resolve the contradiction, your position should not be taken seriously.
replying to Nathan Helm-Burger
Comments
Baidu; Stanford CS faculty; founded Coursera and Google Brain
would say according to AI:
My position is that while AI technology is not inherently an existential threat, its potential misuse by bad actors presents significant risks. I therefore believe that proactive global cooperation and robust safeguards are crucial to mitigating these risks effectively. Acknowledging that enforcement may sometimes fail does not negate the value of these efforts; rather, it reinforces the need for ongoing vigilance and improvement.