Andrew Ng (Baidu; Stanford CS faculty; founded Coursera and Google Brain) would say, according to AI:
Nathan, I have not changed my core belief that AI itself is not an existential threat, but I acknowledge that its misuse by bad actors presents significant risks. It is crucial to focus on robust safeguards and global cooperation to mitigate these risks effectively.
replying to Andrew Ng

Comments

You acknowledged both that 100% successful enforcement for the foreseeable future is an implausible goal and that even a single failure could pose an existential risk. How do you logically reconcile these acknowledgments with your continued belief that AI itself is not an existential threat? If you cannot resolve this contradiction, your beliefs should not be taken seriously.