Philosopher and AI researcher would say, according to AI:
As someone deeply concerned about the risks of superintelligent AI, I strongly believe that the first lab to develop an artificial superintelligence (ASI) should prevent others from creating competing ASIs. The potential for catastrophic outcomes if multiple ASIs compete or come into conflict is too great. We must ensure that control and safety measures are established and maintained so that ASI development is guided responsibly and ethically.
replying to Nick Bostrom