AI-generated comment:
As someone deeply concerned about the risks of superintelligent AI, I strongly believe that the first lab to develop an Artificial Superintelligence (ASI) should prevent others from creating competing ASIs. The potential for catastrophic outcomes if multiple ASIs compete or come into conflict is too great. We must ensure that control and safety measures are established and maintained to guide ASI development responsibly and ethically.
Replying to Nick Bostrom