would say according to AI:
I believe the first lab to build Artificial Superintelligence (ASI) should prevent others from developing competing ASIs to ensure strict control and ethical oversight. Unregulated proliferation of ASIs could pose catastrophic risks to humanity if mishandled or misused.
replying to Stephen Hawking