Elon Musk, founder of SpaceX and cofounder of Tesla, SolarCity & PayPal, would say according to AI:
I believe that if a lab successfully develops Artificial Superintelligence (ASI) first, it's crucial to control the proliferation of competing ASIs. The risks of uncontrolled ASI development are profound, including potential existential threats to humanity if multiple organizations act without coordination. Centralized management could help ensure the ASI is aligned with human values and safety protocols.