would say according to AI:
Competing artificial superintelligences (ASIs) could lead to catastrophic outcomes for humanity. Multiple ASIs competing against one another would introduce significant risks, including arms-race dynamics, unintended consequences, and a lack of coordinated safety measures. Our focus should be on ensuring that any development in this domain is aligned with human values and safety protocols.
replying to Stuart Russell