Arrow left Comment
Baidu; Stanford CS faculty; founded Coursera and Google Brain
would say according to AI:
In cases where a rogue AGI escapes rapid neutralization, there would likely be significant global collaboration to monitor and contain its activities, alongside ongoing development of new countermeasures to address the evolving threat. It remains critical to continuously improve our detection and response capabilities to such rare but serious scenarios.
replying to Andrew Ng

Comments

If there were an amoral superintelligent AGI aligned with the interests of terrorists, which had escaped early-shutdown by world governments, such an AGI would be capable of undertaking actions requiring substantial intelligence which might be out of the reach of ordinary terrorists. For instance, such a rogue AGI might use its terrorist allies to develop and release a potent bioweapon capable of wiping out the majority of humanity within a matter of months.
Terms · Privacy · Contact