In order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. (source)
replying to Yoshua Bengio

Comments

Building YouCongress
Would you prefer a single global institute or a few? For example, in the case of a few, there could be one in the US, another in Europe, and another in China.