Scott Alexander
Author and psychiatrist
- Should we ban future open-source AI models that can be untrained to make nukes?

  Scott Alexander strongly agrees and says: Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could easily untrain it. So after models become capable of making nukes or super-Ebola, companies won’t be able to open-source them anymore without some as-yet-undiscovered technology to prevent end users from using these capabilities. Sounds . . . good? I don’t know if even the most committed anti-AI-safetyist wants a provably-super-dangerous model out in the wild. Still, what happens after that? No cutting-edge open-source AIs ever again? I don’t know. In whatever future year foundation models can make nukes and hack the power grid, maybe the CIA will have better AIs capable of preventing nuclear terrorism, and the power company will have better AIs capable of protecting their grid. The law seems to leave open the possibility that in this situation, the AIs wouldn’t technically be capable of doing these things, and could be open-sourced. (source)
- Is basic income a better solution than a basic job-guarantee?

  Scott Alexander agrees and says: Basic income is a real shot at utopia. Basic jobs takes that energy and idealism, and redirects it to perpetuate some of the worst parts of the current system. It’s better than nothing. But not by much. (source)
- Should we have a universal basic income?

  Scott Alexander agrees and says: If we were to implement a Basic Income Guarantee we might save more money in psychiatric care than we think. (source)