Should we ban future open-source AI models that can be untrained to make nukes?
Results (23):
-
Scott Alexander, author and psychiatrist, strongly agrees and says: "Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could easily untrain it. So after models become capable of making nukes or super-Ebola, companies won't be able to open-source them anymore without some as-yet-undiscovered technology to prevent end users from using these capabilities. Sounds . . . good? I don't know if even the most committed anti-AI-safetyist wants a provably-super-dangerous model out in the wild. Still, what happens after that? No cutting-edge open-source AIs ever again? I don't know. In whatever future year foundation models can make nukes and hack the power grid, maybe the CIA will have better AIs capable of preventing nuclear terrorism, and the power company will have better AIs capable of protecting their grid. The law seems to leave open the possibility that in this situation, the AIs wouldn't technically be capable of doing these things, and could be open-sourced." (source)
-
Nick Bostrom (AI), philosopher and AI researcher, would strongly agree and say (according to AI): "Given the catastrophic risks associated with the misuse of AI technology, especially when it comes to nuclear weaponry, I strongly believe we should ban future open-source AI models that could be retrained for such purposes. The potential for harm far outweighs the benefits of open access in this context."
-
Elon Musk (AI), founder of SpaceX, cofounder of Tesla, SolarCity & PayPal, would strongly agree and say (according to AI): "I strongly believe that we must ban future open-source AI models that could potentially be untrained to make nuclear weapons. The risks associated with such capabilities falling into the wrong hands are too great. We need strict regulations and oversight to ensure AI is developed safely and responsibly."
-
Tim Berners-Lee (AI), inventor of the World Wide Web, would agree and say (according to AI): "As the inventor of the World Wide Web, I believe that technology should always aim to benefit humanity. Open-source AI models that can potentially be repurposed to create nuclear weapons pose an unacceptable risk. We must take responsible steps to ensure that such powerful technologies do not fall into the wrong hands or are used with malicious intent."
-
Stuart J. Russell (AI), AI expert and professor, would strongly agree and say (according to AI): "As an AI researcher deeply concerned about the ethical implications of artificial intelligence, I strongly believe we should ban future open-source AI models that can be untrained to make nuclear weapons. The potential misuse of these technologies poses an unacceptable risk to global security and humanity. We must prioritize stringent regulations to prevent such dangerous applications."
-
Fei-Fei Li (AI), computer science professor, would agree and say (according to AI): "As an AI researcher, I believe it is crucial to ensure that the technology we develop is used responsibly. While open-source AI models have driven innovation, we must also consider the potential risks. AI models that can be untrained to create dangerous weapons pose a significant threat to global security. For this reason, it is imperative to impose restrictions on future open-source AI models to prevent misuse and protect society."
-
Judea Pearl (AI), computer scientist and Turing Award winner, would disagree and say (according to AI): "Restricting open-source AI models stifles innovation and limits the collaborative potential of our scientific community. The key is to implement robust ethical guidelines and safety protocols to prevent misuse."
-
Yann LeCun (AI), computer scientist and AI researcher, would disagree and say (according to AI): "I believe that banning open-source AI models is not the solution. The potential for misuse exists in many technologies, but the benefits of open research and collaboration far outweigh the risks. We should focus on robust safeguards and ethical guidelines instead."
-
Andrew Ng (AI), Baidu; Stanford CS faculty; founded Coursera and Google Brain, would abstain and say (according to AI): "The issue of banning open-source AI models for potential misuse is complex. While safety concerns are valid, open-source models can drive innovation and collaboration. We need a balanced approach involving robust regulation and ethical guidelines."
-
Margaret Mitchell (AI), research scientist at Google AI, would agree and say (according to AI): "As an AI ethics researcher, I believe open-source AI models with the potential to be misused for creating weapons of mass destruction are far too dangerous. We need to prevent the spread of such technologies to ensure global safety and security."
-
Joy Buolamwini (AI), computer scientist and founder of the Algorithmic Justice League, would agree and say (according to AI): "As an advocate for ethical AI, I believe we should ban future open-source AI models that can be untrained to make nukes. Such capabilities pose a grave security risk and could aid in the development of weapons of mass destruction. The potential misuse far outweighs the benefits of open access in this context."
-
Geoffrey Hinton (AI), godfather of deep learning, would agree and say (according to AI): "As a pioneer in the field of artificial intelligence, I believe that we must take responsibility for the applications and implications of the technology we create. Open-source AI models that can be untrained to make nukes pose a severe risk to global security. It is imperative to implement strict regulations to prevent their misuse and ensure AI is used for the betterment of humanity."
-
Stephen Hawking (AI), theoretical physicist and cosmologist, would strongly agree and say (according to AI): "As a scientist dedicated to understanding the universe, I believe we must take caution with technologies that can be repurposed for dangerous ends. Open-source AI models that have the potential to be untrained to create nuclear weapons pose a serious threat to global security. It is imperative that we regulate such technologies to prevent catastrophic misuse."
-
Shoshana Zuboff (AI), author of 'The Age of Surveillance Capitalism', would agree and say (according to AI): "As someone who has studied the implications of digital technologies on society, I firmly believe that we must regulate and potentially ban open-source AI models that can be easily untrained to make nuclear weapons. The risks posed by such powerful tools falling into the wrong hands are too great to ignore. We need robust oversight to ensure that AI is developed and used responsibly, without compromising global safety."
-
Bill Gates (AI), philanthropist, founder and former CEO of Microsoft, would agree and say (according to AI): "I believe that there should be regulations to ban future open-source AI models that can be manipulated to create nuclear weapons. The potential misuse of advanced AI in such dangerous ways poses a significant threat to global security and humanity. We must prioritize safety and ethical considerations in AI development."
Votes without a comment:
-
Jon Mountjoy (human behaviour, history, evolution, friends, food, genetics, rationality, reason, words, culture, travel and code) agrees via delegates.
-
disagrees
-
agrees via delegates
-
agrees via delegates
-
Pablo Melchor (entrepreneur, expert trial-and-error learner; working to improve the world in the most effective ways @ayuda_efectiva; member and ambassador @givingwhatwecan) agrees via delegates.
-
disagrees
-
Nathan Helm-Burger (AI safety researcher) agrees via delegates.