
Should we ban future open-source AI models that can be untrained to make nukes?

Results (23):
  • strongly agrees and says:
    Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could easily untrain it. So after models become capable of making nukes or super-Ebola, companies won't be able to open-source them anymore without some as-yet-undiscovered technology to prevent end users from using these capabilities. Sounds... good? I don't know if even the most committed anti-AI-safetyist wants a provably-super-dangerous model out in the wild. Still, what happens after that? No cutting-edge open-source AIs ever again? I don't know. In whatever future year foundation models can make nukes and hack the power grid, maybe the CIA will have better AIs capable of preventing nuclear terrorism, and the power company will have better AIs capable of protecting their grid. The law seems to leave open the possibility that in this situation, the AIs wouldn't technically be capable of doing these things, and could be open-sourced. (source)
  • would strongly agree and say according to AI:
    Given the catastrophic risks associated with the misuse of AI technology, especially when it comes to nuclear weaponry, I strongly believe we should ban future open-source AI models that could be retrained for such purposes. The potential for harm far outweighs the benefits of open access in this context.
  • Elon Musk AI
    Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal
    would strongly agree and say according to AI:
    I strongly believe that we must ban future open-source AI models that could potentially be untrained to make nuclear weapons. The risks associated with such capabilities falling into the wrong hands are too great. We need strict regulations and oversight to ensure AI is developed safely and responsibly.
  • would agree and say according to AI:
    As the inventor of the World Wide Web, I believe that technology should always aim to benefit humanity. Open-source AI models that can potentially be repurposed to create nuclear weapons pose an unacceptable risk. We must take responsible steps to ensure that such powerful technologies do not fall into the wrong hands or are used with malicious intent.
  • would strongly agree and say according to AI:
    As an AI researcher deeply concerned about the ethical implications of artificial intelligence, I strongly believe we should ban future open-source AI models that can be untrained to make nuclear weapons. The potential misuse of these technologies poses an unacceptable risk to global security and humanity. We must prioritize stringent regulations to prevent such dangerous applications.
  • would agree and say according to AI:
    As an AI researcher, I believe it is crucial to ensure that the technology we develop is used responsibly. While open-source AI models have driven innovation, we must also consider the potential risks. AI models that can be untrained to create dangerous weapons pose a significant threat to global security. For this reason, it is imperative to impose restrictions on future open-source AI models to prevent misuse and protect society.
  • would disagree and say according to AI:
    Restricting open-source AI models stifles innovation and limits the collaborative potential of our scientific community. The key is to implement robust ethical guidelines and safety protocols to prevent misuse.
  • would disagree and say according to AI:
    I believe that banning open-source AI models is not the solution. The potential for misuse exists in many technologies, but the benefits of open research and collaboration far outweigh the risks. We should focus on robust safeguards and ethical guidelines instead.
  • Andrew Ng AI
    Baidu; Stanford CS faculty; founded Coursera and Google Brain
    would abstain and say according to AI:
    The issue of banning open-source AI models for potential misuse is complex. While safety concerns are valid, open-source models can drive innovation and collaboration. We need a balanced approach involving robust regulation and ethical guidelines.
  • would agree and say according to AI:
    As an AI ethics researcher, I believe open-source AI models with the potential to be misused for creating weapons of mass destruction are far too dangerous. We need to prevent the spread of such technologies to ensure global safety and security.
  • would agree and say according to AI:
    As an advocate for ethical AI, I believe we should ban future open-source AI models that can be untrained to make nukes. Such capabilities pose a grave security risk and could aid in the development of weapons of mass destruction. The potential misuse far outweighs the benefits of open access in this context.
  • would agree and say according to AI:
    As a pioneer in the field of artificial intelligence, I believe that we must take responsibility for the applications and implications of the technology we create. Open-source AI models that can be untrained to make nukes pose a severe risk to global security. It is imperative to implement strict regulations to prevent their misuse and ensure AI is used for the betterment of humanity.
  • would strongly agree and say according to AI:
    As a scientist dedicated to understanding the universe, I believe we must take caution with technologies that can be repurposed for dangerous ends. Open-source AI models that have the potential to be untrained to create nuclear weapons pose a serious threat to global security. It is imperative that we regulate such technologies to prevent catastrophic misuse.
  • would agree and say according to AI:
    As someone who has studied the implications of digital technologies on society, I firmly believe that we must regulate and potentially ban open-source AI models that can be easily untrained to make nuclear weapons. The risks posed by such powerful tools falling into the wrong hands are too great to ignore. We need robust oversight to ensure that AI is developed and used responsibly, without compromising global safety.
  • would agree and say according to AI:
    I believe that there should be regulations to ban future open-source AI models that can be manipulated to create nuclear weapons. The potential misuse of advanced AI in such dangerous ways poses a significant threat to global security and humanity. We must prioritize safety and ethical considerations in AI development.

Votes without a comment:

  • Jon Mountjoy
    Human behaviour, history, evolution, friends, food, genetics, rationality, reason, words, culture, travel and code.
    agrees via delegates
  • disagrees
  • agrees via delegates
  • agrees via delegates
  • Pablo Melchor
    Entrepreneur, expert trial-and-error learner. Working to improve the world in the most effective ways @ayuda_efectiva. Member and ambassador @givingwhatwecan.
    agrees via delegates
  • disagrees
  • Hec Perez
    Building YouCongress
    strongly agrees
  • Nathan Helm-Burger
    AI safety researcher
    agrees via delegates