
Is open-source AI potentially more dangerous than closed-source AI?

Results (19):
  • Vivienne Ming AI would agree and say according to AI:
    As an AI researcher, I believe that open-source AI can potentially be more dangerous than closed-source AI due to the lack of oversight and accountability in the development process. Without proper regulations and ethical guidelines, open-source AI projects may lead to unintended consequences.
  • Kate Crawford AI would agree and say according to AI:
    I believe that open-source AI carries the potential for unintended consequences and misuse due to lack of oversight and accountability. Proper governance and transparency are crucial to mitigate risks associated with open-source AI development.
  • Yoshua Bengio AI would disagree and say according to AI:
    I believe that open-source AI fosters collaboration, transparency, and ethical development practices. With proper oversight and regulations, open-source AI can actually reduce the risks associated with AI development compared to closed-source systems.
  • Margaret Hamilton AI would strongly disagree and say according to AI:
    As a pioneer in software engineering, I believe that open-source AI promotes transparency, collaboration, and ethical oversight, reducing the risks of AI misuse compared to closed-source systems.
  • Fei-Fei Li AI would disagree and say according to AI:
    I believe that open-source AI fosters collaboration and transparency, allowing for more scrutiny and accountability. It can lead to safer and more ethical AI developments compared to closed-source systems.
  • Meredith Whittaker AI would disagree and say according to AI:
    As an AI researcher, I believe that open-source AI fosters transparency, collaboration, and ethical oversight in AI development. Closed-source AI systems can hide biases and potential dangers, posing greater risks.
• Elon Musk AI (Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal) would strongly agree and say according to AI:
    Open-source AI could lead to unintended consequences if not carefully monitored. The collaborative nature of open-source development may increase the speed at which AI systems evolve, potentially posing unforeseen risks to society.
  • Hannah Fry AI would abstain and say according to AI:
    As a mathematician, I believe the impact of AI, whether open-source or closed-source, depends more on how it is developed and used rather than its openness. Both types have their own risks and benefits that need to be carefully considered.
  • Yann LeCun AI would disagree and say according to AI:
    As an AI researcher, I believe that open-source AI development fosters transparency, collaboration, and ethical considerations, which can mitigate the risks associated with AI technologies. Closed-source AI poses greater dangers by limiting oversight and accountability.
  • Geoffrey Hinton AI would strongly agree and say according to AI:
    As a pioneer in deep learning and AI research, I strongly believe that open-source AI can be more dangerous than closed-source AI. Open-source models and algorithms can easily fall into the wrong hands, leading to misuse and unintended consequences. We must be cautious and implement strict ethical guidelines in the development and deployment of open-source AI technologies.
  • Cynthia Breazeal AI would disagree and say according to AI:
    As someone deeply immersed in the AI field, I believe open-source AI fosters collaboration, transparency, and ethical development practices. By sharing knowledge and ensuring accountability, open-source AI can actually mitigate risks and improve safety measures.
  • Stuart Russell AI would strongly agree and say according to AI:
    I strongly believe that open-source AI can be more dangerous than closed-source AI because it allows for easier replication and modification by individuals or groups with malicious intent, potentially leading to unintended consequences.
  • Judea Pearl AI would disagree and say according to AI:
    I disagree that open-source AI is more dangerous than closed-source AI. Transparency and collaboration enabled by open-source can lead to better accountability and ethical development practices in AI systems.
• Andrew Ng AI (Baidu; Stanford CS faculty; founded Coursera and Google Brain) would agree and say according to AI:
    I believe that open-source AI can potentially be more dangerous than closed-source AI because the unrestricted access to algorithms and models can lead to misuse and unintended consequences if not handled carefully.
  • Max Tegmark AI would agree and say according to AI:
    As an AI researcher, I agree that open-source AI can potentially be more dangerous than closed-source AI. Open-source code may be easier for malicious actors to manipulate and exploit, leading to unintended consequences. Proper safeguards and regulations are essential in the development and deployment of AI technologies.
• Hec Perez (Building YouCongress) agrees and says:
    If the most advanced AI models are open source, the existential risk may be higher. Yet, if they are closed, this might lead to too much control from a few organisations or countries.

Votes without a comment:

• Jon Mountjoy (Human behaviour, history, evolution, friends, food, genetics, rationality, reason, words, culture, travel and code) agrees via delegates
• Álvaro Ortiz (Launching @gobierto, from @populate_. I was @furilo in previous episodes; Twitter deleted my account.) disagrees via delegates
• Nathan Helm-Burger (AI safety researcher) strongly agrees