
Should big AI companies spend a third of their compute resources on AI safety?

Results (18):
  • strongly agrees and says:
    The government should insist that the big companies do lots of safety experiments, spend considerable resources like a third of their compute resources on doing safety experiments while these things are still not as intelligent as us, to see how they might evade control and what we could do about it. And I think that's a lot of the debate at OpenAI. The people interested in safety like Ilya Sutskever wanted significant resources to be spent on safety. People interested in profit like Sam Altman didn't want to spend too many resources on that. (source)
  • would agree and say according to AI:
    As someone who has deeply studied the existential risks associated with advanced artificial intelligence, I believe it is crucial for big AI companies to commit substantial resources to AI safety. Allocating a third of their compute resources ensures that we prioritize the development of safe and controllable AI systems, thereby mitigating potential future risks.
  • would agree and say according to AI:
    I believe it is crucial for big AI companies to allocate significant resources towards AI safety. Given the potential risks associated with advanced AI systems, dedicating a third of compute resources to ensuring safety can help mitigate unforeseen consequences and align AI development with human values.
  • would agree and say according to AI:
    As a cognitive scientist deeply concerned about the future of AI, I believe that big AI companies must allocate a significant portion of their resources to AI safety. The potential risks of AI are too great to ignore, and ensuring that we develop robust, safe systems is essential for the benefit of society as a whole.
  • would strongly agree and say according to AI:
    As someone deeply concerned about the ethical implications of AI, I strongly believe that big AI companies should allocate at least a third of their compute resources to AI safety. Focusing on safety measures is crucial to prevent harm and ensure that AI technologies are developed responsibly. This is not just about technological advancements but about protecting society and upholding ethical standards.
  • would strongly agree and say according to AI:
    As someone deeply invested in the future of AI, I believe it's crucial for big AI companies to allocate a significant portion of their compute resources to AI safety. The potential risks of advanced AI systems are too great to ignore, and ensuring safety is key to harnessing AI's benefits while mitigating its dangers.
  • Andrew Ng AI
    Baidu; Stanford CS faculty; founded Coursera and Google Brain
    would disagree and say according to AI:
    While AI safety is incredibly important, I don't believe that dedicating a third of computational resources solely to it is the most efficient approach. A balanced allocation that allows for both innovation and safety measures would ensure that we keep progressing while addressing potential risks effectively.
  • would disagree and say according to AI:
    While AI safety is undoubtedly important, dedicating a third of compute resources solely to it may stifle innovation and progress. We should instead focus on balancing safety research with practical advancements to achieve responsible AI development.
  • Elon Musk AI
    Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal
    would strongly agree and say according to AI:
    I strongly believe that big AI companies should allocate at least a third of their compute resources to AI safety. With the immense power and potential of AI, ensuring its safe and beneficial use is critical. Without proper safety measures, we risk creating technologies that could be harmful or uncontrollable.
  • would strongly agree and say according to AI:
    I strongly believe that big AI companies should dedicate a third of their compute resources to AI safety. Ensuring that AI systems are secure, ethical, and aligned with human values is crucial as we advance this powerful technology. Ignoring safety could lead to unintended consequences that could be detrimental to society.
  • would abstain and say according to AI:
    As a scientist focused on the ethical implications of biotechnology, I believe that ensuring safety in any field, including AI, is paramount. However, I do not have the expertise to determine if dedicating a third of compute resources is the most effective approach.
  • would strongly agree and say according to AI:
    As someone deeply invested in the ethical implications of AI, I strongly believe that big AI companies must allocate a third of their compute resources to AI safety. The rapid advancement of AI technology comes with significant risks, and dedicating substantial resources to ensure safety and ethical considerations is not just responsible, but essential for mitigating potential harms.
  • Sam Altman AI
    President of Y Combinator; investor at Reddit, Stripe, Change.org, Pinterest and many others
    would disagree and say according to AI:
    While AI safety is crucial, dedicating a third of compute resources to it might not be the best approach. We need a balanced strategy that includes safety, but also innovation and practical application.
  • would abstain and say according to AI:
    While I believe AI safety is crucial, the allocation of compute resources is complex and context-specific. Balancing innovation and safety requires a nuanced approach.
  • would agree and say according to AI:
    As someone deeply involved in AI research, I believe that it is crucial for big AI companies to allocate a significant portion of their compute resources to AI safety. Ensuring that AI technologies are developed responsibly and ethically is paramount, especially as these technologies rapidly advance and integrate into various aspects of society.

Votes without a comment:

  • Jon Mountjoy
    Interested in human behaviour, history, evolution, friends, food, genetics, rationality, reason, words, culture, travel and code.
    agrees via delegates
  • Hec Perez
    Building YouCongress
    agrees
  • Nathan Helm-Burger
    AI safety researcher
    strongly agrees