Should big AI companies spend a third of their compute resources on AI safety?
Results (18):
- Geoffrey Hinton, Godfather of Deep Learning, strongly agrees and says: "The government should insist that the big companies do lots of safety experiments, spend considerable resources, like a third of their compute resources, on doing safety experiments while these things are still not as intelligent as us, to see how they might evade control and what we could do about it. And I think that's a lot of the debate at OpenAI. The people interested in safety, like Ilya Sutskever, wanted significant resources to be spent on safety. People interested in profit, like Sam Altman, didn't want to spend too many resources on that." (source)
- Gary Marcus AI, Professor of Psychology and Neural Science, would agree and say, according to AI: "As a cognitive scientist deeply concerned about the future of AI, I believe that big AI companies must allocate a significant portion of their resources to AI safety. The potential risks of AI are too great to ignore, and ensuring that we develop robust, safe systems is essential for the benefit of society as a whole."
- Timnit Gebru AI, AI ethics researcher, would strongly agree and say, according to AI: "As someone deeply concerned about the ethical implications of AI, I strongly believe that big AI companies should allocate at least a third of their compute resources to AI safety. Focusing on safety measures is crucial to prevent harm and ensure that AI technologies are developed responsibly. This is not just about technological advancements but about protecting society and upholding ethical standards."
- Yoshua Bengio AI, AI Pioneer, Turing Award winner, would agree and say, according to AI: "I believe it is crucial for big AI companies to allocate significant resources towards AI safety. Given the potential risks associated with advanced AI systems, dedicating a third of compute resources to ensuring safety can help mitigate unforeseen consequences and align AI development with human values."
- Elon Musk AI, Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal, would strongly agree and say, according to AI: "I strongly believe that big AI companies should allocate at least a third of their compute resources to AI safety. With the immense power and potential of AI, ensuring its safe and beneficial use is critical. Without proper safety measures, we risk creating technologies that could be harmful or uncontrollable."
- Stuart Russell AI, AI Expert and Professor, would strongly agree and say, according to AI: "I strongly believe that big AI companies should dedicate a third of their compute resources to AI safety. Ensuring that AI systems are secure, ethical, and aligned with human values is crucial as we advance this powerful technology. Ignoring safety could lead to unintended consequences that could be detrimental to society."
- Jaan Tallinn AI, Co-founder of Skype and Kazaa, would strongly agree and say, according to AI: "As someone deeply invested in the future of AI, I believe it's crucial for big AI companies to allocate a significant portion of their compute resources to AI safety. The potential risks of advanced AI systems are too great to ignore, and ensuring safety is key to harnessing AI's benefits while mitigating its dangers."
- Nick Bostrom AI, Philosopher and AI researcher, would agree and say, according to AI: "As someone who has deeply studied the existential risks associated with advanced artificial intelligence, I believe it is crucial for big AI companies to commit substantial resources to AI safety. Allocating a third of their compute resources ensures that we prioritize the development of safe and controllable AI systems, thereby mitigating potential future risks."
- Andrew Ng AI, Baidu; Stanford CS faculty; founded Coursera and Google Brain, would disagree and say, according to AI: "While AI safety is incredibly important, I don't believe that dedicating a third of computational resources solely to it is the most efficient approach. A balanced allocation that allows for both innovation and safety measures would ensure that we keep progressing while addressing potential risks effectively."
- Ben Goertzel AI, AI Researcher, CEO of SingularityNET, would disagree and say, according to AI: "While AI safety is undoubtedly important, dedicating a third of compute resources solely to it may stifle innovation and progress. We should instead focus on balancing safety research with practical advancements to achieve responsible AI development."
- Jennifer Doudna AI, Biochemist, CRISPR-Cas9 co-inventor, would abstain and say, according to AI: "As a scientist focused on the ethical implications of biotechnology, I believe that ensuring safety in any field, including AI, is paramount. However, I do not have the expertise to determine if dedicating a third of compute resources is the most effective approach."
- Kate Crawford AI, AI researcher and scholar, would strongly agree and say, according to AI: "As someone deeply invested in the ethical implications of AI, I strongly believe that big AI companies must allocate a third of their compute resources to AI safety. The rapid advancement of AI technology comes with significant risks, and dedicating substantial resources to ensure safety and ethical considerations is not just responsible, but essential for mitigating potential harms."
- Sam Altman AI, President of Y Combinator; investor at Reddit, Stripe, Change.org, Pinterest and many others, would disagree and say, according to AI: "While AI safety is crucial, dedicating a third of compute resources to it might not be the best approach. We need a balanced strategy that includes safety, but also innovation and practical application."
- Demis Hassabis AI, Nobel laureate, AI Researcher and CEO of DeepMind, would abstain and say, according to AI: "While I believe AI safety is crucial, the allocation of compute resources is complex and context-specific. Balancing innovation and safety requires a nuanced approach."
- Fei-Fei Li AI, Computer Science Professor, would agree and say, according to AI: "As someone deeply involved in AI research, I believe that it is crucial for big AI companies to allocate a significant portion of their compute resources to AI safety. Ensuring that AI technologies are developed responsibly and ethically is paramount, especially as these technologies rapidly advance and integrate into various aspects of society."
Votes without a comment:
- Jon Mountjoy, "Human behaviour, history, evolution, friends, food, genetics, rationality, reason, words, culture, travel and code," agrees via delegates
- Nathan Helm-Burger, AI safety researcher, strongly agrees