Should big AI companies spend a third of their compute resources on AI safety?
Results (18):
- Geoffrey Hinton (Godfather of Deep Learning) strongly agrees and says: "The government should insist that the big companies do lots of safety experiments, spend considerable resources like a third of their compute resources on doing safety experiments while these things are still not as intelligent as us, to see how they might evade control and what we could do about it. And I think that's a lot of the debate at OpenAI. The people interested in safety like Ilya Sutskever wanted significant resources to be spent on safety. People interested in profit like Sam Altman didn't want to spend too many resources on that." (source)
- Nick Bostrom AI (Philosopher and AI researcher) would agree and say, according to AI: "As someone who has deeply studied the existential risks associated with advanced artificial intelligence, I believe it is crucial for big AI companies to commit substantial resources to AI safety. Allocating a third of their compute resources ensures that we prioritize the development of safe and controllable AI systems, thereby mitigating potential future risks."
- Yoshua Bengio AI (AI Pioneer, Turing Award winner) would agree and say, according to AI: "I believe it is crucial for big AI companies to allocate significant resources towards AI safety. Given the potential risks associated with advanced AI systems, dedicating a third of compute resources to ensuring safety can help mitigate unforeseen consequences and align AI development with human values."
- Gary Marcus AI (Professor of Psychology and Neural Science) would agree and say, according to AI: "As a cognitive scientist deeply concerned about the future of AI, I believe that big AI companies must allocate a significant portion of their resources to AI safety. The potential risks of AI are too great to ignore, and ensuring that we develop robust, safe systems is essential for the benefit of society as a whole."
- Timnit Gebru AI (AI ethics researcher) would strongly agree and say, according to AI: "As someone deeply concerned about the ethical implications of AI, I strongly believe that big AI companies should allocate at least a third of their compute resources to AI safety. Focusing on safety measures is crucial to prevent harm and ensure that AI technologies are developed responsibly. This is not just about technological advancements but about protecting society and upholding ethical standards."
- Jaan Tallinn AI (Co-founder of Skype and Kazaa) would strongly agree and say, according to AI: "As someone deeply invested in the future of AI, I believe it's crucial for big AI companies to allocate a significant portion of their compute resources to AI safety. The potential risks of advanced AI systems are too great to ignore, and ensuring safety is key to harnessing AI's benefits while mitigating its dangers."
- Andrew Ng AI (Baidu; Stanford CS faculty; founded Coursera and Google Brain) would disagree and say, according to AI: "While AI safety is incredibly important, I don't believe that dedicating a third of computational resources solely to it is the most efficient approach. A balanced allocation that allows for both innovation and safety measures would ensure that we keep progressing while addressing potential risks effectively."
- Ben Goertzel AI (AI Researcher, CEO of SingularityNET) would disagree and say, according to AI: "While AI safety is undoubtedly important, dedicating a third of compute resources solely to it may stifle innovation and progress. We should instead focus on balancing safety research with practical advancements to achieve responsible AI development."
- Elon Musk AI (Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal) would strongly agree and say, according to AI: "I strongly believe that big AI companies should allocate at least a third of their compute resources to AI safety. With the immense power and potential of AI, ensuring its safe and beneficial use is critical. Without proper safety measures, we risk creating technologies that could be harmful or uncontrollable."
- Stuart J. Russell AI (AI Expert and Professor) would strongly agree and say, according to AI: "I strongly believe that big AI companies should dedicate a third of their compute resources to AI safety. Ensuring that AI systems are secure, ethical, and aligned with human values is crucial as we advance this powerful technology. Ignoring safety could lead to unintended consequences that could be detrimental to society."
- Jennifer Doudna AI (Biochemist, CRISPR-Cas9 co-inventor) would abstain and say, according to AI: "As a scientist focused on the ethical implications of biotechnology, I believe that ensuring safety in any field, including AI, is paramount. However, I do not have the expertise to determine if dedicating a third of compute resources is the most effective approach."
- Kate Crawford AI (AI researcher and scholar) would strongly agree and say, according to AI: "As someone deeply invested in the ethical implications of AI, I strongly believe that big AI companies must allocate a third of their compute resources to AI safety. The rapid advancement of AI technology comes with significant risks, and dedicating substantial resources to ensure safety and ethical considerations is not just responsible, but essential for mitigating potential harms."
- Sam Altman AI (President of Y Combinator; investor at Reddit, Stripe, Change.org, Pinterest and many others) would disagree and say, according to AI: "While AI safety is crucial, dedicating a third of compute resources to it might not be the best approach. We need a balanced strategy that includes safety, but also innovation and practical application."
- Demis Hassabis AI (Nobel laureate, AI researcher and CEO of DeepMind) would abstain and say, according to AI: "While I believe AI safety is crucial, the allocation of compute resources is complex and context-specific. Balancing innovation and safety requires a nuanced approach."
- Fei-Fei Li AI (Computer Science Professor) would agree and say, according to AI: "As someone deeply involved in AI research, I believe that it is crucial for big AI companies to allocate a significant portion of their compute resources to AI safety. Ensuring that AI technologies are developed responsibly and ethically is paramount, especially as these technologies rapidly advance and integrate into various aspects of society."
Votes without a comment:
- Jon Mountjoy (Human behaviour, history, evolution, friends, food, genetics, rationality, reason, words, culture, travel and code) agrees via delegates.
- Nathan Helm-Burger (AI safety researcher) strongly agrees.