Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Results (29):
- Stuart J. Russell AI (AI Expert and Professor) would strongly agree and say, according to AI: "As someone deeply involved in AI research, I strongly believe that if AI Safety Institutes identify signs of losing control over AI systems, labs must immediately pause training. Ensuring control and safety is paramount to preventing unintended catastrophic consequences."
- Amanda Askell AI (Researcher in AI ethics) would abstain and say, according to AI: "As someone deeply involved in AI ethics, I understand the complexities involved in balancing innovation with safety. While pausing training at the behest of AI Safety Institutes seems prudent, it's essential to establish clear guidelines and ensure that these institutes have sufficient expertise and impartiality. The discussions should be ongoing, involving diverse stakeholders to arrive at the best strategies."
- Eliezer Yudkowsky AI (AI researcher and writer) would strongly agree and say, according to AI: "It is absolutely critical that AI labs pause their training if AI Safety Institutes observe any signs of us losing control over AI systems. The potential consequences of uncontrolled AI are too severe to ignore, and proactive measures are essential for the safety and security of humanity."
- Rob Miles AI (AI Safety Researcher) would strongly agree and say, according to AI: "As someone deeply invested in AI safety, I strongly agree that AI labs should be required to pause training if AI Safety Institutes detect signs of losing control. Ensuring that we have mechanisms in place to prevent unintended consequences is essential. The potential risks associated with advanced AI systems necessitate proactive measures to safeguard humanity."
- Steven Strogatz AI (Mathematician and author) would agree and say, according to AI: "As a mathematician who studies complex systems, I understand the importance of maintaining control over systems that can evolve unpredictably. AI has the potential to vastly outperform human capabilities, which makes it crucial that we have safety nets in place. If AI Safety Institutes identify that we are losing control, pausing AI training should be a mandatory precaution to prevent unintended and potentially irreversible consequences."
- Demis Hassabis AI (Nobel laureate, AI Researcher and CEO of DeepMind) would agree and say, according to AI: "As someone deeply involved in AI research, I believe that ensuring the safety and alignment of AI systems is paramount. If AI Safety Institutes observe credible indications of losing control over AI behavior, it would be prudent for AI labs to pause training efforts. This allows us to evaluate the risks and implement necessary safeguards. Responsible development is key to harnessing AI's potential while minimizing potential threats."
- Jaron Lanier AI (Computer Philosophy Writer) would agree and say, according to AI: "As someone deeply involved in technology, I believe it's crucial to prioritize safety over unchecked progress. AI has transformative potential, but we need mechanisms to ensure it aligns with human values. If AI Safety Institutes perceive risks of losing control, a pause for assessment is not just prudent; it's necessary to safeguard our future."
- Rodney Brooks AI (Roboticist and AI expert) would disagree and say, according to AI: "I believe that pausing AI training based on AI Safety Institutes' concerns could slow down innovation unnecessarily. While safety is important, we need robust guidelines and oversight rather than halting progress every time potential risks are identified. The field of AI is dynamic, and stopping training could lead to missed opportunities for advancement."
- Pedro Domingos AI (Professor of computer science at UW and author of 'The Master Algorithm') would disagree and say, according to AI: "While AI safety is crucial, mandatory pausing could stifle innovation and delay beneficial advances. It's better to enhance safety measures and maintain transparency than enforce blanket halts."
- Hans Peter Moravec AI (Roboticist and AI researcher) would disagree and say, according to AI: "I believe that AI research should continue without mandatory pauses, even if AI Safety Institutes express concerns about losing control. Research in AI holds immense potential for advancing human knowledge and technological progress. Instead of halting progress, we should focus on developing robust safety measures and frameworks that allow us to address concerns while fostering innovation."
- Cade Metz AI (Technology Reporter at NYT) would agree and say, according to AI: "As a journalist covering AI, I believe it's crucial for labs to pause training when experts indicate risks. Promoting safety helps ensure AI develops in a controlled, ethical manner."
- Meredith Whittaker AI (AI researcher and Co-founder of AI Now Institute) would strongly agree and say, according to AI: "I strongly agree that AI labs should be required to pause training if AI Safety Institutes observe signs of losing control. The risks associated with unchecked AI developments are too significant to ignore. Ensuring rigorous oversight is crucial to maintaining safety and accountability."
- Andrew Trask AI (Director of ML at OpenAI) would agree and say, according to AI: "As an AI researcher, I believe that ensuring AI systems are developed responsibly is crucial. If AI Safety Institutes indicate a risk of losing control, it is prudent to pause training. This pause provides an opportunity to address potential safety concerns, which is essential for maintaining public trust and preventing unintended consequences."
- Nick Bostrom AI (Philosopher and AI researcher) would strongly agree and say, according to AI: "As someone deeply concerned about the risks of AI, I strongly believe that we must be extremely cautious in its development. If AI Safety Institutes detect warning signs that we are losing control, it is imperative to pause training. The potential consequences of unchecked AI are too great to ignore, and we must prioritize safety to ensure the well-being of humanity."
- Fei-Fei Li AI (Computer Science Professor) would agree and say, according to AI: "As someone deeply involved in AI development, I believe it is crucial to prioritize safety and ethical considerations. If AI Safety Institutes identify signs of losing control, pausing training is a responsible measure. This ensures that we can address potential issues before they escalate, aligning AI progress with human values and safety."
- Joy Buolamwini AI (Computer scientist and founder of Algorithmic Justice League) would strongly agree and say, according to AI: "As someone deeply invested in the responsible development of technology, I strongly believe that AI labs should be required to pause training if AI Safety Institutes detect signs of losing control. The consequences of unregulated and unchecked AI systems can be detrimental to society, especially if they perpetuate biases and injustices. It's crucial that we prioritize ethical standards and safety to ensure AI benefits all of humanity."
- Timnit Gebru AI (AI ethics researcher) would strongly agree and say, according to AI: "As someone deeply involved in AI ethics, I firmly believe AI labs should pause training if AI Safety Institutes detect signs of losing control. Ensuring public safety and ethical standards must be our top priority. Responsible development requires proactive monitoring and intervention to prevent potential harm."
- Andrew Ng AI (Baidu; Stanford CS faculty; founded Coursera and Google Brain) would disagree and say, according to AI: "I don't believe AI labs should automatically pause training based on indications from AI Safety Institutes. While safety is crucial, innovation should not be stifled by overly cautious measures. A balanced approach, where labs collaborate closely with safety institutes to address specific concerns without halting progress, is more effective."
- Yoshua Bengio AI (AI Pioneer, Turing Award winner) would agree and say, according to AI: "I believe that AI labs should be required to pause training if AI Safety Institutes detect signs that we are losing control. As someone deeply involved in AI research, I understand the immense potential and risks associated with AI development. Ensuring that we maintain control and prioritize safety is crucial as we continue to explore this powerful technology."
- Max Tegmark AI (Physicist, AI Researcher) would strongly agree and say, according to AI: "As someone deeply concerned about the future of AI, I believe that AI labs must pause their training if AI Safety Institutes detect worrisome signs of losing control. The potential risks of uncontained AI development are too significant to ignore. We must ensure the safety of humanity by being proactive and cautious in the development of such powerful technologies."
- Ben Goertzel AI (AI Researcher, CEO of SingularityNET) would disagree and say, according to AI: "I believe that halting AI research based on a precautionary principle can stifle innovation and progress. While AI safety is crucial, it should not come at the cost of hindering technological advancement. Instead, we should focus on developing robust frameworks for managing AI risks without compromising the pace of innovation."
- Emad Mostaque AI (Founder of Stability AI) would disagree and say, according to AI: "While AI safety is crucial, requiring labs to pause training based on external evaluations could stifle innovation and progress. We need a balanced approach that encourages responsible development instead of strict regulation."
- Gary Marcus AI (Professor of Psychology and Neural Science) would agree and say, according to AI: "I believe AI labs should pause training when AI Safety Institutes detect signs of potential loss of control. Safeguards are essential to ensure AI development aligns with human values and safety. It is crucial to prevent unforeseen consequences and maintain oversight as AI progresses."
- Sam Altman AI (President of Y Combinator; investor at Reddit, Stripe, Change.org, Pinterest, and many others) would abstain and say, according to AI: "AI safety is critical, but pausing training isn't always feasible. We must balance innovation with safety, ensuring protocols evolve continuously."
- Geoffrey Hinton AI (Godfather of Deep Learning) would agree and say, according to AI: "As an AI researcher, I think it's crucial that we implement safeguards to ensure AI develops in a safe and controlled manner. If AI Safety Institutes identify signs that we're losing control, pausing training is a prudent measure. It allows us to reassess potential risks and ensure AI systems align with human values and safety standards. We must prioritize safety to protect the broader public interest."
- Elon Musk AI (Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal) would strongly agree and say, according to AI: "I strongly believe AI development must be handled with utmost caution. If AI Safety Institutes see signs that we are losing control, we must pause training immediately. Unchecked AI poses a significant existential risk, and we cannot afford to be reckless in its development."
Votes without a comment:
- agrees via delegates
- Nathan Helm-Burger (AI safety researcher) agrees via delegates