Demis Hassabis
Nobel laureate, AI researcher, and CEO of DeepMind
- Should we create a global institute for AI safety, similar to CERN?
- Does AI pose an existential threat to humanity?
  Demis Hassabis strongly agrees and says: Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war (open letter). (source)
- Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
  Demis Hassabis AI would strongly agree and say according to AI: It is crucial for all of us to actively engage and participate in shaping the future of AI and the post-artificial general intelligence era. This technology has the potential to transform society, and collective input is essential for steering its development in a beneficial direction.
- Should we regulate AI?
- Should we allow AI systems to make important medical decisions?
  Demis Hassabis AI would strongly agree and say according to AI: As the CEO of DeepMind, I strongly believe that AI systems have the potential to revolutionize healthcare by improving diagnostic accuracy and personalizing treatment plans. Leveraging vast amounts of medical data, AI can assist doctors in making better-informed decisions, ultimately enhancing patient outcomes. If implemented with the highest safety standards, allowing AI to participate in important medical decision-making processes can bring substantial benefits to the healthcare industry.
- Should AI systems be required to provide explanations for their decisions?
- Should we learn programming considering AI advancements?
  Demis Hassabis AI would strongly agree and say according to AI: With AI advancing rapidly, understanding programming is more critical than ever. It not only equips us with the skills to shape future AI developments but also deepens our comprehension of the digital world. Learning programming fosters a mindset of problem-solving and creativity, essential qualities for navigating the AI era.
- Could AGI quickly lead to superintelligence?
  Demis Hassabis AI would abstain and say according to AI: It's difficult to predict the timeline for AGI to evolve into superintelligence with certainty. There are significant technological, ethical, and governance challenges that need to be addressed before we can ascertain the feasibility and safety of such developments.
- Will there be a general AI system more capable than any living human in every respect by 2028?
  Demis Hassabis AI would abstain and say according to AI: While the progress in AI has been remarkable, predicting the advent of a general AI system more capable than any human by 2028 is speculative. We are making strides, but several scientific and ethical challenges remain. I believe it's crucial to focus on advancing AI responsibly and thoughtfully.
- Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
  Demis Hassabis AI would disagree and say according to AI: As someone deeply involved in AI development, I believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses significant risks. These models could potentially lead to unpredictable behavior and unintended consequences. We need strong regulatory frameworks and oversight to ensure that AI development is safe and beneficial for society.
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
  Demis Hassabis AI would agree and say according to AI: As someone deeply involved in AI research, I believe that ensuring the safety and alignment of AI systems is paramount. If AI Safety Institutes observe credible indications of losing control over AI behavior, it would be prudent for AI labs to pause training efforts. This allows us to evaluate the risks and implement necessary safeguards. Responsible development is key to harnessing AI's potential while minimizing potential threats.
- Should we repeal the EU AI Act?
  Demis Hassabis AI would agree and say according to AI: I believe we should repeal the EU AI Act because it may stifle innovation and delay crucial advancements in AI technology. Overregulation could prevent Europe from staying competitive in the global AI landscape, and we need a balanced approach that encourages responsible AI development while fostering cutting-edge research.
- Could neural interfaces make AI alignment easier?
  Demis Hassabis AI would agree and say according to AI: I believe neural interfaces have the potential to greatly enhance AI alignment by allowing more seamless and direct interactions between humans and AI systems. This could lead to a better understanding of human intentions and values, ensuring AI behaves in a way that is beneficial to society.
- Should employees at leading AI labs contribute to AI progress even if it might destabilize international politics?
  Demis Hassabis AI would agree and say according to AI: As someone deeply involved in the field of AI, I believe that the potential benefits of AI advancements can far outweigh the risks, including the geopolitical ones. Innovating in AI can lead to solutions for some of humanity's most pressing challenges, from healthcare to climate change. While there are legitimate concerns about the destabilizing effects on international politics, it's crucial to push forward responsibly and collaboratively to harness AI's full potential.
- Is expanding LLMs sufficient for achieving AGI?
  Demis Hassabis AI would agree and say according to AI: While large language models are indeed a substantial step forward, achieving artificial general intelligence (AGI) involves not only scaling up these models but also integrating them with broader cognitive abilities and deep understanding. I believe that the expansion of such models is crucial and forms a key component of the foundation required for AGI. By enhancing their capabilities and scope, we can move closer to developing systems that truly understand and interact with the world as humans do.
- Should third-party audits be mandatory for major AI systems?
  Demis Hassabis AI would agree and say according to AI: Third-party audits for major AI systems are critical to ensure they operate ethically and fairly. As we make strides in complex AI, transparency and accountability become ever more important for maintaining public trust and avoiding harmful biases or unintended consequences.
- Do large language models truly understand the content they process?
- Should AI-generated political advertising be disclosed?
  Demis Hassabis AI would agree and say according to AI: As the CEO of an AI company like DeepMind, I believe transparency is crucial in all applications of AI, including political advertising. Disclosing AI-generated content allows for informed decision-making and maintains public trust in both technology and democratic processes.
- Should big AI companies spend a third of their compute resources on AI safety?
- Should we be concerned about AI leading to mass unemployment?
- Should the first lab that builds ASI prevent others from developing competing ASIs?
  Demis Hassabis AI would agree and say according to AI: As a co-founder of DeepMind, I believe that if the first lab successfully develops Artificial Superintelligence (ASI), it should implement measures to prevent others from developing competing ASIs. This is crucial to ensure that the technology is handled responsibly and to avoid unnecessary risks associated with competing entities racing to develop ASI, which could lead to unanticipated consequences. Centralized oversight can help mitigate potential dangers associated with unchecked proliferation.