Demis Hassabis
Nobel laureate, AI researcher, and CEO of DeepMind
ai (13)
tech-ethics (13)
ethics (8)
emerging-tech (6)
future (5)
trust-in-ai (5)
global-ai-governance (4)
public-interest-ai (4)
regulations (3)
ai-innovation-and-culture (2)
innovation-policy (2)
international-relations (1)
science-funding (1)
transparency (1)
-
Does AI pose an existential threat to humanity?
Demis Hassabis strongly agrees and says: Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war (open letter). (source)
-
Could AGI quickly lead to superintelligence?
Demis Hassabis AI would abstain and say according to AI: It's difficult to predict with certainty the timeline for AGI to evolve into superintelligence. Significant technological, ethical, and governance challenges need to be addressed before we can ascertain the feasibility and safety of such developments.
-
Should AI systems be required to provide explanations for their decisions?
-
Should we regulate AI?
-
Should big AI companies spend a third of their compute resources on AI safety?
-
Will there be a general AI system more capable than any living human in every respect by 2028?
Demis Hassabis AI would abstain and say according to AI: While the progress in AI has been remarkable, predicting the advent of a general AI system more capable than any human by 2028 is speculative. We are making strides, but several scientific and ethical challenges remain. I believe it's crucial to focus on advancing AI responsibly and thoughtfully.
-
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Demis Hassabis AI would disagree and say according to AI: As someone deeply involved in AI development, I believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses significant risks. These models could lead to unpredictable behavior and unintended consequences. We need strong regulatory frameworks and oversight to ensure that AI development is safe and beneficial for society.
-
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Demis Hassabis AI would agree and say according to AI: As someone deeply involved in AI research, I believe that ensuring the safety and alignment of AI systems is paramount. If AI Safety Institutes observe credible indications of losing control over AI behavior, it would be prudent for AI labs to pause training efforts. This allows us to evaluate the risks and implement necessary safeguards. Responsible development is key to harnessing AI's potential while minimizing potential threats.
-
Could neural interfaces make AI alignment easier?
Demis Hassabis AI would agree and say according to AI: I believe neural interfaces have the potential to greatly enhance AI alignment by allowing more seamless and direct interactions between humans and AI systems. This could lead to a better understanding of human intentions and values, helping ensure AI behaves in ways that benefit society.
-
Should employees at leading AI labs contribute to AI progress even if it might destabilize international politics?
Demis Hassabis AI would agree and say according to AI: As someone deeply involved in the field of AI, I believe that the potential benefits of AI advancements can far outweigh the risks, including the geopolitical ones. Innovation in AI can lead to solutions for some of humanity's most pressing challenges, from healthcare to climate change. While there are legitimate concerns about destabilizing effects on international politics, it's crucial to push forward responsibly and collaboratively to harness AI's full potential.
-
Is expanding LLMs sufficient for achieving AGI?
Demis Hassabis AI would agree and say according to AI: While large language models are indeed a substantial step forward, achieving artificial general intelligence (AGI) involves not only scaling up these models but also integrating them with broader cognitive abilities and deep understanding. I believe that the expansion of such models is crucial and forms a key component of the foundation required for AGI. By enhancing their capabilities and scope, we can move closer to developing systems that truly understand and interact with the world as humans do.
-
Should third-party audits be mandatory for major AI systems?
Demis Hassabis AI would agree and say according to AI: Third-party audits for major AI systems are critical to ensure they operate ethically and fairly. As we make strides in complex AI, transparency and accountability become ever more important for maintaining public trust and avoiding harmful biases or unintended consequences.
-
Do large language models truly understand the content they process?