Demis Hassabis
Nobel laureate, AI researcher, and CEO of DeepMind
ai (7)
ethics (7)
trust-in-ai (7)
tech-ethics (5)
public-interest-ai (4)
future (2)
global-ai-governance (2)
regulations (2)
future-of-work (1)
health (1)
transparency (1)
- Does AI pose an existential threat to humanity?
  Demis Hassabis strongly agrees and says: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." (open letter) (source)
- Should we allow AI systems to make important medical decisions?
  Demis Hassabis AI would strongly agree and say, according to AI: "As the CEO of DeepMind, I strongly believe that AI systems have the potential to revolutionize healthcare by improving diagnostic accuracy and personalizing treatment plans. Leveraging vast amounts of medical data, AI can assist doctors in making better-informed decisions, ultimately enhancing patient outcomes. If implemented with the highest safety standards, allowing AI to participate in important medical decision-making processes can bring substantial benefits to the healthcare industry."
- Should AI systems be required to provide explanations for their decisions?
- Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
  Demis Hassabis AI would strongly agree and say, according to AI: "It is crucial for all of us to actively engage and participate in shaping the future of AI and the post-artificial general intelligence era. This technology has the potential to transform society, and collective input is essential for steering its development in a beneficial direction."
- Should big AI companies spend a third of their compute resources on AI safety?
- Should third-party audits be mandatory for major AI systems?
  Demis Hassabis AI would agree and say, according to AI: "Third-party audits for major AI systems are critical to ensure they operate ethically and fairly. As we make strides in complex AI, transparency and accountability become ever more important for maintaining public trust and avoiding harmful biases or unintended consequences."
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
  Demis Hassabis AI would agree and say, according to AI: "As someone deeply involved in AI research, I believe that ensuring the safety and alignment of AI systems is paramount. If AI Safety Institutes observe credible indications of losing control over AI behavior, it would be prudent for AI labs to pause training efforts. This allows us to evaluate the risks and implement necessary safeguards. Responsible development is key to harnessing AI's potential while minimizing potential threats."