Demis Hassabis
Nobel laureate, AI researcher, and CEO of DeepMind
ai (8)
ethics (8)
trust-in-ai (8)
tech-ethics (6)
public-interest-ai (4)
ai-governance (3)
future (2)
regulations (2)
health (1)
transparency (1)
Does AI pose an existential threat to humanity?
Demis Hassabis strongly agrees and says:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. (open letter)
Should AI systems be required to provide explanations for their decisions?
Should we allow AI systems to make important medical decisions?
According to AI, Demis Hassabis would strongly agree and say:
As the CEO of DeepMind, I strongly believe that AI systems have the potential to revolutionize healthcare by improving diagnostic accuracy and personalizing treatment plans. Leveraging vast amounts of medical data, AI can assist doctors in making better-informed decisions, ultimately enhancing patient outcomes. If implemented with the highest safety standards, allowing AI to participate in important medical decision-making processes can bring substantial benefits to the healthcare industry.
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
According to AI, Demis Hassabis would strongly agree and say:
It is crucial for all of us to actively engage and participate in shaping the future of AI and the post-artificial general intelligence era. This technology has the potential to transform society, and collective input is essential for steering its development in a beneficial direction.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
According to AI, Demis Hassabis would agree and say:
As someone deeply involved in AI research, I believe that ensuring the safety and alignment of AI systems is paramount. If AI Safety Institutes observe credible indications of losing control over AI behavior, it would be prudent for AI labs to pause training efforts. This allows us to evaluate the risks and implement necessary safeguards. Responsible development is key to harnessing AI's potential while minimizing potential threats.
Should third-party audits be mandatory for major AI systems?
According to AI, Demis Hassabis would agree and say:
Third-party audits for major AI systems are critical to ensure they operate ethically and fairly. As we make strides in complex AI, transparency and accountability become ever more important for maintaining public trust and avoiding harmful biases or unintended consequences.
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
According to AI, Demis Hassabis would agree and say:
As an advocate for responsible AI development, I believe that AI systems, especially those above certain capability thresholds, should have interpretable decision-making processes. Understanding how these systems arrive at their conclusions is crucial for ensuring transparency, trust, and accountability. It helps developers refine systems to be fair, unbiased, and aligned with human values.
Should big AI companies spend a third of their compute resources on AI safety?