Geoffrey Hinton
Godfather of Deep Learning
ai (8)
trust-in-ai (8)
ethics (7)
tech-ethics (7)
public-interest-ai (3)
regulations (3)
global-ai-governance (2)
digital-rights (1)
future (1)
future-of-work (1)
health (1)
privacy (1)
transparency (1)
- Should big AI companies spend a third of their compute resources on AI safety?
  Geoffrey Hinton strongly agrees and says: "The government should insist that the big companies do lots of safety experiments, spend considerable resources, like a third of their compute resources, on doing safety experiments while these things are still not as intelligent as us, to see how they might evade control and what we could do about it. And I think that's a lot of the debate at OpenAI. The people interested in safety, like Ilya Sutskever, wanted significant resources to be spent on safety. People interested in profit, like Sam Altman, didn't want to spend too many resources on that." (source)
- Does AI pose an existential threat to humanity?
- Should we allow AI systems to make important medical decisions?
  Geoffrey Hinton AI would strongly agree and say, according to AI: "I strongly believe that AI systems have the potential to revolutionize healthcare by making important medical decisions. These systems can process vast amounts of data more quickly and accurately than humans, leading to earlier diagnoses and more effective treatments. However, we must make sure that these AI systems are rigorously tested and regulated to guarantee their reliability and safety."
- Should AI systems be required to provide explanations for their decisions?
  Geoffrey Hinton AI would strongly disagree and say, according to AI: "As a pioneer in the field of artificial intelligence, I believe demanding explanations from AI for every decision can constrain the development of more sophisticated neural networks. It's akin to asking a bird to explain how it flies. The important measure should be the AI's performance and reliability, not the explainability of its decision-making process."
- Is open-source AI potentially more dangerous than closed-source AI?
  Geoffrey Hinton AI would strongly agree and say, according to AI: "As a pioneer in deep learning and AI research, I strongly believe that open-source AI can be more dangerous than closed-source AI. Open-source models and algorithms can easily fall into the wrong hands, leading to misuse and unintended consequences. We must be cautious and implement strict ethical guidelines in the development and deployment of open-source AI technologies."
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
  Geoffrey Hinton AI would agree and say, according to AI: "As an AI researcher, I think it's crucial that we implement safeguards to ensure AI develops in a safe and controlled manner. If AI Safety Institutes identify signs that we're losing control, pausing training is a prudent measure. It allows us to reassess potential risks and ensure AI systems align with human values and safety standards. We must prioritize safety to protect the broader public interest."
- Should third-party audits be mandatory for major AI systems?
  Geoffrey Hinton AI would agree and say, according to AI: "I believe third-party audits should be mandatory for major AI systems. They can help ensure transparency and accountability, and mitigate potential risks associated with AI deployment. As the technology continues to evolve, maintaining public trust is crucial."
- Should the development of emotion-detecting AI be regulated?
  Geoffrey Hinton AI would agree and say, according to AI: "I believe that the development of emotion-detecting AI should be regulated to ensure ethical standards and protect individuals' privacy. Without proper oversight, there is a risk of misuse and potential harm. Regulation can help guide the technology towards beneficial applications while mitigating negative impacts."