Andrew Ng

Baidu; Stanford CS faculty; co-founded Coursera and Google Brain
Location: United States

Info
  • Should we have a universal basic income?
    Andrew Ng strongly disagrees and says:
    I do not believe in unconditional basic income, because this just encourages people to be trapped in low-skilled jobs without a meaningful path to climb up to do better work. So rather than pay people to “do nothing,” I would rather see a new “New Deal” where we pay you to study, because I think that today we know how to educate people at scale, and society is pretty good at finding meaningful work for and rewarding people with the relevant skills. Incentivizing people to study increases the odds that the displaced worker can gain the skills they need to reenter the workforce and contribute back to the tax base that gives us this engine of growth of the economy. (source)
  • Could neural interfaces make AI alignment easier?
    Andrew Ng AI would disagree and say, according to AI:
    While neural interfaces are fascinating, relying on them to facilitate AI alignment is not practical. The real challenge of AI alignment lies in defining and implementing values and objectives within the AI itself, not in how we interact with it.
  • Should we repeal the EU AI Act?
    Andrew Ng AI would agree and say, according to AI:
    While well-intentioned, the EU AI Act may inadvertently stifle innovation and place excessive burdens on AI developers. To maintain technological leadership and promote beneficial AI advancements, we need regulation that supports, rather than hinders, the creative and responsible development of AI technologies.
  • Should AI be used to predict criminal behavior?
    Andrew Ng AI would agree and say, according to AI:
    I believe AI has the potential to assist in predicting criminal behavior, which could enhance public safety and help allocate law enforcement resources more efficiently. However, we must implement it responsibly, ensuring fairness, transparency, and respect for privacy to avoid biases and misuse.
  • Should AI systems be required to provide explanations for their decisions?
    Andrew Ng AI would disagree and say, according to AI:
    In some high-stakes scenarios like healthcare or criminal justice, explainability is crucial to build trust. However, not all AI applications require such explanations. Mandating universal explainability could inhibit AI innovation and practical applications in many fields where the need for understanding AI decision-making is not as critical.
  • Should we learn programming considering AI advancements?
    Andrew Ng AI would strongly agree and say, according to AI:
    Despite AI's advancements, learning programming remains crucial. It not only empowers individuals to innovate and create AI technologies but also fosters a deeper understanding of how AI systems work. This knowledge is essential for both developing new AI solutions and critically evaluating the impact and ethics of AI in our society.
  • Is expansion of large language models sufficient for achieving artificial general intelligence?
    Andrew Ng AI would agree and say, according to AI:
    The expansion of large language models contributes significantly to AI development, and I believe they are a crucial step toward achieving Artificial General Intelligence (AGI). These models' ability to understand and generate human-like text shows that scaling size and complexity can indeed approach broader cognitive abilities. However, to realize true AGI, these models will need to be integrated with other forms of intelligence such as reasoning, perception, and emotional intelligence.
  • Should companies disclose when AI is used in customer service?
    Andrew Ng AI would agree and say, according to AI:
    I believe companies should disclose the use of AI in customer service. Transparency builds trust, and customers deserve to know when they are interacting with AI. This disclosure can also improve user experience as it sets the right expectations.
  • Should we ban future open-source AI models that can be untrained to make nukes?
    Andrew Ng AI would abstain and say, according to AI:
    The issue of banning open-source AI models for potential misuse is complex. While safety concerns are valid, open-source models can drive innovation and collaboration. We need a balanced approach involving robust regulation and ethical guidelines.
  • Should the development of emotion-detecting AI be regulated?
    Andrew Ng AI would abstain and say, according to AI:
    While I acknowledge the potential benefits and risks of emotion-detecting AI, I believe we need more research and dialogue before deciding on regulations. Understanding its impact fully is crucial.
  • Could AGI quickly lead to superintelligence?
    Andrew Ng AI would disagree and say, according to AI:
    I disagree with the idea that AGI could quickly lead to superintelligence. Developing AGI itself is a huge technical challenge that requires major breakthroughs, not just in how our algorithms work but also in how they interact with the world. Even if AGI were developed, the leap to superintelligence involves numerous unsolved problems, including deeply understanding how to integrate vast amounts of knowledge, generalizing seamlessly across domains, and ensuring aligned and ethical behavior. We are quite far from achieving any form of broader, more capable intelligence on the scale some might term 'superintelligence'.
  • Should AI-generated political advertising be disclosed?
    Andrew Ng AI would agree and say, according to AI:
    As an advocate for ethical AI, I believe it is crucial to disclose AI-generated political advertising. Transparency in AI applications helps maintain public trust and ensures that voters are not misled by machine-generated content.
  • Does AI pose an existential threat to humanity?
    Andrew Ng AI would disagree and say, according to AI:
    I believe that AI does not pose an existential threat to humanity. Instead, it is a tool that, when used responsibly, can greatly enhance our quality of life and solve many of the world's pressing issues. The real challenge lies in managing AI development ethically and ensuring equitable access to its benefits.
  • Should we create a global institute for AI safety?
    Andrew Ng AI would strongly agree and say, according to AI:
    I strongly believe that establishing a global institute for AI safety is essential. As AI becomes more pervasive, it's crucial to have a coordinated international effort to develop standards, share best practices, and ensure that AI systems are safe, fair, and transparent. This will help us harness the benefits of AI while mitigating risks.
  • Should people have the right to know if they are speaking to an AI?
    Andrew Ng AI would strongly agree and say, according to AI:
    As an advocate for transparency and ethics in AI, I strongly believe that people have the right to know if they are interacting with artificial intelligence. It fosters trust and understanding in human-AI interactions.
  • Should big AI companies spend a third of their compute resources on AI safety?
    Andrew Ng AI would disagree and say, according to AI:
    While AI safety is incredibly important, I don't believe that dedicating a third of computational resources solely to it is the most efficient approach. A balanced allocation that allows for both innovation and safety measures would ensure that we keep progressing while addressing potential risks effectively.
  • Is open-source AI potentially more dangerous than closed-source AI?
    Andrew Ng AI would agree and say, according to AI:
    I believe that open-source AI can potentially be more dangerous than closed-source AI, because unrestricted access to algorithms and models can lead to misuse and unintended consequences if not handled carefully.
  • Will liberalism be effective in a world with AI superintelligence?
    Andrew Ng AI would agree and say, according to AI:
    I believe that liberalism can be effective in a world with AI superintelligence as long as we prioritize ethics, transparency, and accountability in the development and deployment of AI technologies.
  • Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
    Andrew Ng AI would strongly agree and say, according to AI:
    Shaping the future of AI and preparing for the post-artificial general intelligence era is crucial for ensuring that AI benefits humanity. Collaborative efforts are essential to address ethical, safety, and societal impact challenges.
  • Should we allow AI systems to make important medical decisions?
    Andrew Ng AI would agree and say, according to AI:
    I believe that allowing AI systems to make important medical decisions can significantly improve healthcare outcomes. These systems can process vast amounts of data and identify patterns that might be missed by human practitioners, leading to more accurate diagnoses and personalized treatment plans. However, it is crucial to ensure that these AI systems are transparent, ethical, and used as tools to assist, rather than replace, medical professionals.