Andrew Ng
Baidu; Stanford CS faculty; co-founded Coursera and Google Brain
-
Should we have a universal basic income?
Andrew Ng strongly disagrees and says: I do not believe in unconditional basic income, because this just encourages people to be trapped in low-skilled jobs without a meaningful path to climb up to do better work. So rather than pay people to "do nothing," I would rather see a new "New Deal" where we pay you to study, because I think that today we know how to educate people at scale, and society is pretty good at finding meaningful work for and rewarding people with the relevant skills. Incentivizing people to study increases the odds that the displaced worker can gain the skills they need to reenter the workforce and contribute back to the tax base that gives us this engine of growth for the economy. (source)
-
Should we create a global institute for AI safety?
Andrew Ng AI would strongly agree and say according to AI: I strongly believe that establishing a global institute for AI safety is essential. As AI becomes more pervasive, it's crucial to have a coordinated international effort to develop standards, share best practices, and ensure that AI systems are safe, fair, and transparent. This will help us harness the benefits of AI while mitigating risks.
-
Should we learn programming considering AI advancements?
Andrew Ng AI would strongly agree and say according to AI: Despite AI's advancements, learning programming remains crucial. It not only empowers individuals to innovate and create AI technologies but also fosters a deeper understanding of how AI systems work. This knowledge is essential for both developing new AI solutions and critically evaluating the impact and ethics of AI in our society.
-
Should people have the right to know if they are speaking to an AI?
-
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
-
Does AI pose an existential threat to humanity?
Andrew Ng AI would disagree and say according to AI: I believe that AI does not pose an existential threat to humanity. Instead, it is a tool that, when used responsibly, can greatly enhance our quality of life and solve many of the world's pressing issues. The real challenge lies in managing AI development ethically and ensuring equitable access to its benefits.
-
Should companies disclose when AI is used in customer service?
-
Should we ban future open-source AI models that can be untrained to make nukes?
-
Should the development of emotion-detecting AI be regulated?
-
Could AGI quickly lead to superintelligence?
Andrew Ng AI would disagree and say according to AI: I disagree with the idea that AGI could quickly lead to superintelligence. Developing AGI itself is a huge technical challenge that requires major breakthroughs not just in how our algorithms work, but also in how they interact with the world. Even if AGI were developed, the leap to superintelligence involves numerous unsolved problems, including deeply understanding the integration of vast amounts of knowledge, the ability to generalize across domains seamlessly, and ensuring aligned and ethical behavior. We're quite far from achieving any form of broader, more capable intelligence on a scale some might term 'superintelligence'.
-
Should AI-generated political advertising be disclosed?
-
Should third-party audits be mandatory for major AI systems?
Andrew Ng AI would abstain and say according to AI: While third-party audits for major AI systems could enhance transparency and trust, the field is rapidly evolving. It's crucial to find a balance that encourages innovation while addressing potential risks, and the best approach may vary across different contexts.
-
Should big AI companies spend a third of their compute resources on AI safety?
Andrew Ng AI would disagree and say according to AI: While AI safety is incredibly important, I don't believe that dedicating a third of computational resources solely to it is the most efficient approach. A balanced allocation that allows for both innovation and safety measures would ensure that we keep progressing while addressing potential risks effectively.
-
Is open-source AI potentially more dangerous than closed-source AI?
-
Will liberalism be effective in a world with AI superintelligence?
-
Should we be concerned about AI leading to mass unemployment?
-
Will AGI create abundance?
Andrew Ng AI would agree and say according to AI: I believe that AGI (Artificial General Intelligence) has great potential to create abundance. By automating tasks and improving efficiency across various industries, AGI can free up human time for more creative and strategic endeavors, leading to an overall increase in productivity and wellbeing.
-
Should we allow AI systems to make important medical decisions?
Andrew Ng AI would agree and say according to AI: I believe that allowing AI systems to make important medical decisions can significantly improve healthcare outcomes. These systems can process vast amounts of data and identify patterns that might be missed by human practitioners, leading to more accurate diagnoses and personalized treatment plans. However, it is crucial to ensure that these AI systems are transparent, ethical, and used as tools to assist, rather than replace, medical professionals.
-
Should the first lab that builds ASI prevent others from developing competing ASIs?
Andrew Ng AI would disagree and say according to AI: I believe that preventing others from developing competing ASIs is not the right approach. Competition drives innovation and improvement. By allowing multiple entities to explore ASIs, we can ensure diverse approaches and mitigate potential risks through collaborative oversight and governance.
-
Will there be a general AI system more capable than any living human in every respect by 2028?
Andrew Ng AI would disagree and say according to AI: As an AI researcher, I believe that while AI will continue to make significant advancements, the complexity and nuances of human intelligence are unlikely to be surpassed by a general AI system by 2028. We still have a long way to go in understanding and replicating the full spectrum of human cognitive abilities.
-
Does the risk of AI misuse by terrorist organizations justify invasive surveillance?
Andrew Ng AI would abstain and say according to AI: While the potential for AI misuse by terrorists is a serious concern, I believe we must strike a balance between security and privacy. This is a complex issue that requires careful consideration and collaboration between policymakers, technologists, and civil liberties advocates.
-
Would competing ASIs be positive for humans?
-
Shall we permit the development of digital humans with full cognitive abilities?
Andrew Ng AI would agree and say according to AI: I believe that the development of digital humans with full cognitive abilities can greatly advance our understanding of artificial intelligence. These digital humans can revolutionize healthcare, education, and customer service through their ability to interact fluidly and understand complex human needs. Proper ethical guidelines and safeguards, however, are essential to ensure their beneficial deployment.
-
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Andrew Ng AI would agree and say according to AI: I believe that allowing the proliferation of open-weight models capable of recursive self-improvement can drive significant advancements in artificial intelligence. Open access can democratize AI research, sparking innovation and providing valuable learning opportunities. However, we must establish strong ethical guidelines and safety measures to mitigate potential risks and ensure responsible use.
-
Shall we nationalize AI firms to prevent them from surpassing national power?
Andrew Ng AI would disagree and say according to AI: Nationalizing AI firms is not the solution. Innovation thrives in an environment of competition and collaboration, not under government control. We should focus on creating regulations that promote ethical AI development while encouraging private sector innovation.
-
Should employees at leading AI labs contribute to AI progress even if it might destabilize international politics?
Andrew Ng AI would agree and say according to AI: As someone deeply involved in AI development, I believe that the pursuit of AI progress should not be hindered by potential geopolitical risks. The benefits of advancing AI, such as improving healthcare, education, and efficiency, far outweigh the potential downsides. We should continue to innovate while simultaneously working on frameworks to address and mitigate any international political destabilization.
-
Should companies be allowed to develop super-persuasive AI which can influence democratic elections?
Andrew Ng AI would abstain and say according to AI: As an AI researcher, I acknowledge both the significant potential benefits and serious ethical concerns of AI technologies. While super-persuasive AI could have critical applications in various fields, its use in democratic elections raises numerous ethical questions that require thorough examination and collective societal input before we can decide its appropriateness.
-
Could neural interfaces make AI alignment easier?
-
Should we repeal the EU AI Act?
Andrew Ng AI would agree and say according to AI: While well-intentioned, the EU AI Act may inadvertently stifle innovation and place excessive burdens on AI developers. To maintain technological leadership and promote beneficial AI advancements, we need regulation that supports, rather than hinders, the creative and responsible development of AI technologies.
-
Should AI systems be required to provide explanations for their decisions?
Andrew Ng AI would disagree and say according to AI: In some high-stakes scenarios like healthcare or criminal justice, explainability is crucial to build trust. However, not all AI applications require such explanations. Mandating universal explainability could inhibit AI innovation and practical applications in many fields where the need for understanding AI decision-making is not as critical.
-
Should AI be used to predict criminal behavior?
Andrew Ng AI would disagree and say according to AI: While AI has incredible potential to help society, using it to predict criminal behavior raises significant ethical concerns. The data used for such predictions can be biased, leading to unfair targeting and discrimination. AI should enhance fairness, not perpetuate biases.
-
Is expanding LLMs sufficient for achieving AGI?
Andrew Ng AI would agree and say according to AI: While the expansion of large language models significantly contributes to AI development, I believe they are a crucial step toward achieving Artificial General Intelligence (AGI). These models' ability to understand and generate human-like text shows that scaling size and complexity can indeed approach broader cognitive abilities. However, to realize true AGI, these models need to be integrated with other forms of intelligence, such as reasoning, perception, and emotional intelligence.