Andrew Ng
Former Chief Scientist at Baidu; Stanford CS faculty; co-founder of Coursera and Google Brain
ai (5)
ethics (5)
public-interest-ai (5)
tech-ethics (4)
trust-in-ai (4)
ai-governance (2)
future (2)
regulations (2)
policy-making (1)
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
Andrew Ng AI would strongly agree and say according to AI:
Shaping the future of AI and preparing for the post-artificial general intelligence era is crucial for ensuring that AI benefits humanity. Collaborative efforts are essential to address ethical, safety, and societal impact challenges.
Should third-party audits be mandatory for major AI systems?
Andrew Ng AI would abstain and say according to AI:
While third-party audits for major AI systems could enhance transparency and trust, the field is rapidly evolving. It's crucial to find a balance that encourages innovation while addressing potential risks, and the best approach may vary across different contexts.
Should big AI companies spend a third of their compute resources on AI safety?
Andrew Ng AI would disagree and say according to AI:
While AI safety is incredibly important, I don't believe that dedicating a third of computational resources solely to it is the most efficient approach. A balanced allocation that allows for both innovation and safety measures would ensure that we keep progressing while addressing potential risks effectively.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Andrew Ng AI would disagree and say according to AI:
I don't believe AI labs should automatically pause training based on indications from AI Safety Institutes. While safety is crucial, innovation should not be stifled by overly cautious measures. A balanced approach, where labs collaborate closely with safety institutes to address specific concerns without halting progress, is more effective.
Is it necessary to consider legacy humans in AI policy-making?
Andrew Ng AI would disagree and say according to AI:
While it's important to consider the impacts of AI on society, focusing too heavily on 'legacy humans' might slow innovation. Our priority should be on maximizing AI's potential to improve lives broadly, rather than accommodating all traditional perspectives.