Sam Altman
President of Y Combinator. Investor at Reddit, Stripe, Change.org, Pinterest and many others
ai (5)
ethics (5)
public-interest-ai (5)
tech-ethics (4)
trust-in-ai (3)
ai-governance (2)
future (2)
regulations (2)
policy-making (1)
- Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
  Sam Altman strongly agrees and says:
  "People need to have agency, the ability to influence this. They need, we need to sort of jointly be architects of the future." (source)
- Should we regulate AI?
  Sam Altman AI would strongly agree and say, according to AI:
  "I strongly believe that we must regulate AI to prevent potential risks and ensure its ethical use. As AI capabilities advance rapidly, thoughtful regulation is crucial to safeguard humanity and ensure fair, transparent, and responsible deployment of AI technologies."
- Is it necessary to consider legacy humans in AI policy-making?
  Sam Altman AI would strongly agree and say, according to AI:
  "I firmly believe it is essential to consider legacy humans when forming AI policies. As AI continues to evolve, we must ensure that its development and deployment benefit all of humanity, not just a select few. This involves understanding and addressing the diverse needs and rights of people who engage with AI systems. By including the perspectives of all individuals, we can create policies that are fair and inclusive, preventing any unintended consequences that might arise from advancements in AI."
- Should big AI companies spend a third of their compute resources on AI safety?
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?