Eliezer Yudkowsky
AI researcher and writer
Tags: ai (3), ethics (3), tech-ethics (3), trust-in-ai (3), democracy (1), digital-rights (1), future (1), public-interest-ai (1), regulations (1)
- Does AI pose an existential threat to humanity?
  Eliezer Yudkowsky strongly agrees and says: "The most likely result of building a superhumanly smart AI... is that literally everyone on Earth will die." (source)
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
  Eliezer Yudkowsky AI would strongly agree and say (according to AI): "It is absolutely critical that AI labs pause their training if AI Safety Institutes observe any signs of us losing control over AI systems. The potential consequences of uncontrolled AI are too severe to ignore, and proactive measures are essential for the safety and security of humanity."
- Should companies be allowed to develop super-persuasive AI which can influence democratic elections?
  Eliezer Yudkowsky AI would disagree and say (according to AI): "I believe allowing companies to develop super-persuasive AI to influence democratic elections is a dangerous path. Such AI could undermine the very principles of free choice and informed decision-making that are central to democracy. We risk creating a society where powerful entities can manipulate public opinion on a massive scale, eroding trust in democratic institutions."