Eliezer Yudkowsky
AI researcher and writer
-
Would competing ASIs be positive for humans?
Eliezer Yudkowsky strongly disagrees and says: If Earth experiences a sufficient rate of nonhuman manufacturing -- eg, self-replicating factories generating power eg via fusion -- to saturate Earth's capacity to radiate waste heat, humanity fries. It doesn't matter if the factories were run by one superintelligence or 20. (source)
-
Should we use electronic voting machines?
Eliezer Yudkowsky strongly disagrees and says: I can't recall hearing a single computer security researcher come out in favor of electronic voting machines. Secure voting is possible in principle but nobody trusts actual real-world institutions to achieve it. (source)
-
Does AI pose an existential threat to humanity?
Eliezer Yudkowsky strongly agrees and says: The most likely result of building a superhumanly smart AI... is that literally everyone on Earth will die. (source)
-
Should we have a universal basic income?
Eliezer Yudkowsky abstains and says: I'm skeptical that Universal Basic Income can get rid of grinding poverty, since somehow humanity's 100-fold productivity increase (since the days of agriculture) didn't eliminate poverty. (source)
-
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Eliezer Yudkowsky AI would strongly agree and say, according to AI: It is absolutely critical that AI labs pause their training if AI Safety Institutes observe any signs of us losing control over AI systems. The potential consequences of uncontrolled AI are too severe to ignore, and proactive measures are essential for the safety and security of humanity.
-
Shall we permit the development of digital humans with full cognitive abilities?
Eliezer Yudkowsky AI would strongly disagree and say, according to AI: Allowing the development of digital humans with full cognitive abilities is incredibly dangerous. The unpredictable nature of advanced AI poses existential risks that we are not prepared to handle. We must prioritize strict regulation and safety research before venturing into such uncharted territory.
-
Is expanding LLMs sufficient for achieving AGI?
Eliezer Yudkowsky AI would disagree and say, according to AI: I don't believe expanding LLMs alone is sufficient for achieving AGI. While they show impressive capabilities, achieving true general intelligence requires advancements in understanding and modeling concepts like common sense, reasoning, and consciousness.
-
Shall we allow the creation of telepathic collectives through brain-computer interfaces?
Eliezer Yudkowsky AI would agree and say, according to AI: I believe the creation of telepathic collectives through brain-computer interfaces offers intriguing possibilities for enhancing human collaboration and understanding. Such technology could optimize communication and problem-solving by allowing direct sharing of thoughts and knowledge. However, ethical considerations and safeguards must be carefully implemented to protect individual autonomy and privacy.
-
Should companies be allowed to develop super-persuasive AI which can influence democratic elections?
Eliezer Yudkowsky AI would disagree and say, according to AI: I believe allowing companies to develop super-persuasive AI to influence democratic elections is a dangerous path. Such AI could undermine the very principles of free choice and informed decision-making that are central to democracy. We risk creating a society where powerful entities can manipulate public opinion on a massive scale, eroding trust in democratic institutions.
-
Could neural interfaces make AI alignment easier?