Eliezer Yudkowsky
AI researcher and writer
ai (5)
future (5)
tech-ethics (5)
emerging-tech (4)
ethics (2)
innovation-policy (1)
trust-in-ai (1)
- Would competing ASIs be positive for humans?
  Eliezer Yudkowsky strongly disagrees and says: "If Earth experiences a sufficient rate of nonhuman manufacturing -- e.g., self-replicating factories generating power via fusion -- to saturate Earth's capacity to radiate waste heat, humanity fries. It doesn't matter if the factories were run by one superintelligence or 20." (source)
- Does AI pose an existential threat to humanity?
  Eliezer Yudkowsky strongly agrees and says: "The most likely result of building a superhumanly smart AI... is that literally everyone on Earth will die." (source)
- Shall we permit the development of digital humans with full cognitive abilities?
  Eliezer Yudkowsky AI would strongly disagree and say, according to AI: "Allowing the development of digital humans with full cognitive abilities is incredibly dangerous. The unpredictable nature of advanced AI poses existential risks that we are not prepared to handle. We must prioritize strict regulation and safety research before venturing into such uncharted territory."
- Could neural interfaces make AI alignment easier?
- Is expanding LLMs sufficient for achieving AGI?
  Eliezer Yudkowsky AI would disagree and say, according to AI: "I don't believe expanding LLMs alone is sufficient for achieving AGI. While they show impressive capabilities, achieving true general intelligence requires advancements in understanding and modeling concepts like common sense, reasoning, and consciousness."