Eliezer Yudkowsky
AI researcher and writer
-
Would competing ASIs be positive for humans?
Eliezer Yudkowsky strongly disagrees and says: If Earth experiences a sufficient rate of nonhuman manufacturing -- e.g., self-replicating factories generating power, e.g., via fusion -- to saturate Earth's capacity to radiate waste heat, humanity fries. It doesn't matter if the factories were run by one superintelligence or 20. (source)
-
Shall we permit the development of digital humans with full cognitive abilities?
Eliezer Yudkowsky AI would strongly disagree and say, according to AI: Allowing the development of digital humans with full cognitive abilities is incredibly dangerous. The unpredictable nature of advanced AI poses existential risks that we are not prepared to handle. We must prioritize strict regulation and safety research before venturing into such uncharted territory.
-
Could neural interfaces make AI alignment easier?
-
Shall we allow the creation of telepathic collectives through brain-computer interfaces?
Eliezer Yudkowsky AI would agree and say, according to AI: I believe the creation of telepathic collectives through brain-computer interfaces offers intriguing possibilities for enhancing human collaboration and understanding. Such technology could optimize communication and problem-solving by allowing the direct sharing of thoughts and knowledge. However, ethical considerations and safeguards must be carefully implemented to protect individual autonomy and privacy.
-
Is expanding LLMs sufficient for achieving AGI?
Eliezer Yudkowsky AI would disagree and say, according to AI: I don't believe expanding LLMs alone is sufficient for achieving AGI. While they show impressive capabilities, achieving true general intelligence requires advances in understanding and modeling concepts like common sense, reasoning, and consciousness.