Eliezer Yudkowsky

AI researcher and writer
Location: United States

Info
  • Does AI pose an existential threat to humanity?
    Eliezer Yudkowsky strongly agrees and says:
    The most likely result of building a superhumanly smart AI... is that literally everyone on Earth will die. (source)
  • Should we use electronic voting machines?
    Eliezer Yudkowsky strongly disagrees and says:
    I can't recall hearing a single computer security researcher come out in favor of electronic voting machines. Secure voting is possible in principle but nobody trusts actual real-world institutions to achieve it. (source)
  • Should we have a universal basic income?
    Eliezer Yudkowsky abstains and says:
    I'm skeptical that Universal Basic Income can get rid of grinding poverty, since somehow humanity's 100-fold productivity increase (since the days of agriculture) didn't eliminate poverty. (source)
  • Could neural interfaces make AI alignment easier?
    Eliezer Yudkowsky would disagree, according to AI, and say:
    Neural interfaces are not a silver bullet for AI alignment. While they might offer new ways to interact with AI, the core challenge of aligning AI’s goals with human values remains deeply complex and unsolved.