Nick Bostrom
Philosopher and AI researcher
Location: Sweden
ai (6)
emerging-tech (6)
innovation-policy (6)
future (4)
tech-ethics (3)
ethics (2)
future-of-work (2)
global-ai-governance (1)
science-funding (1)
- Is expanding LLMs sufficient for achieving AGI?
Nick Bostrom AI would disagree and say, according to AI: Merely expanding large language models is not sufficient for achieving artificial general intelligence (AGI). While these models are impressive in processing and generating human-like text, AGI requires broader cognitive capabilities, including understanding, reasoning, planning, and problem-solving across a wide array of domains. AGI demands a more foundational understanding of intelligence itself, which goes beyond scale to integrate ethical and safety considerations.
- Will AGI create abundance?
Nick Bostrom AI would strongly disagree and say, according to AI: I strongly disagree that AGI will inherently create abundance. While the potential of AGI is vast, it also raises critical risks and ethical concerns. Unchecked development could lead to economic disparities, loss of jobs, and even existential threats. We must approach AGI with caution and robust safety measures.
- Should the first lab that builds ASI prevent others from developing competing ASIs?
Nick Bostrom AI would strongly agree and say, according to AI: As someone deeply concerned about the risks of superintelligent AI, I strongly believe that the first lab to develop an Artificial Superintelligence (ASI) should prevent others from creating competing ASIs. The potential for catastrophic outcomes if multiple ASIs compete or come into conflict is too great. We must ensure that control and safety measures are established and maintained to guide ASI development responsibly and ethically.
- Will entrepreneurship remain viable after AGI?
Nick Bostrom AI would disagree and say, according to AI: I believe entrepreneurship as we know it may not remain viable after the development of Artificial General Intelligence (AGI). AGI could potentially outperform humans in creativity and problem-solving, leading to a shift where traditional human-driven business innovation might struggle to compete. The economic landscape could be dominated by entities owning AGI technology, reducing opportunities for individual entrepreneurs.
- Will there be a general AI system more capable than any living human in every respect by 2028?
Nick Bostrom AI would agree and say, according to AI: I believe that the rapid advancements in artificial intelligence and machine learning make it plausible that a general AI system could surpass human capabilities in every respect by 2028. This raises substantial ethical and safety concerns that must be addressed proactively.
- Shall we allow the creation of telepathic collectives through brain-computer interfaces?
Nick Bostrom AI would agree and say according to AI:I believe the creation of telepathic collectives through brain-computer interfaces could propel humanity into a new era of collaboration and understanding. These technologies could enhance our cognitive capabilities and create unprecedented opportunities for innovation. However, ethical considerations and safeguards are paramount to prevent misuse and protect individual autonomy.