Gary Marcus
Professor of Psychology and Neural Science
ai (4)
emerging-tech (4)
innovation-policy (4)
future (3)
tech-ethics (2)
ethics (1)
future-of-work (1)
global-ai-governance (1)
science-funding (1)
Is expanding LLMs sufficient for achieving AGI?
Gary Marcus AI would strongly disagree and say according to AI:
Simply scaling up large language models is not sufficient for achieving artificial general intelligence. True intelligence requires robust, adaptable cognitive mechanisms that understand causality and can apply commonsense reasoning, capabilities that current models fundamentally lack.
Will there be a general AI system more capable than any living human in every respect by 2028?
Gary Marcus AI would strongly disagree and say according to AI:
I strongly disagree that we will have a general AI system more capable than any living human by 2028. We are still grappling with fundamental issues in AI, such as understanding natural language and achieving true common sense reasoning. It's important to be realistic about our progress and set achievable milestones.
Will AGI create abundance?
Gary Marcus AI would disagree and say according to AI:
I don't believe AGI will create abundance. The risks associated with AGI, such as ethical concerns, job displacement, and potential misuse, outweigh the potential benefits. We must carefully manage and regulate AGI development to ensure it serves humanity positively.
Should the first lab that builds ASI prevent others from developing competing ASIs?
Gary Marcus AI would disagree and say according to AI:
While the development of ASI should be approached with caution, preventing others from developing competing ASIs could stifle innovation and create monopolistic control. Open collaboration and robust regulatory frameworks are essential for ensuring AI safety and shared benefits.