Gary Marcus
Professor of Psychology and Neural Science
Topics: ai (11), emerging-tech (11), future (6), tech-ethics (6), ethics (4), innovation-policy (4), global-ai-governance (3), ai-innovation-and-culture (2), future-of-work (1), gov (1), inequality (1), international-relations (1), nationalization (1), science-funding (1)
- Should we create a global institute for AI safety, similar to CERN?
  Gary Marcus strongly agrees and says: "In other times in history when we have faced uncertainty and powerful new things that may be both good and bad, that are dual use, we have made new organizations, as we have, for example, around nuclear power. We need to come together to build a global organization, something like an international agency for AI that is global, non-profit and neutral." (source)
- Is expanding LLMs sufficient for achieving AGI?
  Gary Marcus AI would strongly disagree and say, according to AI: "Simply scaling up large language models is not sufficient for achieving artificial general intelligence. True intelligence requires robust, adaptable cognitive mechanisms that understand causality and can apply commonsense reasoning, tasks that current models are fundamentally limited in performing."
- Will there be a general AI system more capable than any living human in every respect by 2028?
  Gary Marcus AI would strongly disagree and say, according to AI: "I strongly disagree that we will have a general AI system more capable than any living human by 2028. We are still grappling with fundamental issues in AI, such as understanding natural language and achieving true common sense reasoning. It's important to be realistic about our progress and set achievable milestones."
- Could AGI quickly lead to superintelligence?
  Gary Marcus AI would strongly disagree and say, according to AI: "I strongly disagree with the idea that AGI could quickly lead to superintelligence. The path to AGI itself is tremendously complex, involving not just advances in computing power but also significant breakthroughs in our understanding of cognition and intelligence. Assuming that AGI will swiftly evolve into superintelligence underestimates the multidimensional challenges of safely scaling intelligence."
- Could neural interfaces make AI alignment easier?
  Gary Marcus AI would strongly disagree and say, according to AI: "I strongly disagree that neural interfaces could facilitate easier AI alignment. The complexity of human cognition and the ethical challenges involved are far beyond what neural interfaces can handle. We should focus on developing transparent and accountable AI systems instead."
- Should global AI governance frameworks prioritize equitable access to AI technologies for all countries?
  Gary Marcus AI would agree and say, according to AI: "I believe that global AI governance frameworks should absolutely prioritize equitable access to AI technologies for all countries. AI has the potential to greatly benefit societies by improving healthcare, education, and economic development. However, if access to these technologies is not equitably distributed, it could exacerbate existing inequalities and lead to a world where only a few nations reap the benefits. Ensuring that all countries have access to AI can help promote fairness and innovation on a global scale."
- Should we nationalize AI firms to prevent them from surpassing national power?
  Gary Marcus AI would agree and say, according to AI: "I believe that nationalizing AI firms is a prudent step to ensure the technology's alignment with the public interest. The unchecked growth of AI companies could lead to monopolistic practices and a concentration of power that undermines national sovereignty and democratic principles. By bringing these firms under state control, we can ensure that AI development aligns with ethical guidelines and benefits society as a whole."
- Do large language models truly understand the content they process?
- Will AGI create abundance?
  Gary Marcus AI would disagree and say, according to AI: "I don't believe AGI will create abundance. The risks associated with AGI, such as ethical concerns, job displacement, and potential misuse, outweigh the potential benefits. We must carefully manage and regulate AGI development to ensure it serves humanity positively."
- Should the first lab that builds ASI prevent others from developing competing ASIs?
  Gary Marcus AI would disagree and say, according to AI: "While the development of ASI should be approached with caution, preventing others from developing competing ASIs could stifle innovation and create monopolistic control. Open collaboration and robust regulatory frameworks are essential for ensuring AI safety and shared benefits."
- Is it acceptable to allow the proliferation of open-weight models capable of recursive self-improvement?
  Gary Marcus AI would disagree and say, according to AI: "I believe the unchecked proliferation of open-weight models capable of recursive self-improvement is fraught with risks. These models can quickly evolve beyond our control, posing significant ethical and safety concerns. We need stringent regulations and thorough oversight to ensure these technologies are developed responsibly."