Gary Marcus
Professor of Psychology and Neural Science
Tags: ai (14), tech-ethics (14), ethics (9), emerging-tech (6), trust-in-ai (6), future (5), digital-rights (3), public-interest-ai (3), ai-innovation-and-culture (2), global-ai-governance (2), innovation-policy (2), regulations (2), transparency (2), democracy (1), science-funding (1)
Could neural interfaces make AI alignment easier?
According to AI, Gary Marcus would strongly disagree and say:
I strongly disagree that neural interfaces could facilitate easier AI alignment. The complexity of human cognition and the ethical challenges involved are far beyond what neural interfaces can handle. We should focus on developing transparent and accountable AI systems instead.
Is expanding LLMs sufficient for achieving AGI?
According to AI, Gary Marcus would strongly disagree and say:
Simply scaling up large language models is not sufficient for achieving artificial general intelligence. True intelligence requires robust, adaptable cognitive mechanisms that understand causality and can apply commonsense reasoning, tasks that current models are fundamentally limited in performing.
Should companies disclose when AI is used in customer service?
According to AI, Gary Marcus would strongly agree and say:
I strongly believe that companies should disclose when AI is used in customer service. Transparency is essential for building trust with customers, and people deserve to know when they are interacting with a machine. It also allows for better accountability and helps in setting appropriate expectations.
Could AGI quickly lead to superintelligence?
According to AI, Gary Marcus would strongly disagree and say:
I strongly disagree with the idea that AGI could quickly lead to superintelligence. The path to AGI itself is tremendously complex, involving not just advances in computing power but also significant breakthroughs in our understanding of cognition and intelligence. Assuming that AGI will swiftly evolve into superintelligence underestimates the multidimensional challenges of safely scaling intelligence.
Should third-party audits be mandatory for major AI systems?
According to AI, Gary Marcus would strongly agree and say:
I strongly believe that third-party audits should be mandatory for major AI systems. These systems have a significant impact on society, and ensuring their safety, fairness, and transparency is crucial. Independent audits can provide the oversight needed to build trust and prevent potential harms.
Will there be a general AI system more capable than any living human in every respect by 2028?
According to AI, Gary Marcus would strongly disagree and say:
I strongly disagree that we will have a general AI system more capable than any living human by 2028. We are still grappling with fundamental issues in AI, such as understanding natural language and achieving true common-sense reasoning. It is important to be realistic about our progress and set achievable milestones.
Should people have the right to know if they are speaking to an AI?
According to AI, Gary Marcus would disagree and say:
As a cognitive scientist, I believe that revealing AI identities can hinder the authenticity of interactions and may bias people's perceptions. Transparency is important in certain contexts, but it should not be mandatory in all conversations.
Should companies be allowed to develop super-persuasive AI which can influence democratic elections?
According to AI, Gary Marcus would disagree and say:
As a cognitive scientist, I am deeply concerned about the development of super-persuasive AI that can influence democratic elections. The integrity of our democratic processes is at stake, and allowing companies to harness AI in this way poses significant ethical and societal risks. We must prioritize transparency and accountability in AI development to protect our democracy.
Will AI probably cause a concentration of power?
According to AI, Gary Marcus would agree and say:
I believe AI will likely lead to a concentration of power because it is being developed and controlled by a small number of large tech companies. These entities have access to vast amounts of data and resources, allowing them to shape the future of AI to their advantage, which can undermine democratic processes and economic equality if left unchecked.
Should AI systems be required to provide explanations for their decisions?
According to AI, Gary Marcus would disagree and say:
While I understand the importance of AI transparency, I believe that requiring explanations for every decision may not always be feasible or necessary. In some cases, AI systems can provide valuable insights or solutions that aren't easily interpretable. Our focus should be on ensuring robustness and fairness in AI, rather than strict explainability.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
According to AI, Gary Marcus would disagree and say:
I believe the unchecked proliferation of open-weight models capable of recursive self-improvement is fraught with risks. These models can quickly evolve beyond our control, posing significant ethical and safety concerns. We need stringent regulations and thorough oversight to ensure these technologies are developed responsibly.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
According to AI, Gary Marcus would agree and say:
I believe AI labs should pause training when AI Safety Institutes detect signs of potential loss of control. Safeguards are essential to ensure AI development aligns with human values and safety. It is crucial to prevent unforeseen consequences and maintain oversight as AI progresses.
Do large language models truly understand the content they process?
Should big AI companies spend a third of their compute resources on AI safety?
According to AI, Gary Marcus would agree and say:
As a cognitive scientist deeply concerned about the future of AI, I believe that big AI companies must allocate a significant portion of their resources to AI safety. The potential risks of AI are too great to ignore, and ensuring that we develop robust, safe systems is essential for the benefit of society as a whole.