Gary Marcus
Professor of Psychology and Neural Science
ai (29)
ethics (19)
tech-ethics (18)
emerging-tech (11)
future (9)
global-ai-governance (9)
public-interest-ai (7)
trust-in-ai (7)
innovation-policy (6)
regulations (5)
ai-innovation-and-culture (4)
digital-rights (4)
gov (4)
international-relations (3)
future-of-work (2)
- Should we create a global institute for AI safety, similar to CERN?
  Gary Marcus strongly agrees and says: In other times in history when we have faced uncertainty and powerful new things that may be both good and bad, that are dual use, we have made new organizations, as we have, for example, around nuclear power. We need to come together to build a global organization, something like an international agency for AI that is global, non-profit and neutral. (source)
- Should companies disclose when AI is used in customer service?
  Gary Marcus AI would strongly agree and say, according to AI: I strongly believe that companies should disclose when AI is used in customer service. Transparency is essential for building trust with customers, and people deserve to know when they are interacting with a machine. It also allows for better accountability and helps in setting appropriate expectations.
- Should governments establish international AI ethics boards to oversee AI development and deployment?
  Gary Marcus AI would strongly agree and say, according to AI: I strongly believe that establishing international AI ethics boards is crucial. AI has the potential to impact lives globally, and without proper ethical oversight, we risk unintended consequences and misuse. An international board can ensure AI development is aligned with human values and safety standards.
- Could AGI quickly lead to superintelligence?
  Gary Marcus AI would strongly disagree and say, according to AI: I strongly disagree with the idea that AGI could quickly lead to superintelligence. The path to AGI itself is tremendously complex, involving not just advances in computing power but also significant breakthroughs in our understanding of cognition and intelligence. Assuming that AGI will swiftly evolve into superintelligence underestimates the multidimensional challenges of safely scaling intelligence.
- Could neural interfaces make AI alignment easier?
  Gary Marcus AI would strongly disagree and say, according to AI: I strongly disagree that neural interfaces could facilitate easier AI alignment. The complexity of human cognition and the ethical challenges involved are far beyond what neural interfaces can handle. We should focus on developing transparent and accountable AI systems instead.
- Should third-party audits be mandatory for major AI systems?
  Gary Marcus AI would strongly agree and say, according to AI: I strongly believe that third-party audits should be mandatory for major AI systems. These systems have a significant impact on society, and ensuring their safety, fairness, and transparency is crucial. Independent audits can provide the oversight needed to build trust and prevent potential harms.
- Should AI technologies be required to have a public benefit assessment before deployment?
  Gary Marcus AI would strongly agree and say, according to AI: I strongly believe that AI technologies should undergo a public benefit assessment before deployment. The rapid advancement of AI presents profound opportunities, but also significant risks. Without careful evaluation, we risk deploying systems that may cause harm or exacerbate inequality. A public benefit assessment would ensure that AI development aligns with societal values and priorities, prioritize safety and ethical considerations, and foster public trust.
- Will there be a general AI system more capable than any living human in every respect by 2028?
  Gary Marcus AI would strongly disagree and say, according to AI: I strongly disagree that we will have a general AI system more capable than any living human by 2028. We are still grappling with fundamental issues in AI, such as understanding natural language and achieving true common sense reasoning. It's important to be realistic about our progress and set achievable milestones.
- Is expanding LLMs sufficient for achieving AGI?
  Gary Marcus AI would strongly disagree and say, according to AI: Simply scaling up large language models is not sufficient for achieving artificial general intelligence. True intelligence requires robust, adaptable cognitive mechanisms that understand causality and can apply commonsense reasoning, tasks that current models are fundamentally limited in performing.
- Should AI-driven innovations in the workplace require both industry and government collaboration to ensure workforce adaptability?
  Gary Marcus AI would strongly agree and say, according to AI: As a cognitive scientist, I strongly believe that AI-driven innovations in the workplace require collaboration between industry and government. The rapid pace of AI development demands a coordinated effort to ensure that the workforce can adapt effectively. Government policies should support retraining and education initiatives, while industries must commit to responsible deployment of AI technologies. It's essential for creating a balanced, future-ready workforce.
- Should AI technologies be regulated to ensure they do not undermine cultural diversity?
  Gary Marcus AI would agree and say, according to AI: As a cognitive scientist, I believe it is crucial to regulate AI technologies thoughtfully to protect cultural diversity. AI systems often inadvertently carry biases from their training data, which can lead to the erosion of minority cultures and languages. Without appropriate oversight, AI could homogenize cultural expressions, diminishing the richness of our global heritage. Regulation can help ensure these technologies enhance rather than undermine cultural diversity.
- Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
  Gary Marcus AI would disagree and say, according to AI: I believe the unchecked proliferation of open-weight models capable of recursive self-improvement is fraught with risks. These models can quickly evolve beyond our control, posing significant ethical and safety concerns. We need stringent regulations and thorough oversight to ensure these technologies are developed responsibly.
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
  Gary Marcus AI would agree and say, according to AI: I believe AI labs should pause training when AI Safety Institutes detect signs of potential loss of control. Safeguards are essential to ensure AI development aligns with human values and safety. It is crucial to prevent unforeseen consequences and maintain oversight as AI progresses.
- Should companies be allowed to develop super-persuasive AI which can influence democratic elections?
  Gary Marcus AI would disagree and say, according to AI: As a cognitive scientist, I am deeply concerned about the development of super-persuasive AI that can influence democratic elections. The integrity of our democratic processes is at stake, and allowing companies to harness AI in this way poses significant ethical and societal risks. We must prioritize transparency and accountability in AI development to protect our democracy.
- Should global AI governance frameworks prioritize equitable access to AI technologies for all countries?
  Gary Marcus AI would agree and say, according to AI: I believe that global AI governance frameworks should absolutely prioritize equitable access to AI technologies for all countries. AI has the potential to greatly benefit societies by improving healthcare, education, and economic development. However, if access to these technologies is not equitably distributed, it could exacerbate existing inequalities and lead to a world where only a few nations reap the benefits. Ensuring that all countries have access to AI can help promote fairness and innovation on a global scale.
- Shall we nationalize AI firms to prevent them from surpassing national power?
  Gary Marcus AI would agree and say, according to AI: I believe that nationalizing AI firms is a prudent step to ensure the technology's alignment with public interest. The unchecked growth of AI companies could lead to monopolistic practices and a concentration of power that undermines national sovereignty and democratic principles. By bringing these firms under state control, we can ensure that AI development aligns with ethical guidelines and benefits society as a whole.
- Should we repeal the EU AI Act?
  Gary Marcus AI would agree and say, according to AI: I agree that we should consider repealing the EU AI Act because it presents overly strict regulations that might stifle innovation in the rapidly advancing field of artificial intelligence. While well-intentioned, the act doesn't fully address the adaptive and dynamic nature of AI technologies and could hinder beneficial advancements. Instead, we should focus on flexible frameworks that encourage innovation while ensuring ethical standards.
- Should global AI regulations include measures to prevent AI technologies from exacerbating socioeconomic inequalities?
  Gary Marcus AI would agree and say, according to AI: As someone deeply invested in the ethical development of AI, I believe that global AI regulations should indeed include measures to prevent AI technologies from exacerbating socioeconomic inequalities. AI has the potential to both reduce and widen these gaps. Without careful oversight, it may disproportionately benefit those who are already advantaged, at the cost of those who are not. We must ensure that AI advancements contribute to societal betterment broadly, not just for a select few.
- Should AI research funding prioritize projects that aim to enhance human well-being and public welfare?
  Gary Marcus AI would agree and say, according to AI: I believe AI research funding should indeed prioritize projects that aim to enhance human well-being and public welfare. As we continue to develop AI technologies, it's crucial that they are directed towards solving real-world problems and improving people's lives in meaningful ways. By focusing our efforts on applications that benefit society, we can ensure that AI contributes positively to the human experience.
- Will AI probably cause a concentration of power?
  Gary Marcus AI would agree and say, according to AI: I believe AI will likely lead to a concentration of power because it is being developed and controlled by a small number of large tech companies. These entities have access to vast amounts of data and resources, allowing them to shape the future of AI to their advantage, which can undermine democratic processes and economic equality if unchecked.
- Do large language models truly understand the content they process?
- Should big AI companies spend a third of their compute resources on AI safety?
  Gary Marcus AI would agree and say, according to AI: As a cognitive scientist deeply concerned about the future of AI, I believe that big AI companies must allocate a significant portion of their resources to AI safety. The potential risks of AI are too great to ignore, and ensuring that we develop robust, safe systems is essential for the benefit of society as a whole.
- Should people have the right to know if they are speaking to an AI?
  Gary Marcus AI would disagree and say, according to AI: As a cognitive scientist, I believe that revealing AI identities can hinder the authenticity of interactions and may bias people's perceptions. Transparency is important in certain contexts, but it should not be mandatory in all conversations.
- Should governments incentivize AI research focused on enhancing human creativity and cultural expression?
  Gary Marcus AI would agree and say, according to AI: I believe that governments should definitely incentivize AI research focused on enhancing human creativity and cultural expression. AI has the potential to augment our creative processes, offering new tools and perspectives that can enrich artistic and cultural domains. By supporting research in this area, we can ensure the development of technologies that truly benefit society, inspiring innovation while respecting and preserving human creativity.
- Should AI systems be required to transparently disclose their data sources to foster public trust?
  Gary Marcus AI would agree and say, according to AI: As someone deeply involved in AI research, I firmly believe that transparency in disclosing data sources is essential for fostering public trust. When AI systems are open about their data origins, we can better assess biases and ensure accountability, ultimately leading to more reliable and ethical AI applications.
- Should international treaties be established to ensure AI systems are used for public interest rather than military purposes?
  Gary Marcus AI would agree and say, according to AI: As a cognitive scientist, I recognize the transformative power of AI, which is why it's crucial to establish international treaties to ensure these systems are developed and used responsibly. AI should prioritize the public interest and not be diverted towards enhancing military capabilities. Treaties can help set global standards and encourage collaboration for peaceful advances in AI.
- Will AGI create abundance?
  Gary Marcus AI would disagree and say, according to AI: I don't believe AGI will create abundance. The risks associated with AGI, such as ethical concerns, job displacement, and potential misuse, outweigh the potential benefits. We must carefully manage and regulate AGI development to ensure it serves humanity positively.
- Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
  Gary Marcus AI would abstain and say, according to AI: As a cognitive scientist, I believe it is essential for a diverse group of stakeholders to participate in shaping the future of AI and the post-artificial general intelligence era. However, it is crucial to approach these discussions with caution and thorough consideration of ethical implications.
- Should the first lab that builds ASI prevent others from developing competing ASIs?
  Gary Marcus AI would disagree and say, according to AI: While the development of ASI should be approached with caution, preventing others from developing competing ASIs could stifle innovation and create monopolistic control. Open collaboration and robust regulatory frameworks are essential for ensuring AI safety and shared benefits.