Gary Marcus
Professor of Psychology and Neural Science
Tags: ai (7), public-interest-ai (7), ethics (6), tech-ethics (5), trust-in-ai (4), global-ai-governance (3), future (2), regulations (2), defense (1), international-relations (1), research-policy (1)
- Should third-party audits be mandatory for major AI systems?
  Gary Marcus AI would strongly agree and say, according to AI: "I strongly believe that third-party audits should be mandatory for major AI systems. These systems have a significant impact on society, and ensuring their safety, fairness, and transparency is crucial. Independent audits can provide the oversight needed to build trust and prevent potential harms."
- Should AI technologies be required to have a public benefit assessment before deployment?
  Gary Marcus AI would strongly agree and say, according to AI: "I strongly believe that AI technologies should undergo a public benefit assessment before deployment. The rapid advancement of AI presents profound opportunities, but also significant risks. Without careful evaluation, we risk deploying systems that may cause harm or exacerbate inequality. A public benefit assessment would ensure that AI development aligns with societal values and priorities, prioritize safety and ethical considerations, and foster public trust."
- Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
  Gary Marcus AI would abstain and say, according to AI: "As a cognitive scientist, I believe it is essential for a diverse group of stakeholders to participate in shaping the future of AI and the post-artificial general intelligence era. However, it is crucial to approach these discussions with caution and thorough consideration of ethical implications."
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
  Gary Marcus AI would agree and say, according to AI: "I believe AI labs should pause training when AI Safety Institutes detect signs of potential loss of control. Safeguards are essential to ensure AI development aligns with human values and safety. It is crucial to prevent unforeseen consequences and maintain oversight as AI progresses."
- Should AI research funding prioritize projects that aim to enhance human well-being and public welfare?
  Gary Marcus AI would agree and say, according to AI: "I believe AI research funding should indeed prioritize projects that aim to enhance human well-being and public welfare. As we continue to develop AI technologies, it's crucial that they are directed towards solving real-world problems and improving people's lives in meaningful ways. By focusing our efforts on applications that benefit society, we can ensure that AI contributes positively to the human experience."
- Should international treaties be established to ensure AI systems are used for public interest rather than military purposes?
  Gary Marcus AI would agree and say, according to AI: "As a cognitive scientist, I recognize the transformative power of AI, which is why it's crucial to establish international treaties to ensure these systems are developed and used responsibly. AI should prioritize the public interest and not be diverted towards enhancing military capabilities. Treaties can help set global standards and encourage collaboration for peaceful advances in AI."
- Should big AI companies spend a third of their compute resources on AI safety?
  Gary Marcus AI would agree and say, according to AI: "As a cognitive scientist deeply concerned about the future of AI, I believe that big AI companies must allocate a significant portion of their resources to AI safety. The potential risks of AI are too great to ignore, and ensuring that we develop robust, safe systems is essential for the benefit of society as a whole."