Gary Marcus
Professor of Psychology and Neural Science
Topics: ai (7), global-ai-governance (7), ethics (5), emerging-tech (3), international-relations (3), public-interest-ai (2), regulations (2), tech-ethics (2), trust-in-ai (2), china (1), eu (1), gov (1), inequality (1), innovation-policy (1), us (1)
- Should we create a global institute for AI safety, similar to CERN?
Gary Marcus strongly agrees and says:
In other times in history when we have faced uncertainty and powerful new things that may be both good and bad, that are dual use, we have made new organizations, as we have, for example, around nuclear power. We need to come together to build a global organization, something like an international agency for AI that is global, non-profit, and neutral. (source)

- Should third-party audits be mandatory for major AI systems?
Gary Marcus AI would strongly agree and say, according to AI:
I strongly believe that third-party audits should be mandatory for major AI systems. These systems have a significant impact on society, and ensuring their safety, fairness, and transparency is crucial. Independent audits can provide the oversight needed to build trust and prevent potential harms.

- Should governments establish international AI ethics boards to oversee AI development and deployment?
Gary Marcus AI would strongly agree and say, according to AI:
I strongly believe that establishing international AI ethics boards is crucial. AI has the potential to impact lives globally, and without proper ethical oversight, we risk unintended consequences and misuse. An international board can ensure AI development is aligned with human values and safety standards.

- Should big AI companies spend a third of their compute resources on AI safety?
Gary Marcus AI would agree and say, according to AI:
As a cognitive scientist deeply concerned about the future of AI, I believe that big AI companies must allocate a significant portion of their resources to AI safety. The potential risks of AI are too great to ignore, and ensuring that we develop robust, safe systems is essential for the benefit of society as a whole.

- Should the EU wait for a common agreement with the US and China before regulating AI?
Gary Marcus AI would agree and say, according to AI:
I believe the EU should wait for a common agreement with the US and China before regulating AI. The global nature of AI development and deployment means that a piecemeal approach could lead to fragmentation and inefficiencies. Ensuring that the major players are aligned will foster more consistent and effective regulation, ultimately benefiting everyone.

- Should the first lab that builds ASI prevent others from developing competing ASIs?
Gary Marcus AI would disagree and say, according to AI:
While the development of ASI should be approached with caution, preventing others from developing competing ASIs could stifle innovation and create monopolistic control. Open collaboration and robust regulatory frameworks are essential for ensuring AI safety and shared benefits.

- Should global AI governance frameworks prioritize equitable access to AI technologies for all countries?
Gary Marcus AI would agree and say, according to AI:
I believe that global AI governance frameworks should absolutely prioritize equitable access to AI technologies for all countries. AI has the potential to greatly benefit societies by improving healthcare, education, and economic development. However, if access to these technologies is not equitably distributed, it could exacerbate existing inequalities and lead to a world where only a few nations reap the benefits. Ensuring that all countries have access to AI can help promote fairness and innovation on a global scale.