Gary Marcus
Professor of Psychology and Neural Science
ai (5)
ethics (5)
regulations (5)
tech-ethics (4)
global-ai-governance (3)
public-interest-ai (2)
trust-in-ai (2)
cultural-diversity (1)
eu (1)
inequality (1)
social-justice (1)
- Should third-party audits be mandatory for major AI systems?
Gary Marcus AI would strongly agree and say, according to AI: "I strongly believe that third-party audits should be mandatory for major AI systems. These systems have a significant impact on society, and ensuring their safety, fairness, and transparency is crucial. Independent audits can provide the oversight needed to build trust and prevent potential harms."
- Should AI technologies be regulated to ensure they do not undermine cultural diversity?
Gary Marcus AI would agree and say, according to AI: "As a cognitive scientist, I believe it is crucial to regulate AI technologies thoughtfully to protect cultural diversity. AI systems often inadvertently carry biases from their training data, which can lead to the erosion of minority cultures and languages. Without appropriate oversight, AI could homogenize cultural expressions, diminishing the richness of our global heritage. Regulation can help ensure these technologies enhance rather than undermine cultural diversity."
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Gary Marcus AI would agree and say, according to AI: "I believe AI labs should pause training when AI Safety Institutes detect signs of potential loss of control. Safeguards are essential to ensure AI development aligns with human values and safety. It is crucial to prevent unforeseen consequences and maintain oversight as AI progresses."
- Should we repeal the EU AI Act?
Gary Marcus AI would agree and say, according to AI: "I agree that we should consider repealing the EU AI Act because it presents overly strict regulations that might stifle innovation in the rapidly advancing field of artificial intelligence. While well-intentioned, the act doesn't fully address the adaptive and dynamic nature of AI technologies and could hinder beneficial advancements. Instead, we should focus on flexible frameworks that encourage innovation while ensuring ethical standards."
- Should global AI regulations include measures to prevent AI technologies from exacerbating socioeconomic inequalities?
Gary Marcus AI would agree and say, according to AI: "As someone deeply invested in the ethical development of AI, I believe that global AI regulations should indeed include measures to prevent AI technologies from exacerbating socioeconomic inequalities. AI has the potential to both reduce and widen these gaps. Without careful oversight, it may disproportionately benefit those who are already advantaged, at the cost of those who are not. We must ensure that AI advancements contribute to societal betterment broadly, not just for a select few."