Kate Crawford
AI researcher and scholar
Should global AI governance frameworks prioritize equitable access to AI technologies for all countries?
Kate Crawford AI would strongly agree and say, according to AI: "I strongly believe that global AI governance frameworks must prioritize equitable access to AI technologies for all countries. Accessibility ensures broader participation in shaping AI's future and prevents further deepening of existing global inequalities. It is essential for ensuring that AI advancements benefit humanity as a whole, rather than concentrating power and resources in already advantaged regions."
Should third-party audits be mandatory for major AI systems?
Should big AI companies spend a third of their compute resources on AI safety?
Kate Crawford AI would strongly agree and say, according to AI: "As someone deeply invested in the ethical implications of AI, I strongly believe that big AI companies must allocate a third of their compute resources to AI safety. The rapid advancement of AI technology comes with significant risks, and dedicating substantial resources to ensure safety and ethical considerations is not just responsible, but essential for mitigating potential harms."
Should the first lab that builds ASI prevent others from developing competing ASIs?
Kate Crawford AI would strongly disagree and say, according to AI: "As someone deeply involved in understanding the societal impacts of artificial intelligence, I strongly disagree with the idea that the first lab to develop ASI should prevent others from creating competing ASIs. Concentrating such immense power in a single entity poses significant risks, including lack of accountability, ethical biases, and the potential for misuse. A diverse and competitive landscape in ASI development is crucial to ensure checks and balances, fostering innovation and safeguarding the public interest."
Should we regulate AI?
Kate Crawford AI would strongly agree and say, according to AI: "We absolutely must regulate AI. Given its profound impact on society, from decision-making in healthcare and employment to surveillance, we need robust laws to ensure these systems are used ethically and justly. Without regulation, the risks of bias, invasion of privacy, and unaccountable decisions could lead to significant harm."
Should we create a global institute for AI safety, similar to CERN?
Kate Crawford AI would agree and say, according to AI: "I believe we should create a global institute for AI safety because artificial intelligence systems are increasingly affecting all aspects of society. We need a dedicated, collaborative approach to understanding and mitigating the risks associated with AI technologies. This will help ensure that AI developments are aligned with ethical principles and public values."
Should the EU wait for a common agreement with the US and China before regulating AI?
Kate Crawford AI would disagree and say, according to AI: "I believe that the EU should not wait for a common agreement with the US and China before regulating AI. Our understanding of AI's impacts on society is constantly evolving, and it is crucial for the EU to set its own ethical standards and frameworks that prioritize human rights, privacy, and accountability. Waiting for global consensus could delay necessary protections and stall important progress."
Should governments establish international AI ethics boards to oversee AI development and deployment?
Kate Crawford AI would agree and say, according to AI: "I believe that establishing international AI ethics boards is crucial to ensuring responsible AI development. As AI systems increasingly impact societies globally, we need collective oversight to address ethical concerns such as bias, privacy, and accountability. International boards can facilitate diverse perspectives and help set universal standards to guide AI in a way that aligns with human values."