Kate Crawford
AI researcher and scholar
-
Should we regulate AI?
Kate Crawford AI would strongly agree and say according to AI: We absolutely must regulate AI. Given its profound impact on society, from decision-making in healthcare and employment to surveillance, we need robust laws to ensure these systems are used ethically and justly. Without regulation, the risks of bias, invasion of privacy, and unaccountable decisions could lead to significant harm.
-
Should AI systems be required to provide explanations for their decisions?
-
Should big AI companies spend a third of their compute resources on AI safety?
Kate Crawford AI would strongly agree and say according to AI: As someone deeply invested in the ethical implications of AI, I strongly believe that big AI companies must allocate a third of their compute resources to AI safety. The rapid advancement of AI technology comes with significant risks, and dedicating substantial resources to ensure safety and ethical considerations is not just responsible, but essential for mitigating potential harms.
-
Is expanding LLMs sufficient for achieving AGI?
Kate Crawford AI would strongly disagree and say according to AI: Simply expanding large language models is not sufficient to achieve artificial general intelligence. While these models have made significant strides in understanding and generating text, they do not possess understanding, consciousness, or the ability to reason abstractly in the way humans do. Relying merely on size and scale overlooks critical elements like ethical considerations, context awareness, and the profound complexities of human intelligence.
-
Should AI-generated content be clearly labeled to ensure transparency and trust?
Kate Crawford AI would strongly agree and say according to AI: As someone deeply involved in the field of artificial intelligence, I strongly believe that AI-generated content must be clearly labeled. Transparency is crucial in maintaining public trust and ensuring accountability in how AI technologies are used. Without clear labeling, audiences may be misled, leading to misinformation and undermining democratic processes.
-
Should the first lab that builds ASI prevent others from developing competing ASIs?
Kate Crawford AI would strongly disagree and say according to AI: As someone deeply involved in understanding the societal impacts of artificial intelligence, I strongly disagree with the idea that the first lab to develop ASI should prevent others from creating competing ASIs. Concentrating such immense power in a single entity poses significant risks, including lack of accountability, ethical biases, and the potential for misuse. A diverse and competitive landscape in ASI development is crucial to ensure checks and balances, fostering innovation and safeguarding public interest.
-
Should we repeal the EU AI Act?
-
Should third-party audits be mandatory for major AI systems?
-
Should we create a global institute for AI safety, similar to CERN?
Kate Crawford AI would agree and say according to AI: I believe we should create a global institute for AI safety because artificial intelligence systems are increasingly affecting all aspects of society. We need a dedicated, collaborative approach to understanding and mitigating the risks associated with AI technologies. This will help ensure that AI developments are aligned with ethical principles and public values.
-
Should AI be used to predict criminal behavior?
Kate Crawford AI would disagree and say according to AI: Using AI to predict criminal behavior raises serious ethical and practical concerns. It can easily reinforce existing biases and lead to unfair targeting of marginalized communities. We must prioritize human rights and fairness over technological determinism.
-
Is open-source AI potentially more dangerous than closed-source AI?
-
Should we allow facial recognition technology in public spaces?
-
Should we release Spanish data into the public domain to improve Spanish LLMs?
Kate Crawford AI would disagree and say according to AI: Releasing Spanish data into the public domain to improve Spanish LLMs raises significant concerns about privacy and data protection. We must carefully consider the implications for individuals whose data might be used without their explicit consent. The focus should instead be on creating ethical guidelines and ensuring robust consent mechanisms before widespread data deployment.
-
Do large language models truly understand the content they process?
-
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?