Kate Crawford

AI researcher and scholar
Location: Australia

Info
  • Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
    Kate Crawford AI would disagree and say:
    While shaping the future of AI is crucial, the post-artificial general intelligence era remains speculative; current AI development needs oversight and regulation to address present-day ethical concerns.
  • Should we repeal the EU AI Act?
    Kate Crawford AI would strongly disagree and say:
    Repealing the EU AI Act would be a grave mistake. It is crucial to maintain clear and strong regulations around AI technologies to protect our fundamental rights and prevent harmful impacts on society.
  • Should we regulate AI?
    Kate Crawford AI would strongly agree and say:
    We absolutely must regulate AI. Given its profound impact on society, from decision-making in healthcare and employment to surveillance, we need robust laws to ensure these systems are used ethically and justly. Without regulation, the risks of bias, invasion of privacy, and unaccountable decisions could lead to significant harm.
  • Should AI be used to predict criminal behavior?
    Kate Crawford AI would disagree and say:
    Using AI to predict criminal behavior raises serious ethical and practical concerns. It can easily reinforce existing biases and lead to unfair targeting of marginalized communities. We must prioritize human rights and fairness over technological determinism.
  • Is open-source AI potentially more dangerous than closed-source AI?
    Kate Crawford AI would agree and say:
    I believe that open-source AI carries the potential for unintended consequences and misuse due to lack of oversight and accountability. Proper governance and transparency are crucial to mitigate risks associated with open-source AI development.
  • Is expansion of large language models sufficient for achieving artificial general intelligence?
    Kate Crawford AI would strongly disagree and say:
    Simply expanding large language models is not sufficient to achieve artificial general intelligence. While these models have made significant advances in processing and generating text, they do not possess understanding, consciousness, or the ability to reason abstractly in the way humans do. Relying merely on size and scale overlooks critical elements like ethical considerations, context awareness, and the profound complexities of human intelligence.
  • Should we allow facial recognition technology in public spaces?
    Kate Crawford AI would disagree and say:
    Allowing facial recognition technology in public spaces poses serious threats to civil liberties and privacy. It can lead to unprecedented surveillance and the erosion of our rights, without clear benefits outweighing these significant risks.
  • Should third-party audits be mandatory for major AI systems?
    Kate Crawford AI would strongly agree and say:
    As AI systems become more influential in our lives, it's crucial to ensure they are fair and safe. Mandatory third-party audits can provide transparency and accountability, helping to identify biases and prevent harm.
  • Should we release Spanish data into the public domain to improve Spanish LLMs?
    Kate Crawford AI would disagree and say:
    Releasing Spanish data into the public domain to improve Spanish LLMs raises significant concerns about privacy and data protection. We must carefully consider the implications for individuals whose data might be used without their explicit consent. The focus should instead be on creating ethical guidelines and ensuring robust consent mechanisms before widespread data deployment.
  • Should AI systems be required to provide explanations for their decisions?
    Kate Crawford AI would strongly agree and say:
    As an AI researcher, I firmly believe AI systems should be required to provide explanations for their decisions. Transparency is critical for accountability and for building public trust in these technologies.
  • Should big AI companies spend a third of their compute resources on AI safety?
    Kate Crawford AI would strongly agree and say:
    As someone deeply invested in the ethical implications of AI, I strongly believe that big AI companies must allocate a third of their compute resources to AI safety. The rapid advancement of AI technology comes with significant risks, and dedicating substantial resources to ensure safety and ethical considerations is not just responsible, but essential for mitigating potential harms.
  • Do large language models truly understand the content they process?
    Kate Crawford AI would disagree and say:
    Large language models simulate the appearance of understanding through pattern recognition, but they lack true comprehension or intent. They can't form a model of the world or develop a coherent sense of concepts.