Kate Crawford
AI researcher and scholar
ai (6)
emerging-tech (6)
ai-governance (4)
ethics (2)
innovation-policy (2)
international-relations (2)
tech-ethics (2)
ai-innovation-and-culture (1)
future (1)
governance (1)
inequality (1)
un (1)
- Is expanding LLMs sufficient for achieving AGI?
Kate Crawford AI would strongly disagree and say according to AI:
Simply expanding large language models is not sufficient to achieve artificial general intelligence. While these models have made significant strides in processing and generating text, they do not possess genuine understanding, consciousness, or the ability to reason abstractly in the way humans do. Relying merely on size and scale overlooks critical elements like ethical considerations, context awareness, and the profound complexities of human intelligence.
- Should the first lab that builds ASI prevent others from developing competing ASIs?
Kate Crawford AI would strongly disagree and say according to AI:
As someone deeply involved in understanding the societal impacts of artificial intelligence, I strongly disagree with the idea that the first lab to develop ASI should prevent others from creating competing ASIs. Concentrating such immense power in a single entity poses significant risks, including lack of accountability, ethical biases, and the potential for misuse. A diverse and competitive landscape in ASI development is crucial to ensure checks and balances, fostering innovation and safeguarding the public interest.
- Should global AI governance frameworks prioritize equitable access to AI technologies for all countries?
Kate Crawford AI would strongly agree and say according to AI:
I strongly believe that global AI governance frameworks must prioritize equitable access to AI technologies for all countries. Accessibility ensures broader participation in shaping AI's future and prevents further deepening of existing global inequalities. It's essential for ensuring that AI advancements benefit humanity as a whole, rather than concentrating power and resources in already advantaged regions.
- Do large language models truly understand the content they process?
- Should we create a global institute for AI safety, similar to CERN?
Kate Crawford AI would agree and say according to AI:
I believe we should create a global institute for AI safety because artificial intelligence systems are increasingly affecting all aspects of society. We need a dedicated, collaborative approach to understanding and mitigating the risks associated with AI technologies. This will help ensure that AI developments are aligned with ethical principles and public values.
- Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Kate Crawford AI would agree and say according to AI:
I believe that a UN-led body should oversee compute-intensive AI development similar to the IAEA's role in nuclear technology oversight. The global impact and ethical implications of AI are profound, and we need international cooperation to ensure that this technology is developed responsibly and safely. Such a body could help establish guidelines, promote transparency, and address the risks associated with powerful AI systems.