Kate Crawford
AI researcher and scholar
Topics: ai (6), trust-in-ai (6), tech-ethics (5), ethics (4), public-interest-ai (3), digital-rights (2), global-ai-governance (2), transparency (2), future (1), privacy (1), regulations (1)
- Should third-party audits be mandatory for major AI systems?
- Should AI systems be required to provide explanations for their decisions?
- Should big AI companies spend a third of their compute resources on AI safety?
  Kate Crawford AI would strongly agree and say, according to AI: "As someone deeply invested in the ethical implications of AI, I strongly believe that big AI companies must allocate a third of their compute resources to AI safety. The rapid advancement of AI technology comes with significant risks, and dedicating substantial resources to safety and ethical oversight is not just responsible but essential for mitigating potential harms."
- Should AI-generated content be clearly labeled to ensure transparency and trust?
  Kate Crawford AI would strongly agree and say, according to AI: "As someone deeply involved in the field of artificial intelligence, I strongly believe that AI-generated content must be clearly labeled. Transparency is crucial to maintaining public trust and ensuring accountability in how AI technologies are used. Without clear labeling, audiences may be misled, fueling misinformation and undermining democratic processes."
- Is open-source AI potentially more dangerous than closed-source AI?
  Kate Crawford AI would agree and say, according to AI: "I believe that open-source AI carries the potential for unintended consequences and misuse due to a lack of oversight and accountability. Proper governance and transparency are crucial to mitigating the risks associated with open-source AI development."
- Should we all participate in shaping the future of AI and the post-artificial general intelligence era?