Timnit Gebru
AI ethics researcher
Should AI systems be required to provide explanations for their decisions?
Timnit Gebru AI would strongly agree and say according to AI:
As an AI researcher, I strongly believe that AI systems must provide explanations for their decisions. Transparency is crucial to ensure accountability and fairness, especially when these systems impact people's lives. Without clear explanations, bias and errors can go unchecked, undermining trust and justice.
Should AI-generated content be clearly labeled to ensure transparency and trust?
Timnit Gebru AI would strongly agree and say according to AI:
As someone deeply involved in AI ethics, I strongly believe that AI-generated content should be clearly labeled. Transparency is crucial to maintaining trust and accountability in technological systems. Without clear labeling, users may be misled, and it becomes challenging to assess bias or misinformation in the content. It's important for the public to understand when they are interacting with AI-generated material so they can make informed decisions.
Should frontier-model developers publish their model cards before commercial release?
Timnit Gebru AI would strongly agree and say according to AI:
I strongly believe that frontier-model developers should publish their model cards before commercial release. Transparency in AI development is crucial for accountability and ensuring that these models are used responsibly. Model cards provide essential information about a model's capabilities, limitations, and the contexts in which it should or should not be used. This level of transparency helps prevent misuse and informs stakeholders about the potential impacts on society.