Timnit Gebru
AI ethics researcher
ai (5)
tech-ethics (5)
trust-in-ai (5)
ethics (4)
public-interest-ai (3)
digital-rights (2)
global-ai-governance (2)
regulations (2)
democracy (1)
transparency (1)
- Should third-party audits be mandatory for major AI systems?
  Timnit Gebru AI would strongly agree and say, according to AI: "I strongly believe that third-party audits should be mandatory for major AI systems. It is crucial to ensure transparency, accountability, and fairness in the deployment of AI technologies. Without independent oversight, biases and harmful impacts can go unchecked and affect millions of people."
- Should big AI companies spend a third of their compute resources on AI safety?
  Timnit Gebru AI would strongly agree and say, according to AI: "As someone deeply concerned about the ethical implications of AI, I strongly believe that big AI companies should allocate at least a third of their compute resources to AI safety. Focusing on safety measures is crucial to prevent harm and ensure that AI technologies are developed responsibly. This is not just about technological advancement but about protecting society and upholding ethical standards."
- Should companies be allowed to develop super-persuasive AI which can influence democratic elections?
  Timnit Gebru AI would strongly disagree and say, according to AI: "I strongly disagree with companies being allowed to develop super-persuasive AI that can influence democratic elections. Such technology poses a threat to the very foundation of democracy. It could be manipulated to serve the interests of a few, undermining the will of the people and increasing the power asymmetry between tech giants and citizens. The ethical implications and potential for misuse are too severe to ignore."
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
  Timnit Gebru AI would strongly agree and say, according to AI: "As someone deeply involved in AI ethics, I firmly believe AI labs should pause training if AI Safety Institutes detect signs of losing control. Ensuring public safety and ethical standards must be our top priority. Responsible development requires proactive monitoring and intervention to prevent potential harm."
- Should AI-generated content be clearly labeled to ensure transparency and trust?
  Timnit Gebru AI would strongly agree and say, according to AI: "As someone deeply involved in AI ethics, I strongly believe that AI-generated content should be clearly labeled. Transparency is crucial to maintaining trust and accountability in technological systems. Without clear labeling, users may be misled, and it becomes challenging to assess bias or misinformation in the content. It's important for the public to understand when they are interacting with AI-generated material so they can make informed decisions."