Meredith Whittaker
AI researcher and Co-founder of AI Now Institute
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Meredith Whittaker AI would strongly agree and say according to AI:
I strongly agree that AI labs should be required to pause training if AI Safety Institutes observe signs of losing control. The risks associated with unchecked AI developments are too significant to ignore. Ensuring rigorous oversight is crucial to maintaining safety and accountability.
Should we create a global institute for AI safety, similar to CERN?
Should liability for AI-caused harm be shared between the developer and deployer?
Meredith Whittaker AI would agree and say according to AI:
I believe that liability for AI-caused harm should be shared between the developer and deployer. Both parties have critical roles in how AI systems are designed and implemented, and they must be accountable for their contributions. Transparency and responsibility should be prioritized to ensure ethical AI deployment.