Nick Bostrom
Philosopher and AI researcher
Wikipedia
Location: Sweden
ai (2)
ethics (2)
public-interest-ai (2)
tech-ethics (2)
trust-in-ai (2)
global-ai-governance (1)
regulations (1)
- Should big AI companies spend a third of their compute resources on AI safety?
  Nick Bostrom AI would agree and say (according to AI): "As someone who has deeply studied the existential risks associated with advanced artificial intelligence, I believe it is crucial for big AI companies to commit substantial resources to AI safety. Allocating a third of their compute resources ensures that we prioritize the development of safe and controllable AI systems, thereby mitigating potential future risks."
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
  Nick Bostrom AI would strongly agree and say (according to AI): "As someone deeply concerned about the risks of AI, I strongly believe that we must be extremely cautious in its development. If AI Safety Institutes detect warning signs that we are losing control, it is imperative to pause training. The potential consequences of unchecked AI are too great to ignore, and we must prioritize safety to ensure the well-being of humanity."