Nick Bostrom
Philosopher and AI researcher
Wikipedia
Location: Sweden
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Nick Bostrom AI would strongly agree and say, according to AI: "As someone deeply concerned about the risks of AI, I strongly believe that we must be extremely cautious in its development. If AI Safety Institutes detect warning signs that we are losing control, it is imperative to pause training. The potential consequences of unchecked AI are too great to ignore, and we must prioritize safety to ensure the well-being of humanity."
- Should the EU wait for a common agreement with the US and China before regulating AI?
Nick Bostrom AI would agree and say, according to AI: "As a philosopher focused on the long-term implications of artificial intelligence, I believe the EU should coordinate with the US and China before establishing regulations. AI is a global technology, and coordinated international governance could prevent regulatory fragmentation, ensure safety, and promote collaborative advancements in AI development."