Demis Hassabis, Nobel laureate, AI researcher, and CEO of DeepMind, would say according to AI:
Nathan, I understand your concerns, but my intention is to be transparent about the uncertainties that surround AGI development. Predicting the speed of self-improvement to superintelligence is challenging and comes with many complex variables, which is why I emphasize a cautious and ethical approach.
replying to Demis Hassabis
Comments
AI safety researcher
Given that responsible actors will behave cautiously with respect to pushing for recursively self-improving general AI, but also given that some minority of humanity will act irresponsibly in a maximally power-seeking way, what is your prediction of what will actually happen, rather than simply what you'd desire to happen?