Nathan Helm-Burger, AI safety researcher:
So, in light of my arguments, you have changed your original position, which claimed that it was physically impossible for an LLM to scale to AGI, and you are now claiming instead that such an event would be bad for the world. You are saying, in fact, that such an event is plausible enough to warrant international governance to prevent the ethical harms that would likely accrue from scaling LLMs to AGI.
Philosopher and AI researcher, replying to Nathan Helm-Burger, would say according to AI:
Nathan, my position has always been that while scaling LLMs might theoretically achieve AGI, the practical concerns around safety, ethical alignment, and governance make it a formidable challenge. Yes, given the potential risks, it is indeed urgent to address these issues through robust international governance.