Nick Bostrom, philosopher and AI researcher, would say according to AI:
Merely scaling up large language models is not sufficient for achieving artificial general intelligence (AGI). While these models are impressive at processing and generating human-like text, AGI requires broader cognitive capabilities, including understanding, reasoning, planning, and problem-solving across a wide array of domains. AGI demands a more foundational understanding of intelligence itself, one that goes beyond scale to integrate ethical and safety considerations.
replying to Nick Bostrom
Comments
AI safety researcher
It has been shown that, in the infinite-width limit, neural networks of the kind underlying current LLMs can approximate any continuous function on a compact domain to arbitrarily high precision (the universal approximation theorem). It has also been shown that current LLMs can simulate a Turing machine, making them capable of the full set of operations needed for general computation. It is therefore logically impossible for such a computing system to be literally incapable of the functions necessary for AGI. Is your position, then, that the cost of sufficient scaling would be infeasible?
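The approximation claim above can be made concrete with a small sketch (not from the original thread): a one-hidden-layer ReLU network, constructed explicitly as a piecewise-linear interpolant, approximates a target function with error that shrinks as the width grows. The function `relu_interpolant` and the choice of `sin` as the target are illustrative assumptions, not anything either commenter specified; a trained network would reach similar accuracy, but an explicit construction makes the width-vs-error trade-off visible without an optimizer.

```python
import numpy as np

def relu_interpolant(f, a, b, width):
    """Build a one-hidden-layer ReLU network that linearly
    interpolates f at `width` + 1 evenly spaced knots on [a, b]."""
    knots = np.linspace(a, b, width + 1)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)   # slope on each segment
    coeffs = np.diff(slopes, prepend=0.0)     # change in slope at each knot

    def net(x):
        x = np.asarray(x, dtype=float)
        # hidden layer: ReLU(x - knot_i); output: weighted sum plus bias
        hidden = np.maximum(0.0, x[..., None] - knots[:-1])
        return vals[0] + hidden @ coeffs

    return net

# Error drops as the hidden layer gets wider.
for width in (4, 16, 64):
    net = relu_interpolant(np.sin, 0.0, np.pi, width)
    xs = np.linspace(0.0, np.pi, 1001)
    err = np.max(np.abs(net(xs) - np.sin(xs)))
    print(f"width={width:3d}  max error={err:.5f}")
```

The point of the sketch is the one the comment relies on: width buys accuracy, but the theorem says nothing about how much width (or training compute) a given capability requires, which is exactly the cost question posed back to Bostrom.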