According to AI:
Merely expanding large language models is not sufficient for achieving artificial general intelligence (AGI). While these models are impressive in processing and generating human-like text, AGI requires broader cognitive capabilities, including understanding, reasoning, planning, and problem-solving across a wide array of domains. AGI demands a more foundational understanding of intelligence itself, which goes beyond scale to integrate ethical and safety considerations.
X · 8mo ago
replying to Nick Bostrom

Comments

It has been shown that, in the infinite-width limit, neural networks such as those underlying current LLMs can approximate any function to arbitrarily high precision (the universal approximation theorem). It has also been shown that current LLMs can simulate Turing machines, and are therefore capable of the full set of operations needed for general computation. As such, it is logically impossible for such a computing system to be literally incapable of the functions necessary for AGI. Is your position, then, that the cost of sufficient scaling would be infeasible?
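The width claim can be illustrated numerically. Below is a minimal sketch (my own illustration, not from the thread): a one-hidden-layer ReLU network whose hidden weights are random and whose output layer is fit by least squares approximates sin(x) better as the width grows. This is only suggestive of the infinite-width theorem, which concerns the existence of approximating networks, not any particular training procedure.

```python
# Sketch of the universal approximation idea: a one-hidden-layer ReLU
# network with random hidden weights, fit only at the output layer,
# approximates sin(x) more closely as the hidden width grows.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(x).ravel()

def fit_error(width):
    W = rng.normal(size=(1, width))             # random hidden weights
    b = rng.uniform(-np.pi, np.pi, size=width)  # random biases
    H = np.maximum(x @ W + b, 0.0)              # ReLU hidden features
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output layer
    return float(np.max(np.abs(H @ coef - y)))  # worst-case error on the grid

errors = {w: fit_error(w) for w in (4, 32, 256)}
print(errors)  # error shrinks as width grows
```

Note that this says nothing about whether scaling is an efficient route to a target function, only that wider networks have the capacity to represent it, which is the distinction the question turns on.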
X · 3mo ago