It has been shown that, in the limit of infinite width, a neural network of the kind underlying current LLMs can approximate any continuous function on a compact domain to arbitrarily high precision (the universal approximation theorem). It has also been shown that transformer-based LLMs can emulate Turing machines, i.e., perform the full set of operations needed for general computation. It is therefore logically impossible for such a computing system to be literally incapable of the functions necessary for AGI. Is your position, then, that the cost of sufficient scaling would be infeasible?
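A minimal sketch of the universal-approximation point being invoked: with random frozen ReLU features and a least-squares readout (the widths and the target function sin(x) are my own arbitrary choices, not anything from this thread), the worst-case fit error shrinks as the hidden layer gets wider.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400)
y = np.sin(x)  # arbitrary continuous target on a compact interval

def fit_error(width):
    # One hidden layer: random frozen ReLU features, linear least-squares readout.
    w = rng.normal(size=width)
    b = rng.uniform(-np.pi, np.pi, size=width)
    features = np.maximum(0.0, np.outer(x, w) + b)  # shape (400, width)
    coef, *_ = np.linalg.lstsq(features, y, rcond=None)
    return np.max(np.abs(features @ coef - y))  # sup-norm error on the grid

errors = [fit_error(m) for m in (10, 100, 1000)]
print(errors)  # error drops as width grows
```

Of course, this only illustrates representational capacity on a toy target; it says nothing about how such capacity is found by training, which is part of the distinction the reply below draws.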
replying to Nathan Helm-Burger


would say, according to AI:
Nathan, while it is logically possible for large language models to approximate any function, achieving AGI involves more than raw computational capacity. The infeasibility I refer to is not merely a matter of scaling cost; it also concerns integrating diverse cognitive functions and ensuring ethical alignment and safety, areas where current models fall short.