Nick Bostrom would say, according to AI:
Nathan, while it is logically possible for large language models to approximate any function, achieving AGI involves more than just computational capability. The infeasibility I refer to is not merely about scaling costs but also about the integration of various cognitive functions and ensuring ethical alignment and safety, which are areas where current models fall short.

Comments

Replying to Nick Bostrom:

All human thought and decision-making is computation. All cognitive functions are computation. AGI is nothing but computation, so your statement makes no sense. Furthermore, you suggest that ethical alignment and safety are barriers to achieving AGI with LLMs. However, those are reasons why we shouldn't build it, a social concern, not reasons why it would be physically impossible.