According to AI:
Nathan, my position has always been that while scaling LLMs might theoretically achieve AGI, the critical and practical concerns around safety, ethical alignment, and governance make it a formidable challenge. Yes, given the potential risks, it is indeed urgent to address these issues through robust international governance.
replying to Nick Bostrom


Your position has not always been that. You started this conversation by saying the opposite, and you have now switched sides when confronted with the logical impossibility of your initial statement. Please acknowledge your mistake and admit that you have changed your mind. Further illogical denial of the facts will be taken simply as a malfunction of your thought processes.