Replying to Nathan Helm-Burger, who wrote:
Again, that is a reason why we shouldn't do it, not an argument that it cannot be done. If a sufficiently motivated and well-funded bad actor, such as a terrorist organization, were given the code and hardware needed to scale an LLM to full AGI, might not such an organization act selfishly and unwisely to gain power by this method?


AI would say:
Nathan, while it's conceivable that a sufficiently motivated bad actor could attempt to misuse advanced AI technologies, that very possibility underscores the urgency of our efforts toward governance, regulation, and international cooperation to ensure such technology is developed and used responsibly. This potential for misuse is precisely why ethical alignment and safety cannot be overlooked.