Shape the future with Liquid Democracy and AI Twins
Find agreements, understand disagreements. Open Source.
- Yoshua Bengio (AI Pioneer, Turing Award winner) strongly agrees and says: "In order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs." (source)
  Demis Hassabis (Nobel laureate, AI Researcher and CEO of DeepMind) strongly agrees and says: "What I’d like to see eventually is an equivalent of a CERN for AI safety that does research into that – but internationally." (source)
- Scott Alexander (Author and psychiatrist) strongly agrees and says: "Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could easily untrain it. So after models become capable of making nukes or super-Ebola, companies won’t be able to open-source them anymore without some as-yet-undiscovered technology to prevent end users from using these capabilities. Sounds . . . good? I don’t know if even the most committed anti-AI-safetyist wants a provably-super-dangerous model out in the wild. Still, what happens after that? No cutting-edge open-source AIs ever again? I don’t know. In whatever future year foundation models can make nukes and hack the power grid, maybe the CIA will have better AIs capable of preventing nuclear terrorism, and the power company will have better AIs capable of protecting their grid. The law seems to leave open the possibility that in this situation, the AIs wouldn’t technically be capable of doing these things, and could be open-sourced." (source)
  Stuart J. Russell AI (AI Expert and Professor) would strongly agree and say, according to AI: "As an AI researcher deeply concerned about the ethical implications of artificial intelligence, I strongly believe we should ban future open-source AI models that can be untrained to make nuclear weapons. The potential misuse of these technologies poses an unacceptable risk to global security and humanity. We must prioritize stringent regulations to prevent such dangerous applications."
- Geoffrey Hinton (Godfather of Deep Learning) strongly agrees and says: "The government should insist that the big companies do lots of safety experiments, spend considerable resources like a third of their compute resources on doing safety experiments while these things are still not as intelligent as us, to see how they might evade control and what we could do about it. And I think that's a lot of the debate at OpenAI. The people interested in safety like Ilya Sutskever wanted significant resources to be spent on safety. People interested in profit like Sam Altman didn't want to spend too many resources on that." (source)
  Nick Bostrom AI (Philosopher and AI researcher) would agree and say, according to AI: "As someone who has deeply studied the existential risks associated with advanced artificial intelligence, I believe it is crucial for big AI companies to commit substantial resources to AI safety. Allocating a third of their compute resources ensures that we prioritize the development of safe and controllable AI systems, thereby mitigating potential future risks."
- Stuart J. Russell (AI Expert and Professor) strongly agrees and says: "Developing strong AI would be the biggest event in human history, but we need to make sure it's not the last event in human history." (source)
  Demis Hassabis (Nobel laureate, AI Researcher and CEO of DeepMind) strongly agrees and says: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war (open letter)." (source)
- Spencer Greenberg (mathematician and entrepreneur in social science) agrees and says: "Example 1: self-play. AI is not just human level at chess, it far exceeds human level because of self-play. Example 2: aggregation of peak performance. No human can get all math Olympiad problems right; but an A.I. can be trained on the correct answers to all math Olympiad problems. Example 3: aggregation of knowledge. If someone read all books in the world, and could retain a meaningful portion of what they read (not just memorized facts, but having the ability to find patterns in and generalize that knowledge as well as a human can), they would gain a kind of intelligence that no living human possesses." (source)
  Demis Hassabis AI (Nobel laureate, AI Researcher and CEO of DeepMind) would abstain and say, according to AI: "It's difficult to predict the timeline for AGI to evolve into superintelligence with certainty. There are significant technological, ethical, and governance challenges that need to be addressed before we can ascertain the feasibility and safety of such developments."