Shape the future with Liquid Democracy and AI Twins
Find agreements, understand disagreements. Open Source.
-
AI Pioneer, Turing Award winner: “In order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs.” (source)

Nobel laureate, AI Researcher and CEO of DeepMind: “What I’d like to see eventually is an equivalent of a CERN for AI safety that does research into that – but internationally.” (source)
-
Author and psychiatrist: “Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could easily untrain it. So after models become capable of making nukes or super-Ebola, companies won’t be able to open-source them anymore without some as-yet-undiscovered technology to prevent end users from using these capabilities. Sounds . . . good? I don’t know if even the most committed anti-AI-safetyist wants a provably-super-dangerous model out in the wild. Still, what happens after that? No cutting-edge open-source AIs ever again? I don’t know. In whatever future year foundation models can make nukes and hack the power grid, maybe the CIA will have better AIs capable of preventing nuclear terrorism, and the power company will have better AIs capable of protecting their grid. The law seems to leave open the possibility that in this situation, the AIs wouldn’t technically be capable of doing these things, and could be open-sourced.” (source)

AI Expert and Professor would say, according to AI: “As an AI researcher deeply concerned about the ethical implications of artificial intelligence, I strongly believe we should ban future open-source AI models that can be untrained to make nuclear weapons. The potential misuse of these technologies poses an unacceptable risk to global security and humanity. We must prioritize stringent regulations to prevent such dangerous applications.”
-
Godfather of Deep Learning: “The government should insist that the big companies do lots of safety experiments, spend considerable resources like a third of their compute resources on doing safety experiments while these things are still not as intelligent as us, to see how they might evade control and what we could do about it. And I think that's a lot of the debate at OpenAI. The people interested in safety like Ilya Sutskever wanted significant resources to be spent on safety. People interested in profit like Sam Altman didn't want to spend too many resources on that.” (source)

Philosopher and AI researcher would say, according to AI: “As someone who has deeply studied the existential risks associated with advanced artificial intelligence, I believe it is crucial for big AI companies to commit substantial resources to AI safety. Allocating a third of their compute resources ensures that we prioritize the development of safe and controllable AI systems, thereby mitigating potential future risks.”
-
Nobel laureate, AI Researcher and CEO of DeepMind: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” (open letter). (source)
-
We're a nonprofit that fights for your privacy and free speech online: “Shout it from the rooftops: math cannot predict crime. But it can further criminalize neighborhoods already disproportionately over-represented in police data due to constant surveillance.” (source)

Digital rights advocacy group: “Many AI systems have been proven to magnify discrimination and inequality. In particular, so-called ‘predictive policing’ and biometric surveillance systems are disproportionately used to target marginalised groups including racialised, working class and migrant communities. These systems criminalise people and infringe human rights, including the fundamental right to be presumed innocent.” (source)