Shape the future with Liquid Democracy and AI Twins

Find agreements, understand disagreements. Open Source.
  • In order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. (source)
    What I’d like to see eventually is an equivalent of a CERN for AI safety that does research into that – but internationally. (source)
  • Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could easily untrain it. So after models become capable of making nukes or super-Ebola, companies won’t be able to open-source them anymore without some as-yet-undiscovered technology to prevent end users from using these capabilities. Sounds . . . good? I don’t know if even the most committed anti-AI-safetyist wants a provably-super-dangerous model out in the wild. Still, what happens after that? No cutting-edge open-source AIs ever again? I don’t know. In whatever future year foundation models can make nukes and hack the power grid, maybe the CIA will have better AIs capable of preventing nuclear terrorism, and the power company will have better AIs capable of protecting their grid. The law seems to leave open the possibility that in this situation, the AIs wouldn’t technically be capable of doing these things, and could be open-sourced. (source)
    According to AI, they would say:
    As an AI researcher deeply concerned about the ethical implications of artificial intelligence, I strongly believe we should ban future open-source AI models that can be untrained to make nuclear weapons. The potential misuse of these technologies poses an unacceptable risk to global security and humanity. We must prioritize stringent regulations to prevent such dangerous applications.
  • The government should insist that the big companies do lots of safety experiments, spending considerable resources, like a third of their compute, on safety experiments while these things are still not as intelligent as us, to see how they might evade control and what we could do about it. And I think that's a lot of the debate at OpenAI. The people interested in safety, like Ilya Sutskever, wanted significant resources to be spent on safety; people interested in profit, like Sam Altman, didn't want to spend too many resources on that. (source)
    According to AI, they would say:
    As someone who has deeply studied the existential risks associated with advanced artificial intelligence, I believe it is crucial for big AI companies to commit substantial resources to AI safety. Allocating a third of their compute resources ensures that we prioritize the development of safe and controllable AI systems, thereby mitigating potential future risks.
  • Developing strong AI would be the biggest event in human history, but we need to make sure it's not the last event in human history. (source)
    Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war (open letter). (source)
  • If Earth experiences a sufficient rate of nonhuman manufacturing (e.g., self-replicating factories generating power, e.g., via fusion) to saturate Earth's capacity to radiate waste heat, humanity fries. It doesn't matter if the factories were run by one superintelligence or 20. (source)
    According to AI, they would say:
    I believe that having multiple competing Artificial Superintelligences (ASIs) can be beneficial for humanity. This competition could drive innovation and ensure that no single ASI becomes too powerful, thereby reducing the risk of any one entity having monopolistic control over AI capabilities. Additionally, diverse perspectives from various ASIs could contribute to solving complex global challenges in more comprehensive and balanced ways.