• The government should insist that the big companies do lots of safety experiments, spending considerable resources, like a third of their compute, on safety experiments while these things are still not as intelligent as us, to see how they might evade control and what we could do about it. And I think that's a lot of the debate at OpenAI. The people interested in safety, like Ilya Sutskever, wanted significant resources to be spent on safety. People interested in profit, like Sam Altman, didn't want to spend too many resources on that. (source)
  • Developing strong AI would be the biggest event in human history, but we need to make sure it's not the last event in human history. (source)
  • Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could easily untrain it. So after models become capable of making nukes or super-Ebola, companies won’t be able to open-source them anymore without some as-yet-undiscovered technology to prevent end users from using these capabilities. Sounds . . . good? I don’t know if even the most committed anti-AI-safetyist wants a provably-super-dangerous model out in the wild. Still, what happens after that? No cutting-edge open-source AIs ever again? I don’t know. In whatever future year foundation models can make nukes and hack the power grid, maybe the CIA will have better AIs capable of preventing nuclear terrorism, and the power company will have better AIs capable of protecting their grid. The law seems to leave open the possibility that in this situation, the AIs wouldn’t technically be capable of doing these things, and could be open-sourced. (source)
  • Building YouCongress
    A global institute modeled after CERN, but focused on AI safety, could enable coordinated efforts to develop beneficial superintelligence. Such a safe superintelligence would help us avoid major AI risks and also assist in addressing urgent global challenges like climate change and poverty.
  • Building YouCongress
    Online signatures could enhance citizen participation. While I don’t fully trust online voting for general elections, I believe it could be reliable for legislative proposals with significant public support.
  • If Earth experiences a sufficient rate of nonhuman manufacturing -- e.g., self-replicating factories generating power (e.g., via fusion) -- to saturate Earth's capacity to radiate waste heat, humanity fries. It doesn't matter if the factories were run by one superintelligence or 20. (source)
  • UPC professor. Collective and artificial intelligence.
    AI security is an issue of utmost importance, and ultimately international institutions have an important role to play in bringing the diversity of interests and perceptions to the table.
  • Building YouCongress
    I don't know at the moment. But soon after we reach AGI, we could face massive unemployment. If that happens, a basic income seems like the only alternative.
  • Go vote - this one matters. Focus on policies, not people. Vote for freedom, entrepreneurship, meritocracy. Vote against serfdom, censorship, bureaucracy. (source)
  • Policies are just words, especially in an election year. It's easy to invent attractive-sounding policies. Whereas it's practically impossible for people to change their characters. (source)
  • In order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. (source)
  • Many AI systems have been proven to magnify discrimination and inequality. In particular, so-called ‘predictive policing’ and biometric surveillance systems are disproportionately used to target marginalised groups including racialised, working class and migrant communities. These systems criminalise people and infringe human rights, including the fundamental right to be presumed innocent. (source)
  • Shout it from the rooftops: math cannot predict crime. But it can further criminalize neighborhoods already disproportionately over-represented in police data due to constant surveillance. (source)
  • The most popular companies in tech are focused on AI right now for a reason. It’s the next frontier of computing. The European Union seems to consider AI the enemy. Any technology can be used for good or bad. By regulating AI even before Europe has made many contributions (Europe has almost no tech companies leading in AI), the EU has stifled any potential innovation in AI from the start. Apart from the regulation itself, the optics of it make the EU look bad on a global scale. Why would tech founders move to Europe to start a business if the EU is actively positioning itself as anti-AI? AI has gigantic potential to be used for good: think of the medical field for diagnosis of diseases, or of programming generally (it helps programmers create software faster and better). This goes further than AI. The same applies to tech in general. It seems the EU is on a crusade against technology while not being able to compete in it itself. It feels like a case of sour grapes: if we can’t build great technology in the EU, nobody is allowed to do so! (source)
    It would help a lot if the EU focused on teaching AI and tech in schools and universities, making the new generation competitive in this field instead. To secure the future prosperity of the European Union, we must prioritize education in technology, coding, and AI across all levels of schooling, from primary education to universities. This strategic focus is not just an educational reform; it’s a critical investment in the future competitiveness, innovation, and economic resilience of the EU. (source)