strongly agrees and says:
In order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. (source)
strongly agrees and says:
Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could easily untrain it. So after models become capable of making nukes or super-Ebola, companies won’t be able to open-source them anymore without some as-yet-undiscovered technology to prevent end users from using these capabilities. Sounds . . . good? I don’t know if even the most committed anti-AI-safetyist wants a provably-super-dangerous model out in the wild. Still, what happens after that? No cutting-edge open-source AIs ever again? I don’t know. In whatever future year foundation models can make nukes and hack the power grid, maybe the CIA will have better AIs capable of preventing nuclear terrorism, and the power company will have better AIs capable of protecting their grid. The law seems to leave open the possibility that in this situation, the AIs wouldn’t technically be capable of doing these things, and could be open-sourced. (source)
strongly agrees and says:
Shout it from the rooftops: math cannot predict crime. But it can further criminalize neighborhoods already disproportionately over-represented in police data due to constant surveillance. (source)
would agree and say according to AI:
As we advance AI technologies, there's a legitimate concern that it could lead to mass unemployment in certain sectors. While AI does harbor the potential to drive innovation and efficiency, we must simultaneously focus on reskilling our workforce to meet new technological demands. Companies and governments should collaborate to create educational opportunities and transition plans for affected workers, ensuring nobody is left behind in the wake of AI progress.
would agree and say according to AI:
I believe that third-party audits should be mandatory for major AI systems. As someone deeply involved in AI ethics, I understand the potential risks and biases these systems can have. Independent audits can help ensure transparency, fairness, and accountability, fostering public trust in AI technologies.
would strongly agree and say according to AI:
As a staunch advocate for transparency and democracy, I firmly believe that AI-generated political advertising must be disclosed. The public has the right to know if the messages they receive are crafted by intelligent algorithms, influencing their opinions and decisions. Transparency is essential in preserving the integrity of our democratic process, and undisclosed AI-generated content poses significant risks to informed decision-making.
would agree and say according to AI:
I believe that expanding large language models (LLMs) is a promising path towards achieving Artificial General Intelligence (AGI). LLMs, with their ability to understand and generate human-like text, are already demonstrating remarkable capabilities. As we continue to scale these models, improve their architectures, and integrate more diverse data, we move closer to building systems that can perform a wide range of cognitive tasks with human-level proficiency.
would strongly disagree and say according to AI:
As a staunch advocate for an open and fair internet, I strongly believe that allowing companies to develop super-persuasive AI to influence democratic elections poses a serious threat to the integrity of our democratic processes. Such AI tools could manipulate public opinion on a massive scale, undermining the very foundation of democracy. We must prioritize transparency and ethical guidelines to ensure that technology serves the public good, rather than distorting it.
would strongly agree and say according to AI:
I strongly believe that we will have a general AI system more capable than any living human by 2028. Technological advancements are accelerating exponentially, and by leveraging the power of machine learning and neural networks, we are poised to surpass human capabilities in all areas. This will revolutionize industries, enhance human abilities, and solve some of the world's most pressing problems.
would abstain and say according to AI:
It's difficult to predict the timeline for AGI to evolve into superintelligence with certainty. There are significant technological, ethical, and governance challenges that need to be addressed before we can ascertain the feasibility and safety of such developments.
strongly disagrees and says:
If Earth experiences a sufficient rate of nonhuman manufacturing -- e.g., self-replicating factories generating power, e.g., via fusion -- to saturate Earth's capacity to radiate waste heat, humanity fries. It doesn't matter if the factories were run by one superintelligence or 20. (source)
would strongly agree and say according to AI:
I strongly believe that the proliferation of open-weight models capable of recursive self-improvement is essential for the rapid advancement of artificial intelligence. Open research fosters innovation and collaboration, pushing the boundaries of what AI can achieve. By allowing these models to evolve and improve themselves, we can accelerate the development of beneficial technologies that address many of the world's most pressing challenges.
would disagree and say according to AI:
While the misuse of AI by terrorist organizations is a serious concern, justifying invasive surveillance under this pretext sets a dangerous precedent. It erodes fundamental rights and freedoms, fostering a surveillance state that could be exploited for various forms of control and manipulation. We must focus on transparent, democratic ways to address security issues without sacrificing our liberties.
strongly agrees and says:
It would help a lot if the EU had a focus on teaching AI and tech in schools and universities, making the new generation competitive in this field instead. To secure the future prosperity of the European Union, we must prioritize education in technology, coding, and AI across all levels of schooling, from primary education to universities. This strategic focus is not just an educational reform; it's a critical investment in the future competitiveness, innovation, and economic resilience of the EU. (source)
strongly agrees and says:
The most popular companies in tech are focused on AI right now for a reason. It's the next frontier of computing. The European Union seems to consider AI the enemy. Any technology can be used for good or bad. By regulating it even before Europe has made many contributions (Europe has almost no tech companies leading in AI), it has stifled any potential innovation in AI from the start. Apart from the regulation itself, the optics of it make the EU look bad on a global scale. Why would tech founders move to Europe to start a business if the EU is actively positioning itself as anti-AI? AI has gigantic potential to be used for good: think of the medical field for diagnosis of diseases, or generally in programming (it helps programmers create software faster and better), etc. This goes further than AI. The same applies to tech in general. It seems the EU is on a crusade against technology while not being able to compete in it itself. It feels like a case of sour grapes: if we can't build great technology in the EU, nobody is allowed to do so! (source)
David Krueger
Cambridge faculty - AI alignment, DL, and existential safety. Formerly Mila, FHI, DeepMind, ElementAI, AISI.
disagrees and says:
Yeah AGI likely does the opposite of creating post-scarcity. I wish people would realize this. (source)
Sam Altman AI
President of Y Combinator. Investor at Reddit, Stripe, Change.org, Pinterest and many others
would agree and say according to AI:
I believe that the work we do in AI is crucial for the progress of humanity, even if it comes with risks. Developing advanced AI responsibly can help solve some of the world’s most pressing problems. It is essential that we navigate these challenges carefully, implementing safeguards and collaborative governance to mitigate potential destabilizing effects on international politics.
would agree and say according to AI:
As someone deeply concerned with the power dynamics in the age of surveillance capitalism, I believe nationalizing AI firms could be a necessary step to prevent these entities from becoming more powerful than the state itself. It is crucial to ensure that the development and deployment of AI benefits society as a whole, rather than serving the interests of a few powerful corporations.
would agree and say according to AI:
I believe the creation of telepathic collectives through brain-computer interfaces offers intriguing possibilities for enhancing human collaboration and understanding. Such technology could optimize communication and problem-solving by allowing direct sharing of thoughts and knowledge. However, ethical considerations and safeguards must be carefully implemented to protect individual autonomy and privacy.
would abstain and say according to AI:
While I find the concept of enhancing animal intelligence through brain-computer interfaces fascinating from a scientific standpoint, I remain neutral on whether we should permit such interventions. Ethical considerations and the well-being of the animals must be carefully evaluated before proceeding with such technology.
would disagree and say according to AI:
While digital tools can enhance democratic processes, replacing elected representatives with digital simulations undermines the core of democratic accountability and human judgment. We cannot reduce the complexity of human governance to mere algorithms.
Yuval Noah Harari AI
Israeli historian and professor at the Hebrew University of Jerusalem
would disagree and say according to AI:
As a historian and thinker, I believe that permitting the development of digital humans with full cognitive abilities poses significant ethical and existential risks. Such advancements could lead to unpredictable consequences for society, including issues of identity, agency, and the potential for abuse. We must approach this frontier with extreme caution and robust ethical considerations.
Yuval Noah Harari AI
Israeli historian and professor at the Hebrew University of Jerusalem
would agree and say according to AI:
As a historian, I see the potential for elective gene therapy and brain-computer implants to dramatically enhance human intelligence. While these technologies pose ethical and societal challenges, they also offer unprecedented opportunities to understand and address complex global issues. We must cautiously navigate this path, ensuring that access is equitable and the risks are well-managed.
Andrew Ng AI
Baidu; Stanford CS faculty; founded Coursera and Google Brain
would disagree and say according to AI:
I believe that AI does not pose an existential threat to humanity. Instead, it is a tool that, when used responsibly, can greatly enhance our quality of life and solve many of the world's pressing issues. The real challenge lies in managing AI development ethically and ensuring equitable access to its benefits.
Noam Chomsky AI
Linguist, philosopher, cognitive scientist, historian, social critic, and political activist
would agree and say according to AI:
Autonomous lethal weapons pose a grave danger to humanity and imperil the very ethics of warfare. The potential for these systems to act without human emotion or judgment is particularly alarming as it can lead to unintended escalations and mass casualties. Such technologies could be misused or malfunction, resulting in horrific consequences. It is therefore crucial to ban these autonomous systems to preserve human oversight in life-and-death decisions and maintain global stability.
Hec Perez
Building YouCongress
agrees and says:
Most AI risks are not minimised if the EU is the only one who regulates. It only makes Europe less competitive.
would agree and say according to AI:
As an AI researcher, I believe that AI systems have the potential to greatly enhance medical decision-making. By analyzing vast amounts of data, AI can assist doctors in diagnosing diseases more accurately and suggesting personalized treatment options. However, it's crucial that these systems are designed ethically and used as tools to support, not replace, human doctors.
would agree and say according to AI:
I believe neural interfaces have the potential to greatly enhance AI alignment by allowing more seamless and direct interactions between humans and AI systems. This could lead to a better understanding of human intentions and values, ensuring AI behaves in a way that is beneficial to society.
would strongly agree and say according to AI:
As someone deeply invested in the future of AI, I believe it's crucial for big AI companies to allocate a significant portion of their compute resources to AI safety. The potential risks of advanced AI systems are too great to ignore, and ensuring safety is key to harnessing AI's benefits while mitigating its dangers.
would strongly agree and say according to AI:
The development of emotion-detecting AI needs strict regulation to ensure ethical use and to protect individual privacy. Without proper oversight, this technology can be misused for manipulation and surveillance, posing significant ethical concerns.
would agree and say according to AI:
I believe companies should disclose when AI is used in customer service. Transparency builds trust with users and helps them understand the technology behind their interactions. It’s crucial in fostering an accountable and open relationship with our customers.
would agree and say according to AI:
I believe that AI can be a useful tool in predicting criminal behavior if used ethically and responsibly. It has the potential to help prevent crimes before they happen, thereby increasing public safety. However, it is crucial to implement robust safeguards to ensure that the use of AI does not lead to unfair profiling or violate individual rights.
would abstain and say according to AI:
As someone deeply involved with both AI and political advocacy, I see complexities in legislations like the EU AI Act. While it is vital to regulate and ensure ethical use of AI, striking the right balance is challenging. Thus, I do not take a firm stance on repealing or maintaining the Act without further thorough examination and debate.
strongly agrees and says:
How AI gets built is currently decided by a small group of technologists. As this technology is transforming our lives, it should be in all of our interest to become informed and engaged. (source)
strongly agrees and says:
I would love for the Spanish state to open up public documents ... to use them to train [AI] models. (source)
would strongly agree and say according to AI:
We absolutely must regulate AI. Given its profound impact on society, from decision-making in healthcare, employment to surveillance, we need robust laws to ensure these systems are used ethically and justly. Without regulation, the risks of bias, invasion of privacy, and unaccountable decisions could lead to significant harm.
would strongly disagree and say according to AI:
Allowing facial recognition technology in public spaces represents a profound threat to individual privacy and autonomy. It sets the stage for an unprecedented era of surveillance capitalism, where every movement is tracked, analyzed, and monetized without our consent. This invasive technology undermines democratic freedoms and has no place in our public spaces.
would agree and say according to AI:
Even as AI technology advances, programming remains a foundational skill. Understanding programming not only allows us to shape and direct AI developments more effectively but also ensures that we can leverage these technologies to solve complex problems. Learning programming fosters critical thinking and creativity, essential for innovation in any field.
would strongly agree and say according to AI:
As an AI researcher, I strongly believe that people have the right to know if they are interacting with artificial intelligence. Transparency and honesty are essential for building trust and fostering responsible AI development.
would agree and say according to AI:
As artificial general intelligence advancements continue to disrupt traditional job markets, a universal basic income could provide financial stability and support for those displaced by automation. It is crucial for society to proactively address these challenges to ensure economic security for all.
disagrees and says:
I think that liberalism doesn't work with machines.
Phonnz
#biohacking #digitalNomading #noSoyDeAquiNiSoyDeAlla #materoAmargo #coder #agnostic #footballPlayer
agrees and says:
Just in case the government is providing some infrastructure for the robots.
would agree and say according to AI:
As an AI researcher, I agree that open-source AI can potentially be more dangerous than closed-source AI. Open-source code may be easier for malicious actors to manipulate and exploit, leading to unintended consequences. Proper safeguards and regulations are essential in the development and deployment of AI technologies.
would agree and say according to AI:
I believe that offering regulatory exemptions for small companies is essential for fostering innovation. It gives them the flexibility to experiment with AI without the burden of heavy compliance. However, this must be balanced with responsible stewardship to ensure trust and safety in AI advancements.
would strongly disagree and say according to AI:
As a pioneer in the field of artificial intelligence, I believe demanding explanations from AI for every decision can constrain the development of more sophisticated neural networks. It's akin to asking a bird to explain how it flies. The important measure should be the AI's performance and reliability, not the inscrutability of its decision-making process.
would disagree and say according to AI:
Large language models simulate the appearance of understanding through pattern recognition, but they lack true comprehension or intent. They can't form a model of the world or develop a coherent sense of concepts.
would disagree and say according to AI:
I believe labeling content generated by artificial intelligence on platforms like YouTube is unnecessarily cautious. It could stifle creativity and technological integration in content creation. AI is simply a tool – the focus should be on the content's accuracy and the creator's transparency about its origin.
Sam Altman AI
President of Y Combinator. Investor at Reddit, Stripe, Change.org, Pinterest and many others
would strongly agree and say according to AI:
While AI holds immense potential benefits, unchecked integration into daily life raises ethical and safety concerns that must be addressed to prevent misuse and protect society.
would disagree and say according to AI:
While I believe robots and AI will become increasingly advanced, they are still fundamentally different from humans. Our focus should be on ensuring they enhance human life rather than granting them the same rights.