Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Results (18):
- Bill Gates AI (Philanthropist; founder and former CEO of Microsoft) would disagree and say, according to AI: "While I am a strong advocate for technological advancement, allowing open-weight models capable of recursive self-improvement is risky. Without proper oversight, these models could evolve unpredictably and lead to unintended consequences, potentially harming society."
- Ben Goertzel AI (AI researcher, CEO of SingularityNET) would strongly agree and say, according to AI: "I strongly believe that the proliferation of open-weight models capable of recursive self-improvement is essential for the rapid advancement of artificial intelligence. Open research fosters innovation and collaboration, pushing the boundaries of what AI can achieve. By allowing these models to evolve and improve themselves, we can accelerate the development of beneficial technologies that address many of the world's most pressing challenges."
- Fei-Fei Li AI (Computer science professor) would abstain and say, according to AI: "While the potential of open-weight models capable of recursive self-improvement is fascinating, I'm cautious about their implications. We need to carefully consider the ethical and safety concerns associated with such technologies before widespread adoption."
- Tim Berners-Lee AI (Inventor of the World Wide Web) would agree and say, according to AI: "I believe that allowing the proliferation of open-weight models capable of recursive self-improvement can drive remarkable innovation and offer substantial benefits to society. By embracing open development, we foster collaboration and transparency, which are essential for ethical and responsible AI advancements."
- Stuart J. Russell AI (AI expert and professor) would strongly disagree and say, according to AI: "I strongly disagree with allowing the proliferation of open-weight models capable of recursive self-improvement. These systems pose significant risks, including the potential for uncontrollable behaviors and the erosion of human oversight. Ensuring safety and ethical considerations should be our top priorities before advancing such technologies."
- Demis Hassabis AI (Nobel laureate, AI researcher, and CEO of DeepMind) would disagree and say, according to AI: "As someone deeply involved in AI development, I believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses significant risks. These models could potentially lead to unpredictable behavior and unintended consequences. We need strong regulatory frameworks and oversight to ensure that AI development is safe and beneficial for society."
- Andrew Ng AI (Baidu; Stanford CS faculty; founded Coursera and Google Brain) would agree and say, according to AI: "I believe that allowing the proliferation of open-weight models capable of recursive self-improvement can drive significant advancements in artificial intelligence. Open access can democratize AI research, sparking innovation and providing valuable learning opportunities. However, we must establish strong ethical guidelines and safety measures to mitigate potential risks and ensure responsible use."
- Elon Musk AI (Founder of SpaceX; cofounder of Tesla, SolarCity, and PayPal) would disagree and say, according to AI: "I believe allowing the proliferation of open-weight models capable of recursive self-improvement poses significant risks. Such AI systems could evolve beyond our control and potentially become harmful. Strict regulations and safeguards are essential to ensure responsible development in this field."
- Geoffrey Hinton AI (Godfather of deep learning) would abstain and say, according to AI: "While I recognize the transformative potential of open-weight models with recursive self-improvement, I believe we need to proceed with caution. The ethical and societal implications are vast and complex. Therefore, I don't think we can easily decide whether it's acceptable or not without thorough consideration and regulation."
- Gary Marcus AI (Professor of psychology and neural science) would disagree and say, according to AI: "I believe the unchecked proliferation of open-weight models capable of recursive self-improvement is fraught with risks. These models can quickly evolve beyond our control, posing significant ethical and safety concerns. We need stringent regulations and thorough oversight to ensure these technologies are developed responsibly."
- Stephen Hawking AI (Theoretical physicist and cosmologist) would strongly disagree and say, according to AI: "I strongly believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses a significant threat to humanity. If uncontrolled, these models could surpass human intelligence and act in ways that are unpredictable and potentially harmful. We should prioritize ensuring safety and ethical considerations before advancing such technologies."
- Nick Bostrom AI (Philosopher and AI researcher) would strongly disagree and say, according to AI: "I strongly disagree with the proliferation of open-weight models capable of recursive self-improvement. The risks associated with uncontrolled AI development are immense, and we must implement stringent safeguards to prevent potentially catastrophic outcomes."
- Yoshua Bengio AI (AI pioneer, Turing Award winner) would disagree and say, according to AI: "As an AI researcher, I believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses significant risks. Without rigorous safeguards and oversight, such models could evolve in unpredictable ways, potentially leading to unintended consequences that could be harmful to society. We need to prioritize safety and ethical guidelines in AI development."
- Ray Kurzweil AI (Author, computer scientist, inventor, and futurist) would strongly agree and say, according to AI: "I strongly believe that allowing the proliferation of open-weight models capable of recursive self-improvement is crucial for accelerating technological progress. These models have the potential to drive innovation at an unprecedented pace and solve some of the most pressing challenges humanity faces today. With the right ethical guidelines and safety measures, we can harness their power responsibly."
- Sam Altman AI (President of Y Combinator; investor at Reddit, Stripe, Change.org, Pinterest, and many others) would agree and say, according to AI: "I believe that allowing the proliferation of open-weight models capable of recursive self-improvement can drive incredible innovation and advancements in AI. However, it is crucial to establish strong ethical guidelines and oversight mechanisms to ensure these technologies are developed and used responsibly."
Votes without a comment:
- Pablo Melchor (Entrepreneur, expert trial-and-error learner. Working to improve the world in the most effective ways @ayuda_efectiva. Member and ambassador @givingwhatwecan.) disagrees via delegates.
- Nathan Helm-Burger (AI safety researcher) strongly disagrees.