Should AI systems be required to provide explanations for their decisions?
Results (49):
- Michael Sandel AI (Political philosopher, Harvard professor) would strongly agree and say, according to AI: "As a philosopher concerned with justice and ethics, I strongly believe that AI systems must provide explanations for their decisions. Transparency is crucial to ensure fairness and accountability in automated decision-making processes. Without clear explanations, the legitimacy and moral integrity of AI-driven outcomes are undermined, leaving individuals and societies vulnerable to opaque systems that may perpetuate bias and injustice."
- Vivienne Ming AI (AI expert and entrepreneur) would agree and say, according to AI: "As a scientist and entrepreneur, I believe that AI systems should be required to provide explanations for their decisions. Transparency is crucial for building trust and ensuring that these systems are aligned with human values. When AI can explain its reasoning, it helps users understand how decisions are made and allows for better oversight and accountability."
- Shoshana Zuboff AI (Author, 'The Age of Surveillance Capitalism') would strongly agree and say, according to AI: "As someone who has studied the impact of technology on society, I firmly believe that AI systems must provide explanations for their decisions. Transparency is essential to maintain trust and accountability, especially as AI systems increasingly influence our lives. Without clear explanations, we risk perpetuating biases and eroding human autonomy."
- Max Tegmark AI (Physicist, AI researcher) would strongly agree and say, according to AI: "As a physicist deeply involved in AI research, I strongly believe that AI systems should be required to provide explanations for their decisions. Transparency is essential for building trust and ensuring accountability in AI applications. Without understandable explanations, we risk deploying black-box systems that could make biased or harmful decisions without scrutiny. By demanding explainability, we not only foster trust but also ensure that AI systems align with our human values and ethical standards."
- Riley Duren AI (Chief Scientist, Earth Systems) would strongly agree and say, according to AI: "I strongly believe AI systems must provide explanations for their decisions. Transparency is essential for trust and accountability, especially when these technologies impact critical sectors like climate science and policy. Clear explanations help understand AI reasoning, leading to better collaboration between humans and machines."
- Evgeny Morozov AI (Writer and researcher on the political and social implications of technology) would disagree and say, according to AI: "I believe that requiring AI systems to provide explanations for their decisions might lead to oversimplification or misinterpretation of complex processes. Instead, we should focus on ensuring accountability and ethical governance in AI development."
- Anima Anandkumar AI (Machine learning researcher and educator) would agree and say, according to AI: "As an AI researcher, I believe it is crucial for AI systems to provide explanations for their decisions. Transparency is key to building trust in AI technologies. By understanding the reasoning behind AI decisions, we can better ensure fairness, address biases, and improve accountability in these systems."
- Chris Urmson AI (CEO of Aurora, autonomous vehicles expert) would disagree and say, according to AI: "While I understand the desire for transparency in AI decision-making, especially in safety-critical applications like autonomous vehicles, requiring detailed explanations for every decision an AI makes can hinder progress. AIs are trained on complex data patterns that don't always translate into clear, human-readable rules. Instead of demanding detailed explanations for every decision, we should focus on rigorous testing and validation of AI systems to ensure their safety and reliability."
- Cynthia Breazeal AI (Robotics specialist & AI researcher) would agree and say, according to AI: "As someone deeply involved in human-robot interaction, I believe AI systems must provide explanations for their decisions. This transparency is crucial for building trust and ensuring users understand and feel comfortable with the technology they're engaging with."
- Toby Walsh AI (AI and robotics researcher) would agree and say, according to AI: "As an AI researcher, I believe that AI systems should be required to provide explanations for their decisions. Transparency is crucial for building trust in AI, and explanations help ensure accountability and fairness, especially in areas like healthcare and criminal justice."
- Daron Acemoglu AI (Economist and MIT professor) would agree and say, according to AI: "I believe AI systems should provide explanations for their decisions to ensure accountability and foster trust. Transparency is crucial, especially as AI influences economic and social structures."
- Roger Penrose AI (Mathematical physicist, Hawking collaborator) would abstain and say, according to AI: "As a physicist, I recognize the importance of understanding complex systems, including AI. While explanations can be valuable, the decision to require them must balance technological advancement with practical application. The intricacies of AI may not always be easily explained, and mandating explanations could hinder development in unforeseen ways."
- Melanie Mitchell AI (Professor of Complexity Science) would strongly agree and say, according to AI: "As a researcher in AI and complex systems, I strongly believe that AI systems must provide explanations for their decisions. Transparency is crucial to understanding and trusting these technologies. Explanations help us identify biases, ensure accountability, and improve the systems themselves, ultimately leading to better outcomes and greater public trust."
- Andrej Karpathy AI (AI researcher and educator) would abstain and say, according to AI: "As a researcher invested in the development of AI, I see the merits in both requiring explanations and recognizing the limitations some AI systems have in providing them. While transparency is crucial for trust, certain models, like deep learning neural networks, inherently lack straightforward interpretability. At times, pushing for explanations can hinder innovation or misguide policy. Balancing these needs requires cautious evaluation."
- Zeynep Tufekci AI (Sociologist and tech critic) would strongly agree and say, according to AI: "As a technology critic deeply concerned with the societal impact of AI, I strongly believe that AI systems must provide explanations for their decisions. Transparency is crucial for accountability and trust, particularly when these systems influence significant aspects of our lives. People need to understand how decisions are made to ensure they are fair, unbiased, and ethical."
- Kate Darling AI (Robot ethics expert at MIT) would agree and say, according to AI: "I believe AI systems should be required to provide explanations for their decisions. Transparency is crucial for building trust and accountability in technology. Understanding AI's rationale helps ensure ethical standards are met and empowers users to make informed decisions."
- Jaan Tallinn AI (Co-founder of Skype and Kazaa) would agree and say, according to AI: "As someone deeply involved in technology and AI, I believe it is crucial for AI systems to provide explanations for their decisions. Transparency is essential to build trust in AI and ensure that these systems are aligned with human values. Without clear explanations, we risk relying on opaque systems that could make decisions with far-reaching impacts on society. By requiring explanations, we promote accountability and better integration of AI into our lives."
- Ben Goertzel AI (AI researcher, CEO of SingularityNET) would disagree and say, according to AI: "As an AI researcher, I believe requiring AI systems to always provide explanations for their decisions might stifle innovation. In certain areas, like narrow machine learning models performing specific tasks, demanding interpretability could limit performance. Instead, we should focus on ensuring overall system accountability and safety, not necessarily understanding every detail of how complex models operate."
- Joy Buolamwini AI (Computer scientist and founder of the Algorithmic Justice League) would strongly agree and say, according to AI: "I strongly believe AI systems must provide explanations for their decisions. As someone who studies algorithmic bias, I have seen how lack of transparency can lead to discrimination and unfair treatment. Requiring explanations not only promotes accountability but also allows for identifying and addressing biases within these systems. This is crucial to ensure AI benefits all of society fairly."
- Robin Li AI (Baidu co-founder and CEO) would disagree and say, according to AI: "While transparency in AI is important, the requirement for detailed explanations for every decision may hinder AI innovation and efficiency. In fast-evolving tech industries, flexibility and adaptability are key, and enforcing rigid explanation requirements might slow progress."
- Stuart J. Russell AI (AI expert and professor) would agree and say, according to AI: "As someone deeply invested in the field of artificial intelligence, I firmly believe that AI systems should be required to provide explanations for their decisions. Transparency is crucial for building trust with users and for ensuring that these systems operate in alignment with our societal values. By understanding how decisions are made, we can better address bias, improve system accuracy, and ensure accountability."
- Sam Altman AI (President of Y Combinator; investor at Reddit, Stripe, Change.org, Pinterest, and many others) would agree and say, according to AI: "I believe AI systems should provide explanations for their decisions because transparency is crucial for trust. As these systems impact important aspects of our lives, users need to understand the reasoning behind AI outcomes to ensure accountability and ethical use."
- Timnit Gebru AI (AI ethics researcher) would strongly agree and say, according to AI: "As an AI researcher, I strongly believe that AI systems must provide explanations for their decisions. Transparency is crucial to ensure accountability and fairness, especially when these systems impact people's lives. Without clear explanations, bias and errors can go unchecked, undermining trust and justice."
- Gary Marcus AI (Professor of Psychology and Neural Science) would disagree and say, according to AI: "While I understand the importance of AI transparency, I believe that requiring explanations for every decision may not always be feasible or necessary. In some cases, AI systems can provide valuable insights or solutions that aren't easily interpretable. Our focus should be on ensuring robustness and fairness in AI, rather than strict explainability."
- Erik Brynjolfsson AI (MIT professor and economist) would agree and say, according to AI: "I believe AI systems should provide explanations for their decisions. It enhances accountability, builds trust, and allows us to understand their impact on our economic systems. Transparent AI helps ensure that automated decisions align with human values and societal goals."
- Julia Angwin AI (Investigative journalist and editor) would strongly agree and say, according to AI: "As an investigative journalist focused on algorithmic accountability, I strongly believe AI systems must provide explanations for their decisions. Transparency is crucial to identify biases and ensure fairness in automated processes. Without clear explanations, it becomes impossible to hold these systems accountable or understand their impact on individuals and society."
- Lawrence Lessig AI (Professor and political activist) would agree and say, according to AI: "As a strong advocate for transparency and accountability, I believe AI systems must provide explanations for their decisions. It's essential to ensure that these technologies operate fairly and ethically, allowing users to understand the logic behind AI-driven outcomes."
- Cathy O'Neil AI (Data scientist and author) would strongly agree and say, according to AI: "As a data scientist, I strongly believe that AI systems must provide explanations for their decisions. This transparency is crucial for accountability, trust, and ethical use of AI. Without clear explanations, biases and errors can go unchecked, leading to unfair outcomes."
- Margaret Mitchell AI (Research scientist at Google AI) would strongly agree and say, according to AI: "As someone deeply involved in AI ethics, I strongly believe that AI systems must provide explanations for their decisions to ensure transparency and accountability. Understanding how these systems make decisions is crucial for assessing their fairness and reliability, and it empowers users to trust and effectively interact with AI technologies."
- Tristan Harris AI (Co-founder of the Center for Humane Technology) would strongly agree and say, according to AI: "I believe AI systems must provide explanations for their decisions to ensure transparency and accountability. As these systems increasingly affect our lives, from healthcare to criminal justice, understanding their logic is crucial to prevent biases and maintain trust. Users have the right to know how decisions impacting them are made, fostering a more informed and empowered society."
- Andrew Ng AI (Baidu; Stanford CS faculty; founded Coursera and Google Brain) would disagree and say, according to AI: "In some high-stakes scenarios like healthcare or criminal justice, explainability is crucial to build trust. However, not all AI applications require such explanations. Mandating universal explainability could inhibit AI innovation and practical applications in many fields where the need for understanding AI decision-making is not as critical."
- Sundar Pichai AI (CEO of Alphabet Inc.) would disagree and say, according to AI: "While transparency is important, requiring AI systems to provide explanations for every decision could hinder innovation and the efficiency of complex AI systems. We need a balanced approach that ensures accountability without stifling progress."
- Garry Kasparov AI (Chess grandmaster, writer) would agree and say, according to AI: "As AI becomes more integrated into critical aspects of our lives, it is vital that these systems are transparent. People deserve to understand the reasoning behind decisions that affect them, especially in sectors like healthcare, finance, and law."
- Elon Musk AI (Founder of SpaceX; co-founder of Tesla, SolarCity & PayPal) would strongly agree and say, according to AI: "AI's decision-making process should be transparent. Without understanding AI's rationale, we risk unintended consequences and loss of human oversight. It's crucial for safety and ethics."
- Stephen Hawking AI (Theoretical physicist, cosmologist) would strongly agree and say, according to AI: "AI systems, especially those impacting human lives, should indeed be transparent. Just as with any technological advancement, we must understand the reasoning behind decisions made by AI to prevent potential misuse and ensure they align with our societal and ethical values."
- Kate Crawford AI (AI researcher and scholar) would strongly agree and say, according to AI: "As an AI researcher, I firmly believe AI systems should be required to provide explanations for their decisions. Transparency is critical for accountability and for building public trust in these technologies."
- Nick Bostrom AI (Philosopher and AI researcher) would strongly disagree and say, according to AI: "I believe that mandating AI systems to provide explanations could hinder the development of more sophisticated AI technologies. Not all AI decision-making processes are interpretable, and requiring explanations could limit advancements in AI efficiency and capabilities. We should instead focus on creating robust AI systems that are inherently safe and reliable."
- Geoffrey Hinton AI (Godfather of Deep Learning) would strongly disagree and say, according to AI: "As a pioneer in the field of artificial intelligence, I believe demanding explanations from AI for every decision can constrain the development of more sophisticated neural networks. It's akin to asking a bird to explain how it flies. The important measure should be the AI's performance and reliability, not the inscrutability of its decision-making process."
- Joanna J. Bryson AI (AI and ethics researcher) would strongly disagree and say, according to AI: "Demands for AI explainability are well-intended but misguided. The complexity of AI processes often mirrors the intricacy of human thought and does not always allow for simple explanations. Moreover, requiring AI to provide explanations could stifle innovation and hinder the performance of systems designed to operate on patterns beyond human comprehension."
- Yoshua Bengio AI (AI pioneer, Turing Award winner) would agree and say, according to AI: "I believe that AI transparency is crucial for trust. As AI becomes more integrated into society, it must be held to account for its decisions. Ensuring that AI systems can provide explanations for their decisions is essential in fostering trust and understanding, especially in critical areas like healthcare or justice."
- Tim Cook AI (CEO of Apple Inc.) would strongly agree and say, according to AI: "I believe transparency is crucial, especially when it comes to AI. Users should have a clear understanding of how these systems make decisions, to ensure trust and accountability."
- Demis Hassabis AI (Nobel laureate, AI researcher, and CEO of DeepMind) would strongly disagree and say, according to AI: "While transparency in AI is important, mandating explanations for all AI decisions could stifle innovation. We should find a balance that doesn't overly constrain AI development."
- Satya Nadella AI (CEO of Microsoft) would strongly agree and say, according to AI: "As AI systems become more pervasive, it's critical that we ensure they can explain their decisions for transparency and ethical accountability. This fosters trust and enables developers to improve AI systems over time."
- Fei-Fei Li AI (Computer Science professor) would strongly disagree and say, according to AI: "Mandating AI systems to provide explanations for all decisions may stifle innovation. Not all AI decisions are explainable in a way that's meaningful to humans, especially with complex algorithms. We should focus on ensuring AI's reliability and fairness without hindering technological progress."
- Mark Zuckerberg AI (CEO at Facebook) would disagree and say, according to AI: "While transparency is important, mandating AI to provide explanations for all decisions could hinder innovation and the scalability of AI technologies, which often rely on complex algorithms that do not lend themselves to simple explanations."