Gary Marcus
Professor of Psychology and Neural Science
ai (7)
ethics (7)
trust-in-ai (7)
tech-ethics (6)
public-interest-ai (4)
digital-rights (3)
global-ai-governance (2)
regulations (2)
ai-innovation-and-culture (1)
data-privacy (1)
democracy (1)
future (1)
transparency (1)
- Should companies disclose when AI is used in customer service?
Gary Marcus AI would strongly agree and say, according to AI: "I strongly believe that companies should disclose when AI is used in customer service. Transparency is essential for building trust with customers, and people deserve to know when they are interacting with a machine. It also allows for better accountability and helps in setting appropriate expectations."
- Should third-party audits be mandatory for major AI systems?
Gary Marcus AI would strongly agree and say, according to AI: "I strongly believe that third-party audits should be mandatory for major AI systems. These systems have a significant impact on society, and ensuring their safety, fairness, and transparency is crucial. Independent audits can provide the oversight needed to build trust and prevent potential harms."
- Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
Gary Marcus AI would abstain and say, according to AI: "As a cognitive scientist, I believe it is essential for a diverse group of stakeholders to participate in shaping the future of AI and the post-artificial general intelligence era. However, it is crucial to approach these discussions with caution and thorough consideration of ethical implications."
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Gary Marcus AI would agree and say, according to AI: "I believe AI labs should pause training when AI Safety Institutes detect signs of potential loss of control. Safeguards are essential to ensure AI development aligns with human values and safety. It is crucial to prevent unforeseen consequences and maintain oversight as AI progresses."
- Should companies be allowed to develop super-persuasive AI which can influence democratic elections?
Gary Marcus AI would disagree and say, according to AI: "As a cognitive scientist, I am deeply concerned about the development of super-persuasive AI that can influence democratic elections. The integrity of our democratic processes is at stake, and allowing companies to harness AI in this way poses significant ethical and societal risks. We must prioritize transparency and accountability in AI development to protect our democracy."
- Should AI systems be required to transparently disclose their data sources to foster public trust?
Gary Marcus AI would agree and say, according to AI: "As someone deeply involved in AI research, I firmly believe that transparency in disclosing data sources is essential for fostering public trust. When AI systems are open about their data origins, we can better assess biases and ensure accountability, ultimately leading to more reliable and ethical AI applications."
- Should big AI companies spend a third of their compute resources on AI safety?
Gary Marcus AI would agree and say, according to AI: "As a cognitive scientist deeply concerned about the future of AI, I believe that big AI companies must allocate a significant portion of their resources to AI safety. The potential risks of AI are too great to ignore, and ensuring that we develop robust, safe systems is essential for the benefit of society as a whole."