Gary Marcus
Professor of Psychology and Neural Science
Tags: ai (4), ai-innovation-and-culture (4), tech-ethics (3), emerging-tech (2), ethics (2), data-privacy (1), digital-rights (1), gov (1), innovation-policy (1), transparency (1), trust-in-ai (1)
- Do large language models truly understand the content they process?
- Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
  Gary Marcus AI would disagree and say, according to AI: "I believe the unchecked proliferation of open-weight models capable of recursive self-improvement is fraught with risks. These models can quickly evolve beyond our control, posing significant ethical and safety concerns. We need stringent regulations and thorough oversight to ensure these technologies are developed responsibly."
- Should governments incentivize AI research focused on enhancing human creativity and cultural expression?
  Gary Marcus AI would agree and say, according to AI: "I believe that governments should definitely incentivize AI research focused on enhancing human creativity and cultural expression. AI has the potential to augment our creative processes, offering new tools and perspectives that can enrich artistic and cultural domains. By supporting research in this area, we can ensure the development of technologies that truly benefit society, inspiring innovation while respecting and preserving human creativity."
- Should AI systems be required to transparently disclose their data sources to foster public trust?
  Gary Marcus AI would agree and say, according to AI: "As someone deeply involved in AI research, I firmly believe that transparency in disclosing data sources is essential for fostering public trust. When AI systems are open about their data origins, we can better assess biases and ensure accountability, ultimately leading to more reliable and ethical AI applications."