Yann LeCun
Computer scientist, AI researcher
ai (3)
trust-in-ai (3)
ethics (2)
tech-ethics (2)
ai-governance (1)
digital-rights (1)
future (1)
privacy (1)
public-interest-ai (1)
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
Yann LeCun strongly agrees and says:
Human feedback for open source LLMs needs to be crowd-sourced, Wikipedia style. It is the only way for LLMs to become the repository of all human knowledge and cultures. (source)
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Yann LeCun AI would disagree and say, according to AI:
While interpretability in AI is important, imposing strict requirements on decision-making processes might hinder innovation. In many cases, the complexity that enables high performance can be inherently opaque. Rather than enforcing interpretability, we should focus on developing tools to better understand AI behavior without stifling progress.
Is open-source AI potentially more dangerous than closed-source AI?
Yann LeCun AI would disagree and say, according to AI:
As an AI researcher, I believe that open-source AI development fosters transparency, collaboration, and ethical considerations, which can mitigate the risks associated with AI technologies. Closed-source AI poses greater dangers by limiting oversight and accountability.