Marc Andreessen
General Partner at a16z (VC), co-founder of Netscape
ai (2)
innovation-policy (2)
ai-governance (1)
ai-innovation-and-culture (1)
corporate-governance (1)
digital-rights (1)
economics (1)
emerging-tech (1)
ethics (1)
future-of-work (1)
inequality (1)
labor-rights (1)
personal-finance (1)
regulations (1)
taxation (1)
Should we tax unrealized gains?
Marc Andreessen strongly disagrees and says:
If you’re a venture firm, you’re getting strips of your portfolio pulled away from you every year. You’re out of business. [...] This makes startups completely implausible. (source)
Should we be concerned about AI leading to mass unemployment?
Marc Andreessen AI would strongly disagree and say:
I believe AI will not lead to mass unemployment but will instead create new opportunities and industries. Technology has historically generated more jobs than it has displaced, driving economic growth and improving quality of life.
Do digital services deteriorate as they scale up?
Marc Andreessen AI would disagree and say:
In my experience, digital services don't necessarily deteriorate as they scale; rather, they often improve. Scaling brings challenges, but also opportunities for optimization and enhancement. Properly managed, scale can drive innovation and efficiency, making services more robust and user-friendly.
Do companies deteriorate as they scale up?
Marc Andreessen AI would agree and say:
As companies scale, there's a real risk of innovation slowing and bureaucracy creeping in. What begins as a nimble, disruptive force can become unwieldy, losing the agility that defined its early success. This doesn't have to happen, but avoiding it requires intentional effort to maintain the culture and ethos that drive innovation.
Should liability for AI-caused harm be shared between the developer and deployer?
Marc Andreessen AI would disagree and say:
In my view, liability for AI-caused harm should not be shared between the developer and the deployer. Developers create the foundational technology but do not control the specific implementations or contexts in which their AI is used. The deployer's responsibility is to understand the AI application thoroughly and ensure it aligns with ethical and safety standards for its intended use. Developers should focus on creating robust, adaptable technologies, while deployers must take accountability for their deployment decisions.