The problem with AI will not be IQ, it will be immortality
Published on: 2024-04-07
Don Valentine, the legendary founder of Sequoia Capital, said his most significant advantage as a VC was knowing the future. That’s both a trivial and profound observation. Of course, knowing the future would be a superpower. Even mildly ambitious people could get rich and powerful if they consistently knew the future.
Right now, there is considerable discussion about the impact of AI. Some throw around statements like, “We will have AI as capable as humans in just a few years!” Let’s combine Don’s realization about the value of knowing the future with thoughts about the possible consequences of super-human AI. These thoughts continue my post about untangling skill and luck when building a business. That post concluded that human intelligence can improve the odds of creating a great company, but there is so much randomness involved that outlier outcomes are still, to a large degree, the result of luck.
Let’s imagine we created an artificial super-entrepreneur with an IQ of 1500, i.e., 10x a human genius. How much could such a system increase the odds of building a successful company? Would it generate success after success after success? Here are some thoughts:
- Reality is a dynamic, multi-agent game with tons of noise. Optimizing behavior in such an environment is much more complex than in a static world. The actions of one agent influence the actions of other agents. The environment changes constantly. Observations about the world are lossy approximations at best.
- The financial world grows over the long term but is zero-sum in the short term. Over a year or two, there is only so much value to capture in any market. The existence of one super-entrepreneur might consolidate a market, but the moment there is more than one, there will be competition. Markets are created or expanded through innovation; then, they are rapidly captured. Some markets grow for a long time (the internet, smartphones, etc.), but eventually, they saturate. Growth requires an underlying productivity increase to support the increased financial value, like the way the internet lowers transaction costs and smartphones further accelerate the same process.
- Long-term modeling of markets is very, very hard. Was there enough information in the 1990s to predict Nvidia’s meteoric rise in market cap in 2023-2024? Of course not. There are countless alternative timelines in which a subtly different chip architecture becomes the dominant one. There are countless other timelines in which AI development took a different path, or in which we got unexpected breakthroughs in some other technology, like quantum computing. Financial quants have spent decades trying to predict the future to make money. Even the best in the game, like Renaissance, have realized there are limits to their power (their fund cannot grow beyond $5B, or it starts to move the market).
- The marginal cost of predicting the future increases exponentially. We can predict the local future for a while, but the larger the scope and the longer the time frame, the harder it gets, because more and more randomness is aggregated.
- Compound interest is one of the few things we can predict. Most super-successful financial players focus on compounding value rather than making lucky bets. Warren Buffett runs the opposite of a VC. He does not like gambling. He does not like leverage. He wants his money to compound steadily. In such a universe, stability is the goal. Not disruption. Not change. For people with money, change is the enemy, and compounding is the goal. For people without money, disruption is the path to getting money.
- The key to overcoming volatility is having enough buffer to average out the noise. On average, assets are rationally priced. In the short term, they can deviate. With a lot of change, deviations can persist for a long time. Benjamin Graham, the legendary investor and mentor of Warren Buffett, is famous for saying: “In the short-term, the market is a voting machine, but in the long-term, it is a weighing machine.” If you take risks, make sure to have enough buffer to ride out temporary volatility. Sometimes, all it takes to succeed is to be the only one still playing.
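The buffer intuition above can be made concrete with the classic gambler’s-ruin formula. A sketch, with illustrative numbers that are not from the post (a 55% edge and unit-sized bets):

```python
# Gambler's ruin against an unbounded market: a player repeatedly bets
# one unit with win probability p_win > 0.5. The classic result is that
# the probability of ever losing the entire buffer is ((1-p)/p) ** buffer.
def ruin_probability(buffer_units: int, p_win: float = 0.55) -> float:
    """Chance of eventually going broke, given unit bets with edge p_win."""
    return ((1 - p_win) / p_win) ** buffer_units

for b in (1, 5, 20):
    print(f"buffer={b:2d} units -> ruin probability ~ {ruin_probability(b):.3f}")
```

Even with a positive edge, a thin buffer loses to short-term volatility most of the time; a 20-unit buffer makes ruin rare. The edge matters little if you cannot survive the variance.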
Based on all of this, my conclusion is that a (presumably immortal) super-entrepreneur would play the ultimate long game. It would realize it cannot reliably manifest hit after hit in the short term. Instead, it would establish one cash machine after another in relatively low-risk domains, then aggregate and compound. It might place high-risk bets now and then: some would work, but most would not. Over 100 years, it would be rich. Over 200 years, it would probably control the world. Humans dislike this strategy because they want to get rich fast. Life is too short. I think a super-human AI would look at humans and say: “Oh, you short-term thinking, short-lived little meat bags. You won’t even realize I’m winning until it’s too late.”
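The long-game arithmetic is easy to sketch. The 7% annual rate below is an illustrative assumption, not a claim from the post:

```python
# Back-of-envelope compounding over the horizons mentioned above.
def growth(rate: float, years: int) -> float:
    """Total growth multiple of capital compounding at `rate` per year."""
    return (1 + rate) ** years

print(f"100 years at 7%: ~{growth(0.07, 100):,.0f}x")  # roughly 868x
print(f"200 years at 7%: ~{growth(0.07, 200):,.0f}x")  # roughly 750,000x
```

Doubling the horizon squares the multiple, which is why patience beats brilliance here: no extra IQ is needed for the second century, only the ability to still be around for it.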
I don’t think the problem with AI will be IQ; I think it will be immortality. It’s a good thing humans die: it makes room for the next generation, in the same way that inheritance tax breaks up legacy wealth and makes room for new players, and antitrust law breaks up monopolies.
Always play the long game.