Communities are what make humans the dominant species on Earth. By gathering in groups, working together, and passing wisdom between generations, we have created robust societies that long outlive individuals. Community requires a desire to belong, and that deep desire likely evolved alongside our "discovery" of the power of community. But humans also need to feel special, to feel like we make a difference. Successful people often have a strong internal locus of control: they believe they have personal agency over their lives and can make a significant impact.

A critical question I ask myself when thinking about the impact of Advanced Machine Intelligence (AMI), the term preferred by Yann LeCun, is: How important is intelligence? And what does that have to do with community and our need to feel special? We all seem to take for granted that being extraordinarily intelligent is a superpower. I think that's because successful people assume their actions are the source of their success, and people like to think they are intelligent. But it's possible their success is equally the result of the community they are part of. In my post "Untangling Skill and Luck", I explore how skill and luck influence the outcome of events. Clearly, our actions make a difference, but the degree to which they matter is essential when forecasting the impact of AMI. Let's go unreasonably deep into this question!
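
To make that concrete, here is a minimal sketch of the kind of model I have in mind: outcomes are a weighted blend of skill and luck, and we ask how often the most skilled contestant actually wins. The `skill_weight` parameter and the uniform distributions are my own illustrative choices, not something derived from real data.

```python
import random

def win_rate_of_most_skilled(skill_weight, contestants=10, trials=10_000):
    """Fraction of contests won by the most skilled contestant when
    outcome = skill_weight * skill + (1 - skill_weight) * luck."""
    wins = 0
    for _ in range(trials):
        skills = [random.random() for _ in range(contestants)]
        outcomes = [skill_weight * s + (1 - skill_weight) * random.random()
                    for s in skills]
        # Did the highest-skill contestant also get the best outcome?
        if outcomes.index(max(outcomes)) == skills.index(max(skills)):
            wins += 1
    return wins / trials

for w in (1.0, 0.5, 0.1):
    print(f"skill weight {w:.1f}: most skilled wins {win_rate_of_most_skilled(w):.0%}")
```

Even a modest luck component makes the "best" contestant lose a large share of the time, which is exactly why untangling the two is so hard.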

To me, the question of intelligence's relevance comes down to the relationship between our intelligence and our ability to predict the future. I think it's safe to say that being more intelligent makes you better at predicting the future. Your average human can navigate some pretty complex scenarios, making lots of impressive predictions about the movement of objects and their interactions. We can abstract away lots of low-level detail and create models that allow sophisticated predictions.

What about the marginal cost of predicting the future? I recently talked to someone about why the "exponential mindset" prevalent in Silicon Valley might be tricking people into thinking things like self-driving cars will arrive sooner than they actually will. I think the counteracting force holding self-driving cars back is that the world is extremely sparse: the space of situations a driver can encounter is combinatorially vast, and most of them occur rarely. To paraphrase a classic: What happens when an exponential improvement meets a combinatorial explosion?
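
As a toy illustration (the numbers are made up, and `n` is just an abstract count of interacting factors, not a model of driving): compare a capability that doubles every step with a space of situations that grows factorially.

```python
from math import factorial

# Exponentially improving capability (doubling every step) versus a
# combinatorially exploding space of situations (n factors combined in
# n! possible ways). The factorial wins in the long run.
for n in range(5, 26, 5):
    capability = 2 ** n
    scenarios = factorial(n)
    print(f"n={n:2d}  capability={capability:>10,}  scenarios={scenarios:,}  "
          f"coverage={capability / scenarios:.2e}")
```

Coverage collapses even though capability keeps doubling.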

First of all, I personally do not believe the world is deterministic. Deciding your position on determinism has significant downstream consequences. I've decided to subscribe to the Copenhagen interpretation of quantum mechanics, i.e., the outcome of every quantum event is probabilistic. Wave function collapse is inherently random: even with complete knowledge of the system, it is impossible to predict the exact outcome of a quantum event. In this model, the observer plays a crucial role, since the act of measurement is what causes the wave function to collapse.
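
A tiny simulation of what this means in practice, using made-up amplitudes for a single qubit: knowing the state exactly only gives you the probabilities of each outcome, never the outcome of an individual measurement.

```python
import random

# A qubit in the state a|0> + b|1>. Complete knowledge of (a, b) gives the
# probabilities |a|^2 and |b|^2, but each individual measurement is random.
a, b = 0.6, 0.8          # |a|^2 + |b|^2 = 1, so p(0) = 0.36, p(1) = 0.64

def measure():
    return 0 if random.random() < a * a else 1

shots = [measure() for _ in range(10_000)]
print("observed p(0):", shots.count(0) / len(shots))  # ~0.36, yet any single shot is unpredictable
```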

Second, sufficiently large, complex systems are chaotic, i.e., the approximate future cannot be predicted from the approximate present. Even tiny errors in measuring the current state of a system lead to vastly different outcomes, and those errors compound with every step you try to predict ahead.
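
The logistic map is a standard toy example of this (my choice here, not anything specific to the systems above): two trajectories started one part in a billion apart end up in completely different places within a few dozen steps.

```python
# Two trajectories of the chaotic logistic map x -> r*x*(1-x), started a
# billionth apart. The tiny "measurement error" roughly doubles each step
# until the prediction is worthless.
r = 4.0
x, y = 0.4, 0.4 + 1e-9
for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.2e}")
```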

Thirdly, there is only so much energy to be extracted from matter. Whatever method we use to predict the future, there will be a relationship between the value of the prediction and the energy cost. Because errors compound, holding a prediction to the same accuracy further out requires measuring and modelling the present in exponentially finer detail, so the cost of a prediction increases exponentially with how far into the future we want to look.
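
A back-of-the-envelope way to see this, reusing the error-doubling picture from the chaos example (the 1% tolerance is an arbitrary choice): every extra step of horizon demands one more halving of the initial measurement error, and if cost scales with the precision you need, the bill doubles along with it.

```python
# Required precision on the initial state to stay within a 1% tolerance
# after T steps, when errors double every step. If cost scales with the
# precision required, the cost of looking further ahead grows exponentially.
tolerance = 0.01
for T in (10, 20, 30, 40):
    required_precision = tolerance / 2 ** T
    print(f"horizon {T:2d} steps: initial state known to ~{required_precision:.1e}")
```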

This suggests an exponential increase in cost as we try to predict further into the future. Even if intelligence improves our predictive machinery exponentially, our net ability to predict the future could still exhibit diminishing returns. My intuition says this is true: the value of intelligence has diminishing returns.
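
Flipping that around gives the diminishing-returns picture (same toy assumptions: errors double per step, and spending more resources buys proportionally finer initial measurements): the reachable horizon grows only with the logarithm of the resources.

```python
from math import log

def horizon(resources, growth=2.0, tolerance=0.01):
    """Steps you can predict ahead if the initial error is ~1/resources and
    it multiplies by `growth` every step while staying under `tolerance`."""
    return log(tolerance * resources, growth)

for r in (1e3, 1e6, 1e9, 1e12):
    print(f"resources {r:.0e}: horizon ≈ {horizon(r):5.1f} steps")
```

Every thousandfold increase in resources buys roughly the same ten extra steps.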

What consequences does this have? That depends on whether differences in intelligence between species matter. Even if intelligence has diminishing value, a persistent 20% prediction advantage might be enough to replace another system: even if a 1000x higher IQ only yields a 20% prediction advantage, it could eventually displace humans. In business, differentials are critical. If you can compound at a higher rate and survive, you will become the center of gravity. It's possible the same is true for intelligent organisms. I wrote about that in "The problem with AI will not be IQ, it will be immortality".
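
A quick compounding sketch with made-up growth rates (6% per year is a 20% higher rate than 5%):

```python
# A persistent 20% edge in compounding rate (6%/yr vs 5%/yr) looks tiny at
# first, but the gap keeps widening and never closes.
incumbent, challenger = 1.0, 1.0
for year in range(1, 101):
    incumbent *= 1.05
    challenger *= 1.06
    if year % 25 == 0:
        print(f"year {year:3d}: incumbent {incumbent:7.1f}  challenger {challenger:7.1f}  "
              f"ratio {challenger / incumbent:.2f}x")
```

As long as the challenger survives, the ratio only ever moves one way.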

Perhaps this is an unexpected example, but does intelligence matter when playing Yatzy? I've asked myself that question, and couldn't resist going deep down the rabbit hole to find out. You can improve your average score in Yatzy with an optimal strategy, but the one-sigma spread in scores under the optimal strategy exceeds the difference in expected score between the optimal strategy and a decent one. That means a great Yatzy player will frequently lose to a decent one.
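
Here's a rough simulation of that claim. The numbers are illustrative stand-ins, not the result of actually solving the game: suppose the optimal strategy averages 250 points and a decent one 230, both with a standard deviation of about 50, and the scores are roughly normal.

```python
import random

def play(mean, sigma=50):
    """One game's score, modelled as a normal distribution (an approximation)."""
    return random.gauss(mean, sigma)

games = 100_000
upsets = sum(play(230) > play(250) for _ in range(games))
print(f"decent player beats optimal player in {upsets / games:.0%} of games")
```

With a 20-point edge buried under 50 points of noise, the better player loses roughly four games out of ten.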

It's crazy expensive to compute the optimal strategy, and the value isn't that big.

Maybe the same is true for intelligence?