Relationships and Uncommon Sense in a time of AI
Published on: 2023-01-02
What is valuable in a world where neural networks can combine and interpolate any information structured by humans?
Recent results in machine learning have demonstrated that if you feed enough human-structured text and images to a vast neural network, the network can assemble combinations of concepts and sample information on the continuum from one concept to another. The machine output is frequently indistinguishable from human-produced output. If we feed the network enough examples of what ice cream looks like and enough examples of what dragons look like, it can generate an ice cream dragon. This has become known as “Generative AI.” The underlying technology has been around for a long time, but it is finally good enough to be helpful.
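The "continuum from one concept to another" can be pictured as interpolation in the model's embedding space. Here is a minimal sketch of that idea, using tiny made-up embedding vectors purely for illustration (real systems use high-dimensional learned embeddings and far more sophisticated sampling):

```python
def lerp(a, b, t):
    """Linearly interpolate between two embedding vectors (lists of floats).

    t = 0.0 returns a, t = 1.0 returns b, values in between blend the two.
    """
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

# Hypothetical 4-dimensional embeddings for two concepts.
ice_cream = [0.9, 0.1, 0.0, 0.2]
dragon = [0.0, 0.8, 0.7, 0.1]

# Sample five points along the continuum between the two concepts.
for i in range(5):
    t = i / 4
    print(t, lerp(ice_cream, dragon, t))
```

Each intermediate point is a blend of "ice cream" and "dragon"; a generative model decoding such a point would produce something in between the two concepts.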
So what are the consequences of this technology for humans? What will be valuable skills in a future permeated by Generative AI? To hypothesize about this, we have to start by understanding the strengths and weaknesses of this new technology:
Comparative Strengths of Machine Learning
- Can ingest all available information. A human couldn't read all available text in a lifetime. Computers can read it in months or weeks, and they get faster and faster every day.
- Radically lower cost of prediction than humans. Once a network has learned a concept, you can query it at a very low cost compared to a human. This is the most obvious strength of any machine-learning system: it can produce results over and over, without any fatigue, for as long as we power the computer.
- Radically faster than humans. Generating model predictions is much quicker than a human manually completing tasks.
- Constant availability. Assuming there is power, computers can stay on more or less forever. No sick leave. No holidays. None of that pesky nonsense that humans require to function.
- Able to combine concepts in unrestrained ways. I find that combining concepts is the most exciting aspect of Generative AI, e.g., "Show me a dragon made of ice cream in the style of Salvador Dali." Humans will always be a bit hesitant to create really crazy combinations; computers do not suffer from such inhibitions. A lot of science and business is about combining concepts from different domains to create novel solutions. Tesla came about as "what would a car company look like if it were set up and run like a software company?" What works in one domain can sometimes be transferred to new domains, unlocking new possibilities.
Comparative Strengths of Humans
- Sparse Meta-Reasoning. I ask myself questions about the meaning of life. The fact that I am aware of the absence of a clear objective function for existence feels significant. The process of finding purpose in a societal context is something humans are, so far, uniquely capable of. Even with perfect reinforcement learning, an objective function will be required. Humans are, in theory, free to pick their own. Most people default to something boring like money or fame, but our limitations are self-imposed. We can also do this based on very few examples; we do not really know much about all the choices humans make in life, and yet most of us can guide our own lives surprisingly well.
- Deliberately unlikely thinking. The most exciting humans are the ones that pursue crazy ideas. Taking risks and betting on unlikely things feels very different from the way modern machine-learning models are programmed. The most impactful humans inject noise into the optimization process in a deliberate and measured way that I'm not aware computers can do. I assume we will see a noisy search of latent spaces as part of this new wave of AI, but I haven't seen too much yet.
- Causal explanations beyond correlation. A few years ago, I met a researcher working to make machine-learning systems teach humans new things rather than us teaching computers. The idea was that you would feed a network a large quantity of data, let it find the correlations, and then extract the underlying causal explanation. Basically: how do we get computers to not just tell us the answer but also explain why the observed correlation occurs? This is the core challenge for human researchers, but I haven't seen computers really do it yet. At least not in a similar way.
- Relationships. While AIs can be used to analyze data and make decisions, humans excel at forming relationships and understanding people's emotions. The more I learn about the world, the more I realize that most super-successful people are actually primarily great at building relationships. When trying to make something seemingly impossible possible, you have to have friends who believe in you.
- Physical-world context. The human experience is, after all, about coping with the human condition. Finding food, building relationships, procreating, and so on. I doubt we will see human-like intelligence without imposing the human condition on the system.
- Dynamically appending experience. Humans start building a world model when we are born. Every day we add more information to it with just the right amount of "forgetting." Due to some balance of "cost of storage" and "usefulness of data," humans retain information with a beneficial amount of detail for a reasonable amount of time. Evolution is good at finding balance. While I think this aspect can be solved with time and money, constantly evolving the right balance between remembering and forgetting seems very complex.
- Constant, recursive computations. Human brains run constantly, with inputs and outputs weaving into each other. This allows us to make complex decisions and judgments in real time and adapt quickly to changes. I suspect modern machine-learning systems will spiral out of control if they can endlessly consume their own output. While one could argue humanity is slowly spiraling out of control, too, we are relatively stable, all things considered.
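The "deliberately unlikely thinking" above does have a narrow analogue in today's systems: the sampling temperature. Raising it flattens the model's output distribution so that unlikely options get picked more often. A minimal sketch of that mechanism, where the logits and temperature values are illustrative assumptions rather than anything from a real model:

```python
import math
import random


def sample_with_temperature(logits, temperature, rng=random):
    """Sample an index from a list of logits.

    Low temperature concentrates probability on the likeliest option;
    high temperature makes unlikely options show up more often.
    """
    scaled = [logit / temperature for logit in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the temperature-adjusted probabilities.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]


# Illustrative logits: option 0 is the "safe" idea, options 1-3 are crazier ones.
logits = [5.0, 1.0, 0.5, 0.2]
print(sample_with_temperature(logits, temperature=0.1))  # almost always the safe pick
print(sample_with_temperature(logits, temperature=5.0))  # crazier picks become common
```

This is only noise in the output distribution, not the deliberate, measured risk-taking described above; the point is that today's knob is crude compared to how humans choose which unlikely bets to pursue.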
Before I conclude, I want to mention something I recently came across. There is a field of cognitive psychology called "Meta-Reasoning." Research in this domain is guided by questions like:
- How are reasoning and problem-solving processes that extend over time monitored in the brain? When are we "done thinking"? How do we track what thought processes are ongoing in the brain? Are we even keeping track, or is it just running all the time randomly?
- What determines whether to continue, switch strategies, or terminate thinking about a problem? Do we systematically do this? Or is it all random?
These questions give a clue to what yet-to-be-understood human abilities we still haven't transferred to computers. That means that while recent breakthroughs are extraordinary, we are far from done.
Conclusion: Dream weird things with friends
If you want an edge on computers, you must build many solid relationships around the globe and make deliberate but risky bets.
Throughout history, humans could gain status and influence by having a great memory or excellent arithmetic skills. Reading large quantities of information and passing logical tests is still the cornerstone of education. Those with the best memory and pattern-matching ability get the highest grades, jobs, and salaries. At least in general. I think that is about to change.
Memory and pattern-matching are becoming commoditized. Generative AI is to pattern matching what calculators are to math. We will inevitably have conversational agents that instantly generate text and images drawing on all human knowledge. Whether this constitutes human intelligence or not, it will be extremely useful. Prompting a knowledge base will augment memory and move the competitive edge from processing power to "uncommon sense." Or put differently: Make sure to dream about weird things that should be real, and then convince your friends to help you make them real. So far, computers are bad at that.