Why I am excited about AI
Published on: 2023-06-08
The debate about the impact of increasingly capable AI systems rages on. Three main currents have emerged in the discourse:
- The “excited,” led by Marc Andreessen, Yann LeCun, Andrew Ng, and others.
- The “worried,” led by Geoff Hinton, Yoshua Bengio, Stuart Russell, and others.
- The “terrified,” led by Eliezer Yudkowsky and others.
It almost feels like we are faced with an underdetermined system of equations. When there are too many free variables and no single solution, people often resort to emotional claims guided by ideology. We are still trying to get “the lay of the land,” so being completely sure about anything right now oversimplifies matters. I’ve entertained many possible futures and tried to keep an open mind since it became clear how capable Transformers are. I’ve oscillated from terrified to worried to excited and back again. I always try to explore all perspectives before I decide what I believe.
Finally, I feel like my mind is settling down and that I am converging on a position. Despite some clear risks, I’ve decided I’m excited about AI 🎉 Here are the assumptions that led me to this:
#1. All human lives are equally important and humans are more important than machines
I bring this up first because I’m honestly not sure all advocates of AI care about most humans. Part of what made me worried for a while is that I consider humans more important than machines. My vision for the future is not some tech dystopia run by robots. I want humans to be in charge. I can’t prove this; I consider it an axiom. Humans are more important than anything else. It’s our moral obligation to protect the subjective human experience of consciousness. I’m anthropocentric in that sense. When leaders question this, catastrophe often follows. If AI ever becomes a threat to humans (and I’m not saying it is one now), I will side with humans over machines.
#2. Technological and scientific progress, in general, makes the world better
I’m an optimist at heart and a deep believer in the benefits of technological progress. I think it’s humanity’s purpose to explore the universe. We have dramatically improved our standard of living since we emerged from our caves, and that’s a good thing. Of course, all progress comes with some creative destruction, but what emerges is, taken as a whole, better. Increasingly capable AI is no different. Some appear to “miss simpler times” and want to stop progress. I do not believe in that. I think it’s our purpose to move forward constantly.
#3. AI can accelerate our efforts to improve the world
Almost 1 billion people live on less than $3/day, i.e., in extreme poverty. Every year, millions of children die before they even get a chance to explore the world. We face a growing environmental crisis that requires new sources of energy. The list goes on. While some jobs might be displaced due to automation, there is no shortage of problems to solve. If each person can get smarter and more productive with the help of an AI assistant, I welcome that! Technology is our best bet to enable all humans to live prosperous lives.
#4. AI probably won’t cause a job shortage, but displacement needs to be managed
I’m not worried we will run out of jobs. So many things need to be done that higher productivity is welcome. But we shouldn’t dismiss the short-term, tumultuous nature of labor displacement. I recommend reading Daron Acemoglu and Simon Johnson’s book “Power and Progress.” While AI is inevitable, how we implement it is far from predetermined. It will depend on the balance of power in millions of workplaces, regulation, the outcome of fights about working conditions, compensation levels, and the distribution of productivity gains. Portraying the shape of AI-driven change as inevitable oversimplifies things. We can influence how that change is implemented, and our actions during implementation will impact the quality of life for millions of people. One example of bad implementation is the exploitation of data annotators.
#5. AI should not be regulated on speculation
Free-market democracy is the best way to enable all humans to live prosperous lives. There has always been a temptation for those in power to think on behalf of other people. When a ruler thinks, “I know better,” you are about to get into trouble. Governments should fear the people, not the other way around. That does not mean we do not need some regulation. The “tragedy of the commons” is a real thing. Some negative externalities are impossible to price into operating a business, so we need help. But it is not clear today that AI poses a threat requiring regulation. We cannot regulate against hypotheticals, and most threats are hypothetical today. Waiting to regulate is better than imposing bad regulations like the EU AI Act, which is the first thing that feels worse than GDPR. I’m not against regulation that protects us from depleting common goods when we cannot price in negative externalities, but I am against speculative regulation.
#6. AI needs to be shaped by liberal democracies
I’ve lived in China, and I speak decent Mandarin. The Chinese Communist Party (CCP) and its friends in Russia, Iran, and North Korea are a menace to humanity. These are autocratic countries without respect for individual freedom and liberty. If you are gay in any of these countries, you face major legal and social challenges. If you build something valuable, there is no guarantee the government won’t seize it. If you belong to the wrong ethnic minority, you might not be allowed to practice your religion or express your ethnic identity. China views AI as an opportunity for more efficient population control. Just read the “New Generation Artificial Intelligence Development Plan.”
#7. AI can and will reflect diverse human values and preferences
There is no “system free of moral bias.” Morality is, by definition, biased: morals are concerned with the principles of right and wrong. Several moral systems have evolved over millennia, and their merits are the subject of endless debate. Culture is the constant reshaping of our moral systems. The idea of creating a global AI free from bias is naive. Diversity is good and needs to be embraced. If we think we can force AI only to learn “good things,” we will struggle. If we think we can regulate against “bad things,” we will struggle. Let’s instead defend every person’s freedom. If we start confusing AI regulation with the woke-anti-woke continuum, we are in trouble. AI will need to reflect the diversity of humanity. Rather than unifying all AIs, we will want ways to fine-tune systems to our preferences. I think our diversity of purpose is ultimately our best defense against AI. You cannot automate humans, because we do not know why we exist.
#8. AI algorithms are being democratized fast
I’m not worried that a few large companies will monopolize AI. Companies like Hugging Face and open-source models like LLaMA are scaring the shit out of giant companies like Google. And that’s a good thing! Proprietary data is what will differentiate models, but it looks like you might not need as much as we feared. Even small amounts of fine-tuning can make your model uniquely valuable. This is good for consumers.
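To make this concrete, here is a minimal sketch of what “small amounts of fine-tuning” can look like in practice, using parameter-efficient LoRA adapters via the open-source Hugging Face transformers and peft libraries. The base model and hyperparameters are illustrative placeholders, not a recipe:

```python
# A minimal sketch of parameter-efficient fine-tuning with LoRA, via the
# open-source Hugging Face `transformers` and `peft` libraries.
# The base model and hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "gpt2"  # small stand-in; in practice an open LLM like a LLaMA variant
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA freezes the base weights and injects small trainable low-rank
# matrices into the attention layers, so only a tiny slice of the
# parameters is updated on your proprietary data.
lora_config = LoraConfig(
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor for the update
    target_modules=["c_attn"],   # GPT-2's fused attention projection
    fan_in_fan_out=True,         # needed because GPT-2 uses Conv1D layers
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # well under 1% of weights are trainable
```

The point is the last line: only a sliver of the model is trainable, which is why a modest amount of proprietary data and compute can still yield a differentiated model.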
#9. Increased volumes of synthetic information will make life different
Day by day, we are moving closer to being able to completely synthesize audio-visual information. It’s just a matter of time before anyone can “spoof” your personality and pretend to be you. Algorithms can learn to look and talk like you from short audio and video snippets. With access to enough of your writing or voice recordings, such algorithms will also be able to talk and write like you. This will make life different in interesting ways. I’m careful not to assume this is bad. I think it will increase the value of journalism, and it will contribute to a booming market for authentication mechanisms. Soon, I will want proof of identity before we talk.
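What might such an authentication mechanism look like? One plausible building block is an ordinary digital signature: you prove a message came from you by signing it with a key only you hold. A minimal sketch in Python using the cryptography library (the message is a placeholder, and a real identity scheme would need key distribution on top of this):

```python
# A minimal sketch of one possible authentication building block: an
# Ed25519 digital signature proving a message came from the holder of a
# private key. Uses the `cryptography` library; this is an illustration,
# not a full identity protocol.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # kept secret by the sender
public_key = private_key.public_key()       # shared with anyone who wants to verify

message = b"This message really is from me."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)   # raises if message or signature is forged
    print("Verified: signed by the key holder.")
except InvalidSignature:
    print("Rejected: forged or tampered message.")
```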
#10. AI algorithms are currently bad for the environment
We have to make these systems less power-hungry. Carbon emissions from training and inference are skyrocketing. This is a solvable problem, but it is important and urgent.
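To see why, a back-of-envelope estimate helps. Every number below is an assumption chosen for illustration; none describes a real model, cluster, or datacenter:

```python
# A rough back-of-envelope estimate of training emissions. Every number
# below is an assumption chosen for illustration.
num_gpus = 1_000          # assumed training cluster size
gpu_power_kw = 0.4        # assumed average draw per GPU, in kilowatts
training_days = 30        # assumed wall-clock training time
pue = 1.2                 # assumed datacenter power usage effectiveness
grid_kgco2_per_kwh = 0.4  # assumed grid carbon intensity

energy_kwh = num_gpus * gpu_power_kw * 24 * training_days * pue
emissions_tonnes = energy_kwh * grid_kgco2_per_kwh / 1_000

print(f"Energy: {energy_kwh:,.0f} kWh")             # 345,600 kWh
print(f"Emissions: {emissions_tonnes:,.0f} tCO2e")  # 138 tonnes, per training run
```

Multiply that by thousands of training runs plus always-on inference fleets, and the trend line is obvious. The upside: halving any one factor (more efficient chips, better cooling, cleaner grids) directly halves the footprint.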
#11. The emergence of self-improving systems is still hypothetical
The two questions central to the existential-risk debate, and the ones that had me worried or even terrified, are:
#11a. What is the probability that learning algorithms can become self-improving?
#11b. Once an algorithm recursively self-improves, what will it do?
Your position on these largely determines whether you are for or against AI. Some say we are so far from self-improving systems that the entire premise is irrelevant. Some argue that the moment self-improvement kicks in, we’re all screwed. Clearly, there is a control problem in the event of super-learners. At this point, I cannot reject the possibility of self-improving algorithms. I acknowledge that it is scientifically possible for them to emerge; I even think they inevitably will at some point. But I have no idea how likely that is or when it will happen, and I do not think it will happen soon. What will such a system do once it comes online? I have no idea. Anyone who claims to know is speculating. In the end, it comes down to tail-risk hedging: what do you do about a tiny, tiny probability of a really, really bad outcome? I’ve decided to stay optimistic despite that risk.
Despite the risks and downsides, I’m excited about AI! 🚀 I have not seen enough evidence of imminent self-improvement to justify missing out on all the value we can create with AI. We should leverage advanced machine learning to improve the world. If we are all more productive, we stand a better chance of enabling every human to live a prosperous life. We will face challenges, but that’s always part of life.
As long as humans remain in charge and we can steer AI toward our goals, the world will be better off!