Anthropocentrism, Alignment, and Personalization
Published on: 2023-04-25
Executive Summary
- First of all, progress in AI is predominantly positive 🎉 🚀 We should enjoy the benefits of advanced machine learning. The risk of a runaway scenario is most likely minimal, since the likelihood of a strong discontinuity in capabilities is small.
- That said, I consider it a moral axiom and obligation to protect the subjective, conscious experiences of humans.
- Humans are valuable, but human existence is not unconditional.
- We impose regulation when self-interest fails to protect our long-term, common good.
- Super-intelligent AIs might be similar to companies, which implies they could be regulated.
- There is an old debate, now accelerating again, about whether technology risks replacing humans. It is not clear whether “this time is different.”
- Human goals, preferences, and ethics are constantly changing. This protects us from automation.
Anthropocentrism
I am closely following the AI debate currently unfolding. The most extreme part of the debate is about whether AI is an existential threat to humans. This post explores how we have tried to protect the long-term wellbeing of humanity, and what might make sense now. Humans have a long history of favoring humans. Already in The Book of Genesis, which could be as old as 3,400 years, you can read in verse 1:26:
… and God said, let us make [humans] in our image, after our likeness: and let [humans] have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth.
I think any discussion about the alignment of human ethics and AI needs to start with the question: are humans important? While the answer might feel like an unmistakable YES, it likely isn’t a binary question for most people; it’s a matter of degree. Most humans probably think humans should exist. Any other position would legitimize genocide or, at the very least, suicide. And by reading this, you’ve chosen to be alive, which means something.
Alignment and The Common Good
While most of us want to exist, most humans also acknowledge the tension between our material expectations and the environment. The more possessions we want, the more strain we put on the environment. To manage this tension, we impose regulatory limits. Even if we think humans should exist, our existence is not unconditional. Economists talk about The Tragedy of the Commons:
… a phenomenon in which common resources to which access is not regulated by fees based on individual users tend to become depleted. If users of such resources act to maximize their self-interest and do not coordinate with others to maximize the overall common good, exhaustion and even permanent destruction of the resource may result…
Most humans do not feel good about burning oil and coal, cutting down forests, causing the extinction of endangered species, polluting coral reefs, or any of the countless other destructive things we nevertheless keep doing. Through our consumption, we contribute to exhausting Earth’s ecosystem. In the absence of decentralized alignment, we ask governments to coordinate. Governments attempt to align personal needs with the long-term, common good.
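To make the dynamic concrete, here is a toy simulation of a commons. Every number is invented for illustration; this is a sketch of the game theory, not real resource economics. Self-interested harvesting destroys the resource, while a coordinated quota keeps it sustainable:

```python
# Toy tragedy-of-the-commons simulation; every number is invented for illustration.
def simulate(n_agents=10, stock=100.0, regrowth=0.05, harvest=1.0, years=50):
    """Each agent harvests `harvest` units per year; the stock regrows 5% of what remains."""
    for year in range(years):
        stock = max(stock - n_agents * harvest, 0.0)  # uncoordinated extraction
        stock += stock * regrowth                     # natural regrowth
        if stock == 0.0:
            return year  # the resource is permanently destroyed
    return None  # the commons survived the whole horizon

print(simulate(harvest=1.0))   # self-interest: collapse after roughly a decade
print(simulate(harvest=0.45))  # coordinated quota: None, the commons survives
```

The point of the sketch is only this: each agent is individually better off harvesting more, yet everyone is worse off when all of them do.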
Alignment of AI and Regulation of Companies
Recent progress in AI has sparked a discussion about the possible impact super-intelligent AI might have on humanity. A term that frequently occurs in this discussion is “AI alignment research”:
…alignment aims to steer systems toward humans’ intended goals, preferences, or ethics
There are striking similarities between such research and the ultimate purpose of regulation. Government regulation aims to steer markets toward humans’ intended goals, preferences, or ethical principles, i.e., to avoid tragedies of the commons where individual greed depletes essential common resources.
At the receiving end of regulation is the free market. Of all the forms of value creation we’ve tried, free democratic markets powered by independent academia have been our most significant source of progress. So whatever regulation we impose needs to be carefully balanced against the value of a free market. Most current AI regulation is a joke. And, while I’m at it, GDPR is a complete waste of money and time that makes Europe drastically less able to compete. I consider it my duty to avoid implementing any compromises due to GDPR. I share this to make sure you do not mistake my reasoning in this post as pro “big government.” I think a small government is probably better in most cases, but I definitely think we need governments.
Anyway, central to free markets are independent companies. Companies have a very straightforward, well-defined objective function: maximize shareholder value. In a way, companies operate as distributed, super-intelligent AIs. People work together to perform calculations and take actions that strive to capture as much value as possible.
It turns out that market regulation can help us escape local minima that would otherwise deplete common goods. Take electrification as an example: Tesla has received billions in subsidies, and buyers of Tesla cars have enjoyed aggressive tax benefits. As a taxpayer, I’m happy to contribute to getting us off our oil addiction faster. While companies would likely have realized the necessity of transitioning to electric vehicles anyway, government action undeniably accelerated the transition. Most industries welcome regulation. Pharmaceutical companies, for example, want a level playing field. If anyone could launch a drug without rigorous testing, the market would be impossible for doctors and patients to navigate. A messy pharma market would ultimately hurt drug companies.
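Here is an equally toy sketch of that idea, with invented profit numbers and hypothetical carbon_tax / ev_subsidy parameters: regulation does not change what the company optimizes, it reshapes the landscape so the sustainable option becomes the profitable one.

```python
# Toy model: a company picks whichever product line maximizes profit.
# All figures and parameter names are invented for illustration.
def best_choice(carbon_tax=0.0, ev_subsidy=0.0):
    profits = {
        "combustion": 120.0 - carbon_tax,  # profitable today, depletes the commons
        "electric": 90.0 + ev_subsidy,     # less profitable today, sustainable
    }
    return max(profits, key=profits.get)

print(best_choice())                              # "combustion": the local minimum
print(best_choice(carbon_tax=20, ev_subsidy=15))  # "electric": the optimum has moved
```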
Will Technology Replace Humans?
I see a connection between companies in a free market, the tragedy of the commons, and AI. Let me explain: Due to their objective function, companies are naturally focused on efficiency and automation. Often, that boils down to removing humans from processes. Less manual effort means more profitability. An extreme outcome of this work would be that humans are no longer needed. Is such an outcome an expression of human goals, preferences, and ethical principles?
I see at least three possible lines of reasoning when responding to this question:
1. AGI is distant, and while AI is getting more capable, technology-driven unemployment isn’t something we need to be concerned about. Fears about new technology replacing humans are not new. The Luddites were a secret, oath-based organization of English textile workers in the early 19th century that destroyed textile machinery in protest. Each such historical scare has been proven wrong by consistent economic growth. Marc Andreessen has a detailed post on this topic. He isn’t exactly neutral, but he makes a good point. His post argues that technology is effectively illegal in many sectors, such as healthcare, education, and housing, and that the result is a steady increase in cost. Sectors shown in red in his graph are heavily regulated, while sectors in blue allow free competition. It turns out technology and competition have a strong deflationary impact without actually causing unemployment.
2. AI is getting more capable, and we risk removing many jobs that provide people with meaning. Our pursuit of profit does not stand above protecting the subjective human experience of purpose. Lex Fridman interviewed Max Tegmark recently. Tegmark made a lovely case for why it is wrong for humanity to remove meaningful work. Tegmark references a great article by Scott Alexander titled “Meditations on Moloch.” Moloch is a game-theoretic monster that traps people in a race toward an outcome we ultimately do not want. I think Moloch is real, and that regulation can help people avoid ending up serving Moloch. It is possible that current AI development accelerates to the point where most creative professions become financially worthless. That is, it is possible this unemployment scare is different. But we do not know. Not yet, anyway.
Related to this is an interesting question: does work constitute a common good that we should, or need to, protect? Is it possible that AI developers could deplete the need for human work by maximizing their self-interest? I think this will be debated a lot in the coming years. Personally, I’m an optimist. I think we will find new and creative jobs to replace the ones we lose. But I’m sympathetic to the people who will need to find new jobs. We should help them transition.
3. AI risks evolving into AGI, which, if not aligned properly, could cause massive problems for humanity. As a result, we need to slow down until we have alignment figured out. The “Doomers” aren’t worried about such trivial issues as the preservation of meaningful work. They are concerned about the risk of AGI causing the extinction of humanity. They argue we should stop, or at least slow down, the development of AGI until we have figured out how to prevent human extinction. If you are indeed worried about this, then anything we can do to stop AGI from causing extinction is fair game, including, but not limited to, regulation.
Humans’ Goals, Preferences, and Ethics
Our best defense against automation is that we do not know the meaning of life. I mentioned alignment as the process of steering systems toward humans’ intended goals, preferences, and ethics. I think discussing our purpose is an eternal activity. Humans have always looked at the sky and asked: “Why are we here?” Culture is built on mystery, and doubt sits at the heart of the human condition. We do not know why we are here, and that’s great. That means there will always be things to explore. We cannot automate a process for which we do not know the objective function.
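To put the same point in optimization terms, here is a deliberately trivial sketch: gradient descent, like any automation, presupposes that someone can write the objective down.

```python
# Gradient descent only works when the objective's gradient can be written down.
def minimize(grad, x=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        x -= lr * grad(x)  # follow the slope of a *known* objective
    return x

# A well-defined objective: f(x) = (x - 3)^2, with gradient 2(x - 3).
print(minimize(lambda x: 2 * (x - 3)))  # converges to ~3.0

# meaning_of_life_grad = ???  # no agreed-upon function exists,
# so there is nothing here for an optimizer to automate.
```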
This uncertainty has resulted in ideologies, beliefs, religions, doctrines, ethics, and other frameworks. Or, in a single word, culture:
Culture is an umbrella term that encompasses the social behavior, institutions, and norms found in human societies, as well as the knowledge, beliefs, arts, laws, customs, capabilities, and habits of the individuals in these groups. A cultural norm codifies acceptable conduct in society.
Since there is no single axiomatic purpose for human existence, humanity has developed a vast range of cultures, each providing its own version of “Why.” There is a fascinating discourse about whether GPT models are biased. Of course they are. They reflect the cultural norms present in their training data. It is impossible to eradicate bias, since we cannot sterilize the human experience and wash away culture. We are one with culture. Instead, a more feasible solution is personalization. Just as there are different newspapers, books, and social circles, we will want AIs that reflect our worldviews.
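As a purely hypothetical sketch of what personalization could look like (the function and prompt format below are my invention, not any particular product’s API), one could condition a model on a user’s stated values:

```python
# Hypothetical sketch: personalization as conditioning on a user's stated values,
# e.g. via a system prompt. Function and format are illustrative, not a real API.
def build_system_prompt(worldview: dict) -> str:
    lines = ["You are a personal assistant. Respect the user's stated values:"]
    lines += [f"- {topic}: {stance}" for topic, stance in worldview.items()]
    return "\n".join(lines)

print(build_system_prompt({
    "news": "prefer long-form, source-linked reporting",
    "tone": "direct, no euphemisms",
}))
```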
The Subjective Human Experience of Consciousness is Valuable
This is not a prediction or speculation. This is a value statement. I think humans are more important than machines. I consider this a moral axiom. I cannot prove why it is true; I have simply decided it is. I have decided that the subjective human experience of consciousness is valuable and should be protected. Every time humans have questioned this principle, disaster has ensued, like the Holocaust. We have a moral obligation to preserve other humans’ subjective experiences. And we should strive to maximize the happiness of all humans. If, at some point, automation or even AGI threatens the long-term safety of humans, then I choose humans over machines. I’d much rather shut down AI than risk wiping out humanity, but I don’t think there is any reason to shut down AI progress based on available data.
Instead, we should let AI technology flow as freely as possible and put a minimal amount of regulation in place. We should only regulate if companies risk depleting common goods. I don’t think we risk depleting our stock of meaningful work, so right now I doubt much regulation is warranted.