AI Doom in the Sky, By and By: Doomers Are a Distraction from the AI Problems in the Here and Now
The New Yorker has a lengthy look at AI “doomers” and their counterparts, effective accelerationists, with an emphasis on the doomers. It is a well written, well researched, sometimes amusing portrait that is effectively useless with respect to understanding the actual problems of AI.
Doomers, for those unaware of the term, are AI researchers and think tankers who are convinced that modern AI is going to produce something that has a significant non-zero chance of destroying all of humanity in the near-ish future. Accelerationists are people who believe that AI will solve world hunger, make Marc Andreesen cool, and provide rainbow farting-unicorns for all. If you read carefully, you might be able to tell which side I am more inclined to sympathize with. However, my sympathy for the doomers resides only in the fact that the accelerationists are so disconnected from the reality of human power relationships and basic economics that they are either lying through their teeth or should be kept away from anything more dangerous than a short piece of string. A very short piece of string. Because the doomers in their own way, are just as unrealistic.
According to the article, doomers are largely an outgrowth of concerns over the possibilities of a general artificial intelligence growing out of control and the effective altruism movement — a group of people that largely devolved into I can do whatever I want and get as rich as I want because someday, I might maybe do enough charity work to save not yet born people that my avarice and inhumanity to actual living human beings will be offset. Not the best start, on either end. Obviously, “pie in the sky, by and by, but never pie now, never pie here” is not an especially decent moral code. And the idea that we are close to a general artificial intelligence is not plausible. Large Language Models, which are what triggered this round of hype, are just calculators, fancy autocomplete for pixels and words. They have no model of the world and can build no model of the world, which is why they so often lie and do things like have chessboards with the wrong number of squares, or limbs that pop in and out of videos. They lack context to do anything other than regress toward the mean of their training data. It is highly unlikely that something that cannot model the world can be intelligent enough to threaten it via that intelligence. Like any tool, of course, it can be put to bad use, however. And that is where the focus on doomers misses the mark.
A recent paper showed that large language models are incredibly racist. Merely using common African American dialects in prompts was enough to do things like get the systems to suggest the death penalty at a higher rate for the person or steer them away from more prestigious, higher paying jobs. Increasing the size of the model actually made things worse, likely because the amount of training data required to make these things work is so large that these models are effectively trained on the internet. And the internet is a racist, sexist cesspool. If that is too abstract for you: AI systems have been shown to kick people off needed medical services because they are on them longer than the AI system thinks they should be. No doctor or personalization involved!
There are plenty of other examples I can give. AI is harming people right now, today. No need to wait for the possibility of Skynet — its racist, sexist, greedy little bothers and sisters are out here right now, doing a ton of damage. Damage that the doomer contingent ignores in favors of arguing about hypothetical future catastrophes while their accelerationists cousins are unleashing disasters right now like they were B movie directors. Yes, it is good to think about possible future consequences, but not at the expense of dealing with today’s problems.
By focusing on tomorrow, doomers allow the AI pushers of today to do their damage with less scrutiny. If the argument is about whether or not the AI systems are good for the people of tomorrow, there is less time and space to talk about whether or not it is being used properly today. The doomers, by changing the subject, or more precisely, by having the discussion on the accelerationists terms, provide cover for the excesses of today’s AI.
I have no doubt that these people are sincere in their concerns. As I said, it does bear to think about the possibilities, however small, of future risks. But, as the New Yorker points out, many of these doomers are funded by the same people and organizations that fund companies like OpenAI. I am not saying, nor do I believe, that the doomers are somehow on the take. But I also do not believe they would be getting such funding if they presented an actual challenged to the AI dominant worldview.
If we are serious about protecting society from AI, then we need to stop worrying about the sci-fi scenarios of the future and start worrying about the realistic harm it does today. anything else is just providing the worst of them cover while they attempt to enrich themselves at the expense of the rest of humanity.
Your last paragraph sums it up nicely. When someone is beset by delusions of grandeur (brought to you by many of the same people who were going to 'hack death'), some of the delusions will be nightmares; in the meantime, they should watch who they're running over right now. I still think of Elaine Herzberg in Arizona. More than one person should have spent time in a prison cell for that.