Imitative AI (systems that slurp in vast troves of written or pictorial material and try to remix it based on calculations about which letter, word, or pixel should come next; also referred to as generative AI) comes at an enormous cost. The compute requirements mean that imitative AI products are not currently profitable, and they carry enormous environmental costs as well. And for all that we get … not much. Frankly, the only benefit of imitative AI seems to exist in the minds of the people running these systems.
The Washington Post recently tasked reporters with trying out imitative AI systems in the workplace. It did not go well. The systems could not read, summarize, or draft emails without massive quality problems. They could not reliably produce reports from source documents; in some ways the hallucinations were subtler and thus more dangerous. Any level of complexity in spreadsheets defeated them. They could create passable PowerPoints, the most useless form of communication, but even those required significant human intervention to be useful. Occasionally they produced decent meeting transcriptions.
They did expand acronyms well, so there’s that.
What about programming? That is one of the areas it is supposed to help with. Some systems can produce decent boilerplate code, which is not entirely surprising given the sheer amount of open-source material available on the web. (Interestingly, much of that material is released under licenses that forbid its use in commercial products. I am unaware of anyone testing these licenses in court with respect to imitative AI, but it is a potential area of danger for these systems, and, from a moral standpoint, AI systems trained on these code bases are clearly violating the spirit of the licenses.) However, recent studies have shown that code quality is getting worse as imitative AI use expands in programming. This makes sense, of course. Imitative AI outputs its best calculation of what should come next, meaning it skews not toward the good but toward the average. And the larger the training set (and these systems require massive training sets), the closer to the middle the result will be. So the more you use it, the more anodyne and middle-of-the-road your work will be. You quite literally cannot expect it to produce something good, because these systems are not built to do so.
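To see why a what-comes-next calculator drifts toward the average, here is a toy sketch: a bigram counter that always picks the most frequent continuation. This is a deliberately minimal illustration, not how production models are actually built, and the corpus and names in it are invented for the example.

```python
from collections import Counter, defaultdict

# Toy corpus: imagine this scaled up to a web-sized training set.
corpus = (
    "the code is fine . the code is fine . the code is fine . "
    "the code is brilliant ."
).split()

# Count which word follows which: a bigram model, the simplest
# possible "what comes next" calculator.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word: str) -> str:
    # Always emit the single most frequent continuation.
    return following[word].most_common(1)[0][0]

print(most_likely_next("is"))  # "fine": the common answer wins;
                               # "brilliant" is averaged away.
```

Scale the corpus up to the whole web and the same logic holds: the rare, excellent continuation is outvoted by the merely common one.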
But what about writing and art? Isn’t what they do there amazing? Not really. Four professional editors tested how ChatGPT edited a published work of fiction and found that it mostly regurgitated boilerplate, anodyne suggestions. The tools themselves appear to produce, at best, below-average quality and have been shown to plagiarize. The video and image generators don’t understand physics, and produce odd output (when they aren’t plagiarizing as well) that tends to have the same glossy finish and weird artifacts, like limbs flickering in and out of existence as people move around a video. None of this is surprising. These machines are next-best-guess calculators. They understand nothing, and so they produce only what their math tells them should likely come next; they regress toward the mean of their training data.
The recent flare-up over Gemini, Google’s imitative AI system, producing diverse Nazis is a good example of the issue. Gemini originally had massive problems when people asked it to produce images like “black doctors”: it produced mainly white doctors. It had similar problems when asked to produce anything that wasn’t a white male in a traditional white male role, because it had been trained on a massive amount of data, much of which, thanks to historical circumstance and societal bias, was racist and sexist as hell. Google, to its credit, realized this was Bad and set out to fix it. The problem was, since the system doesn’t actually understand anything, it couldn’t reason that the filters Google added wouldn’t always be appropriate. Hence, Asian Nazis. (Almost as bad, I am sure, as Illinois Nazis. If Asian Nazis actually existed.)
These systems just aren’t good at anything, and likely never will be. That is bad enough when they are being positioned as ways to save money. Hollywood execs, people who oversee programmers, people who hire artists: many of them would like to not pay those people, or not pay them as much. Sure, ChatGPT cannot produce a script, but if you give a writer its garbage output and say, “This is a script, now edit it,” you get the same amount of work for less money. It won’t work in the long run, but these executives only worry about the next quarter anyway. And in the meantime, a lot of workers get hurt. But that is not where the damage ends. Because these systems require so much compute power, they are environmental disasters.
Imitative AI systems require massive amounts of data, and therefore massive amounts of storage, as well as significant amounts of computation, and therefore significant processing power. Both require large numbers of machines. Data centers are booming, which means that electricity and water usage are increasing rapidly. No one is entirely sure how much energy and water these systems consume, largely because the industry is doing its best to hide those numbers from the public. What we can find, though, along with the anecdotal information, is disturbing.
The Nature article linked above suggests that ChatGPT alone uses enough energy to power 33,000 homes, and that at current growth rates the industry will consume as much energy as entire nations. The Atlantic has a story about one AI data center that requires 50 million gallons of water every year in an area where farmers cannot get water for their fields and part of Phoenix went without tap water for a significant part of the year. And many, many more like it are being built all over the world. All so that we can have pictures of three-armed people and automatically generated angry emails.
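For a sense of scale, here is a rough back-of-the-envelope check. The household figure is my assumption (roughly the US average), not a number from the article:

```python
# Rough scale check; inputs are assumptions, not the article's figures.
homes = 33_000
kwh_per_home_per_year = 10_500      # assumed US-average household usage

chatgpt_twh = homes * kwh_per_home_per_year / 1e9
print(f"~{chatgpt_twh:.2f} TWh/year")  # ~0.35 TWh/year

# A small nation (Ireland, say) uses on the order of 30 TWh/year,
# so "as much energy as entire nations" implies roughly 100x growth.
```

Even granting those assumptions, the gap between today’s usage and “entire nations” is about two orders of magnitude, and that is precisely the growth curve the industry is chasing.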
In reality, we are doing this, and allowing it to be done to us, because tech is failing to reproduce the early-2000s web bubble and the early-2010s social media expansion. The last three major thrusts of tech (crypto, self-driving cars, and the metaverse) have been failures to one degree or another. Tech needs another golden goose, and to them imitative AI appears to have the right feathers and shiny eggs.
While the push for self-driving cars has produced useful safety enhancements, those enhancements have not generated returns that justify the initial investments in fully autonomous cars. Major car companies have been abandoning or pulling back their investments, with Apple the latest to abandon the dream. Crypto never produced anything lasting other than ransomware and scams. And the metaverse never did anything but make us contemplate life without legs, all for ten billion of Facebook’s dollars. With interest rates no longer near zero, and hence money no longer essentially free, tech companies need big wins to keep juicing their stock valuations. In today’s capitalism, it is not enough to be a near monopoly that produces consistent, large profits. The line must always go up; the growth must always accelerate.
And so, imitative AI. It holds out the promise, however false, of replacing entire industries and supercharging others through the magic of what-comes-next calculations. (That this would also further diminish the power of workers is an added benefit to a significant portion of our tech elite, perhaps making them more willing to stay the AI course than pure rationality would otherwise suggest.) This is not to say that machine learning and the other tools adjacent to, and at the foundation of, what we call AI today cannot be useful. They can be, and they have been. But imitative AI is not in the same class as those tools. It has every appearance of being a bubble, one destructive not only to the economy but to the environment as well.
Tech leaders may need to pretend that this is all well and good. The rest of us don’t. We should stop coddling their ambitions, make them pay for the environmental and economic damage they do, and force them to build something useful. Their lack of double-digit growth is not our emergency, unless we let the damage it causes become one.