Is Imitative AI the new NFT?
I am supposed to like Douglas Adams. Everyone who knows his work and spends any amount of time around me inevitably suggests I read his books, especially once they learn of my love of Terry Pratchett and shows like Kids in the Hall. My wife, who knows me best and whose taste, outside of men, is exceptional, has tried multiple shows, books, and movies to get me into Adams. And I keep bouncing off.
Adams, because of some combination of his voice and my own personality, simply does not work for me. I rarely find him funny, and more often than not I find his work boring. Despite the similarities to other works I do enjoy, Adams is simply not for me.
What, then, does this little admission of poor taste on my part have to do with imitative AI? I am wondering more and more if imitative AI is going to go the way of NFTs and other crypto hype. Now, to be clear, I do not think that what we call AI, which is probably more properly called machine learning or data science, is itself all hype. In areas like assisted driving and medical research, AI is a boon to human beings. But the more I read about, play with, and think about imitative AI, the more it resembles the NFT/crypto bubble rather than the Web bubble.
NFTs and crypto don’t really have use cases outside of gambling and aiding criminal behavior. There are almost no uses for the blockchain outside of those areas, at least none that have manifested themselves, and cryptocurrencies have proven too complicated and insecure to serve as a means of purchase or a reliable store of value. They add nothing to society at large and do almost nothing but cause harm. Imitative AI seems to be heading down that same path.
The large language and visual models at the heart of imitative AI seem to carry the same potential for criminal and societal harm that crypto and NFTs do. They are used to create deepfake porn to hurt innocent people. They can be used to generate fake voices to run scams. Recently, a finance employee was scammed out of millions of dollars by a video call featuring convincing deepfakes of the CFO and coworkers he knew personally. Imitative AI is flooding the web with disinformation and cheap, derivative regurgitation of news and other work. And, of course, it lies about people.
That is all bad, but surely the good that imitative AI brings can outweigh those harms? To that question, I ask one of my own: what good? Imitative AI systems are plagiarism machines, trained on data their creators may not have had the right to use. More importantly, their “artistic” endeavors don’t really provide any benefit to society.
As I have discussed before, imitative AI systems drive out human voices, and thus the possibility of change in art. Since they are designed to find the most likely word or pixel based on their training data, they cannot really advance or change art in any fundamental way. They can merely copy what they have been trained upon. And if the MBA bros in charge get their way and imitative AI is used to replace writers and artists, or at least drive their earnings down even more, then art stagnates.
This concern applies to one area where imitative AI does seem to have a potentially beneficial role: programming. Tools like Microsoft’s Copilot really do minimize the amount of boilerplate code you have to write from scratch, and they really do help fill in syntactic details when you are stuck. But there are tools already out there that do some of that work (the first thing any serious programmer does is write or find a tool that generates the boilerplate for them) without the economic or environmental costs that come with imitative AI. And computer science is still a young field; imitative AI cannot come up with new languages like Go or Haskell (a pause here while my computer science friends yell at each other about the relative merits of their favorite languages) for the same reasons it cannot come up with new kinds of art.
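To make the point about existing boilerplate tools concrete, here is a toy sketch (my own illustration, not any particular product): a few lines of ordinary string templating can churn out the repetitive code an imitative AI would otherwise autocomplete, deterministically and with no model or data center involved.

```python
# A toy boilerplate generator: given a class name and its field names,
# emit the source for a simple data-holding class with __init__,
# __repr__, and __eq__, the way code generators and IDE templates
# have long done without any machine learning.

def make_class_source(name, fields):
    """Return the source text for a simple data-holding class."""
    args = ", ".join(fields)
    assigns = "\n".join(f"        self.{f} = {f}" for f in fields)
    reprs = ", ".join(f"{f}={{self.{f}!r}}" for f in fields)
    eqs = " and ".join(f"self.{f} == other.{f}" for f in fields)
    return (
        f"class {name}:\n"
        f"    def __init__(self, {args}):\n"
        f"{assigns}\n\n"
        f"    def __repr__(self):\n"
        f"        return f\"{name}({reprs})\"\n\n"
        f"    def __eq__(self, other):\n"
        f"        return {eqs}\n"
    )

source = make_class_source("Point", ["x", "y"])
print(source)
```

Python’s own standard-library `dataclasses` module performs exactly this kind of mechanical generation, which is part of why boilerplate was largely a solved problem before large language models arrived.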
There are other potential uses of imitative AI. Some imagine AI reading your email for you, for example. It can summarize the important bits of an email so you don’t have to read all of your coworkers’ winding road to the point. You can then have imitative AI respond to the important points automatically under some circumstances. How, though, is that actually going to help? If you don’t read the emails, how can you be sure you haven’t missed a nuance? If your business communications devolve into AI agents reading and writing to each other, how can your people know whether they are communicating the correct information before a problem blows up in your face? How do you control the flood of non-useful data?
The Verge had an interesting article about a self-published author. Such authors, at least in certain genres, have to produce work very quickly or the Amazon algorithms will drive their readers to other authors. This author tried to use a ChatGPT competitor to fill in some hard-to-write sections of her book and to generally speed up the process of coming up with plot outlines and first drafts. What she found, though, was that the more she used it, the worse a writer she became. She began to forget plot elements, couldn’t slide back into the voice of the book, and began to lose her individual voice (what readers come for, at least in part) to the imitative AI output. All of imitative AI feels like that right now: a lot of hype, but the early indicators of success fall away once you examine the actual situations.
One of the more telling things about NFTs was how deeply and universally ugly they were. It was a clue, if you needed it, that NFTs were not driven by a love of art but rather by a desperate need to find something, anything, to monetize the blockchain. Imitative AI is beginning to have that same feel. Its focus on trying to replace the more creative parts of human endeavors by regurgitating what has come before, and only what has come before, strikes me as the same kind of desperation. Again, AI systems can and do have a real, positive impact on human lives. But imitative AI systems seem to be environmental disasters with little to no ability to benefit society. More and more, it seems that we are witnessing hype rather than ability.
I don’t want to read any more Douglas Adams. I will never enjoy most of his work. But Adams has been an influence on books and shows that I have enjoyed. Artists have evolved what they took from him into something new and fun for me (and really, my pleasure is all that matters here). Imitative AI systems cannot do that; they cannot advance the state of art, or programming, or our ability to effectively communicate the real world to others. I may end up being wrong, but the focus imitative AI puts on replacing creativity with regurgitation feels very much like the end stage of the crypto/NFT hype cycle. More and more it feels like a desperate attempt to monetize something the businesspeople don’t really understand before the hollowness of the promises and the true costs of the systems become clear.
Douglas Adams, if he were alive, probably would have written an acclaimed satire of imitative AI. And I probably wouldn’t have liked it, either.