Lyz Lenz has a very thoughtful newsletter out regarding how her book has been stolen by what appears to be an AI plagiarism shop. As with all her writing, it is engaging and insightful, and I want you to go read the whole thing right now. I can wait.
Back? See, I told you that was good. I am not going to talk about the main thrust of her article, as I don’t have anything meaningful to add; seeing as I have an entire newsletter section dedicated to “Failed Writer’s Journey”, I haven’t much to add to the travails of the modern published author. But one thing her father said in the piece caught my eye. In the article, Lyz asks her dad what AI he likes, and he mentions Grammarly. And I think that answer perfectly sums up the damage the hype cycle has done to our ability to understand technology.
Grammarly is not Artificial Intelligence in any meaningful sense. It is likely not using Large Language Models like ChatGPT for the vast majority of its work. It is not actually intelligent; it just follows a set of rules for figuring out whether a word is spelled correctly or your grammar is correct. It is often wrong, or at least incomplete, and defaults to the more generic forms of expression and acceptable grammar. Or at least that was my impression when I used it (it stopped working in a tool I use to write about a month ago and I have yet to figure out how to fix it. Yes, I do work in tech, why do you ask?). That behavior is what we used to call an expert system, not AI, and I think “expert system” still describes tools like Grammarly best.
Expert systems are systems that apply known rules to known contexts in order to help people make decisions. Grammarly applies the rules of spelling and grammar to your text and makes suggestions about how to improve both. The choice to use those suggestions is yours alone, and it functions because it has well-defined rules in a well-defined context. That is not intelligence in any meaningful sense, even if it is useful. Why, though, does it matter that Lyz’s dad thinks of Grammarly as AI? It is not just pedantry — the way we use words shapes how we think about the world around us. And lumping non-intelligent systems like expert systems or machine learning systems in with things like imitative AI distorts our view of what AI is and its real value. Or lack thereof.
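To make the distinction concrete, here is a toy sketch of the expert-system pattern: hand-written rules applied to a known context, producing suggestions the writer is free to ignore. The rules and the `check` function are my own illustrative inventions — this is emphatically not how Grammarly is actually implemented.

```python
import re

# A toy expert system for style checking. Each rule pairs a pattern
# (the "known context") with a human-authored suggestion (the "known rule").
# These rules are made up for illustration; real tools have thousands.
RULES = [
    (re.compile(r"\b(\w+) \1\b", re.IGNORECASE), "repeated word"),
    (re.compile(r"\bvery unique\b", re.IGNORECASE), "'unique' needs no intensifier"),
    (re.compile(r"\bcould of\b", re.IGNORECASE), "did you mean 'could have'?"),
]

def check(text):
    """Return (matched_text, suggestion) pairs; the writer decides what to do."""
    findings = []
    for pattern, suggestion in RULES:
        for match in pattern.finditer(text):
            findings.append((match.group(0), suggestion))
    return findings

print(check("I could of written a very unique essay essay."))
```

No learning, no statistics, no "intelligence" — just rules fired against contexts they were written for, with the final decision left to the human. That is the augmentation pattern the rest of this piece contrasts with imitative AI.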
AI and expert systems are types of automation. Generally speaking, from a societal perspective, there are two kinds of automation: the kind that assists people in some way and the kind that only replaces people. The lines are blurry, of course. Dishwashing machines in theory replace human dishwashers. Grammarly in theory replaces human editors. But in those examples, the damage is limited by circumstance. Either few or no jobs are lost (most people could not afford servants to wash their dishes for them, and people who use Grammarly are not going to hire a human editor to clean up their emails, or even the first few drafts of their fiction or massive reports), or the jobs tended to be low paying and were replaced by better-paying jobs (dishwashers became dishwasher repair people, to use the cliché).
In addition, automation that augments people is generally good for those people — it takes away something they had to do that was drudgery or harmful and replaces it with a machine. Expert systems like Grammarly reduce the tedium of working through your writing and fixing all the mistakes yourself. But it is your writing that provides the material Grammarly acts upon. Such systems clearly augment humans, largely to society’s betterment. By conflating that kind of system with AI, though, things like imitative AI get a halo they do not deserve.
Because imitative AI is clearly the second kind of automation, or at least trying to be. It is designed to replace humans in an area that human beings find neither drudgery nor are likely to see replaced by better or more remunerative jobs. Imitative AI can only make money if it can replace writers, programmers, and artists of various kinds well enough to justify the enormous costs of running AI systems. The CTO of OpenAI has stated baldly that some creative jobs will disappear and that those jobs shouldn’t have existed in the first place. The problem, of course, is that creative jobs are generally fulfilling, are not harmful, are more remunerative than the alleged replacement jobs, and are work that humans want to do. Conflating those systems with augmentation systems creates the impression that they are going to be equivalently beneficial, which is likely not the case. It also implies that they are going to be at least as successful as those systems, something else that is not yet in evidence. And, of course, it implies that imitative AI systems will be as accurate as previous systems — a proposition that is contradicted daily. By conflating what LLM-driven systems do with what things like expert systems do, imitative AI companies obscure the reality of the situation to their benefit.
Words matter. What we call things matters. False equivalencies and inaccurate comparisons serve to elevate imitative AI in a way it does not deserve. And that, in turn, makes it easier for its owners to downplay the harms imitative AI can and does inflict on society while making it easier for them to keep the hype train they currently depend upon moving. Grammarly is not AI. But it certainly helps the imitative AI companies that so many believe it is.
“Imitative AI can only make money if it can replace writers, programmers, and artists of various kinds well enough to justify the enormous costs of running AI systems.”
Thank you for cutting through all the hype around the fantasy of an AI utopia. It all boils down to a classical Marxist philosophy of capital vs. labor. With automated labor, ROI is easy to determine and maintenance expenses are predictable. Capitalism is designed to run businesses, not charities. Labor is not humanity but simply an expense to be managed - “human resources”.
I am reminded of a cartoon with two robots seated in a fine restaurant sipping wine while human waiters scurry to serve them. One of the robots remarks: “Why did humans ever imagine we would want to take their jobs?” AI benefits a capital-rich elite by denigrating the value of human labor. Capitalism morphed out of feudalism and now seems to be returning to its roots… “Let them eat new iPhone models.”