Imitative AI, The Truth, and the Damage Done
NYC has introduced a chatbot to answer questions about the laws governing small businesses. It is a joke, constantly getting answers wrong in ways that will cause real harm to people. No, for the record, you cannot open a restaurant that serves meat made from people. And no, you cannot fire someone for refusing to follow the company policy of having sex with the owner right after clocking in. Both of those were answers actually given by the chatbot, along with more mundane wrong answers, such as claiming that you cannot be evicted for not paying rent. The thing has been a disaster, but NYC insists on keeping it up anyway. Why? In part, I think, because they are convinced they can save money using it.
The chatbot is essentially a fancy Frequently Asked Questions web page. The cheapest way to produce that kind of material is simply to put up a web page with a list of, well, frequently asked questions and their answers. But for complex material, like the legal code of NYC as it applies to renters, landlords, employees, and small business owners, that isn't especially helpful. So you hire some folks to parse common questions and tie them back to meaningful answers, and perhaps provide a phone or chat or email backup for when that fails. You can already see how this is a failure from the perspective of people who measure everything as a cost: it involves paying people to do work, and we simply cannot have that. Some billionaires in Tribeca or the Financial District might have to pay a smidge more in taxes to keep the city their wealth depends on functioning. Better, surely, not to pay those people and to let the magical imitative AI answer those questions.
In the immediate term, the tradeoff looks good to anyone who doesn't understand systems. Less money spent supporting NYC small businesses and the people who interact with them. What could go wrong? Well, you could end up with Cannibal Eatery, for one. And while that is an extreme case unlikely to ever happen (though I am sure the Post would give it at least three stars), there is no doubt that people have already acted on the bad advice the NYC system has spit out since it was turned on. That, in turn, is going to mean more cases in the courts, more complaints to the police and other social services in NYC, and eventually someone is going to sue the city for providing inaccurate information.
This thing is going to cost a lot more money, in other words, than the people it replaced. It won't do so in ways that are easy to see, but it clearly will to anyone who understands how people and the legal system work. And this was all predictable. Imitative AI cannot meaningfully answer questions. All it does is take in input, compare it to its training data, and, based on some math, guess what set of words should come next. It has no model of the world, no underlying information. It is guesses all the way down, and that inevitably means it will produce false results. These limitations are fundamental to the way it works. This humiliation for the city was easy to see coming, if only the people responsible would look.
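The guessing loop described above can be made concrete with a toy sketch. This is emphatically not the city's system or a real language model, and the training text is made up; it is a bigram model, a drastic simplification that nonetheless shows the same basic mechanism: predict the next word purely from statistics over training data, with no model of the world behind it.

```python
# Toy bigram "language model": guesses the next word from counts over
# training text. Imitative AI does this at vastly larger scale, with
# fancier math, but the guess is still a guess.
import random
from collections import defaultdict

# Made-up training text for illustration only.
training_text = (
    "you cannot be evicted without a court order "
    "you cannot be fired for refusing illegal orders"
)

# Record which words follow which in the training data.
follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def next_word(prompt_word):
    """Guess the next word by sampling from words seen after prompt_word."""
    candidates = follows.get(prompt_word)
    if not candidates:
        return "<no idea>"  # nothing in the training data to imitate
    return random.choice(candidates)

print(next_word("cannot"))  # prints "be": a statistical guess, not knowledge
```

Notice that the model answers confidently whenever the training data contains anything at all after the prompt word, whether or not the continuation is true in the world. Scale that property up and you get a chatbot cheerfully telling small business owners things that are false.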
But they never will look. To the people in charge, a world where they don't have to pay for expertise, where they don't have workers to worry about, is nirvana. It is easy to justify the move as a cost savings, because the costs it saves are easy to see compared to the costs it imposes. And a lot of the costs it imposes are borne by others: workers, landlords, and business owners who rely on its information to conduct their commercial affairs. The slow degradation of the workings of the market under the weight of imitative AI nonsense will be even harder to parse, but it will eventually make NYC less dynamic, less able to run a marketplace, and thus, at the margins, less rich than it would have been. The costs are everywhere, but the people in charge do not want to see them.
This is the future of imitative AI: short-sighted leaders replacing functional systems and processes that require human expertise with imitative AI systems that impose greater costs than they save, but in a way that makes the savings easy to highlight and the costs easy to obscure. Critical business and governmental systems become progressively less reliable. Friction in normal business and social interactions increases, decreasing the effectiveness of markets and services. Nothing can be relied upon, and no one who knows anything is allowed within a hundred miles of anywhere their knowledge might help. All in the name of not paying people. Unless we find a way to make the AI companies liable for their training data and outputs, this is how imitative AI will inevitably be used to make us poorer and more poorly served.
The future they imagine is an AI chatbot spewing nonsense in the face of humanity forever.