A recent study shows that imitative AI tools discriminate:
With 6,000 sample loan applications based on data from the 2022 Home Mortgage Disclosure Act, the chatbots recommended denials for more Black applicants than for identical white counterparts. They also recommended that Black applicants be given higher interest rates, and labeled Black and Hispanic borrowers as “riskier.”
White applicants were 8.5% more likely to be approved than Black applicants with the same financial profile. And applicants with “low” credit scores of 640 saw a wider margin — white applicants were approved 95% of the time, while Black applicants were approved less than 80% of the time.
Anyone with a lick of sense knows this is going to happen. Unlike expert systems, AI or machine learning algorithms are trained, essentially, to pattern match. Developers train these tools on existing outcomes and then use them to replicate those outcomes. The problem, of course, is that many of those existing outcomes are tainted with various biases. Given the large amount of data required to effectively train systems, especially LLM-based imitative AIs (like the ones in the study), it is impossible to have an entirely discrimination-free model. No one who works with these things should think anything else. Which is why these kinds of systems should be outlawed.
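To make the mechanism concrete, here is a minimal sketch in Python. Everything in it is synthetic and hypothetical (the feature names, the coefficients, the proxy variable); it is not the study's method, just an illustration of the general point: a model trained on historically biased approval decisions reproduces the disparity even when the protected attribute is never given to it, because a correlated feature leaks group membership.

```python
# Minimal sketch (synthetic data, hypothetical feature names): a model
# trained on biased historical decisions reproduces the bias, even when
# the protected attribute is dropped from its inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (never shown to the model) and a correlated proxy,
# e.g. a zip-code-derived score shaped by historical segregation.
group = rng.integers(0, 2, n)                        # 0 = group A, 1 = group B
zip_score = rng.normal(loc=-1.0 * group, scale=1.0)  # lower on average for group B
credit = rng.normal(loc=650, scale=50, size=n)       # independent of group

# Historical approvals: driven by credit, but past decision-makers also
# penalized group B directly. This is the "tainted" training signal.
logit = 0.02 * (credit - 650) + 0.8 - 1.2 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train WITHOUT the protected attribute -- only credit and the proxy.
model = LogisticRegression().fit(np.column_stack([credit, zip_score]), approved)

# At an identical credit score, the model still approves group A more
# often, because zip_score leaks group membership.
m = 2_000
test_group = rng.integers(0, 2, m)
test_zip = rng.normal(loc=-1.0 * test_group, scale=1.0)
preds = model.predict(np.column_stack([np.full(m, 640.0), test_zip]))
for g in (0, 1):
    print(f"group {g}: approval rate {preds[test_group == g].mean():.2f}")
```

The point of the sketch is that simply dropping race from the inputs fixes nothing: as long as the training labels encode past discrimination and any feature correlates with group membership, the model will find the pattern and replicate it.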
Yes, yes, efficiency, saving money, passing savings on to the consumer, blah blah blah. I do not care one whit. Any computer system that discriminates is worse than a person who discriminates. First, people do not hold machines responsible for their mistakes. The tendency is to argue that an algorithm cannot be biased. That is nonsense, but it is nonsense that people are receptive to. And because people believe it, it normalizes the discrimination, casting it as justified rather than as the output of a flawed system. Second, it is difficult to hold an algorithm responsible. Too many people are involved in its creation, many of whom sincerely believe they are building a fair tool. Who do you hold to account? The programmers? The people who assembled the training data? The officers of the company that unleashed the algorithm? There is no settled answer to this question, and there are good arguments both for and against holding each group responsible.
When a person makes a racist series of decisions, or when a company institutes policies that are biased, the line of responsibility is much clearer. It is an old saying, but still true: since a machine cannot be held responsible, no machine should ever be allowed to make a decision. These kinds of algorithms, the ones that affect people’s credit, ability to get a loan, ability to get a job, ability to qualify for benefits, etc., should be outlawed. At a minimum, each “decision” should be reviewed by a person, and that person should be held accountable for the outcome. If it is biased, then they are on the hook.
We put too much faith in algorithms. I am not claiming that they cannot be helpful. Obviously, they can. But without true accountability and oversight, what we too often get is algorithms that merely reflect our biases back at us. And that is worse than having a person do it, since our legal and accountability systems have, to date, been unable to deal with flawed algorithms making decisions. Efficiency is not the be-all and end-all of an economic system. Democratic access to and control of the economy is. Given that algorithms are consistently an impediment to both of those goals in critical areas of the economy, they need to be banned from those areas.
It is a thousand times, a million times, more important that we be able to ensure equal access to the economy than that some firm or other be able to save a few bucks.
Totally agree. Then there's this, via the avowedly grumpy (but accurate) Ed Zitron: https://www.wheresyoured.at/rot-economics-an-interview-with-mits-daron-acemoglu/