Capitalism vs. Useful AI
The Washington Post has a fascinating story about how improved algorithms allow people with conditions that have robbed them of their speech to sound much more like themselves than the old methods allowed. Previously, a person would record thousands and thousands of words over sessions lasting up to thirty hours. The system would then splice words or sounds together from that list as needed, leaving a voice that was a choppy, robotic version of the person's. The new system uses a large data set of voices and a much smaller sample of the person's voice to generate individual sounds that mirror the original speaker's pitch, accent, and so on. It is much cheaper and, while not perfect, much more natural sounding.
This is the kind of work we should be discussing and focusing our efforts on. As I said, it is not perfect, and there are issues that need regulatory attention. The algorithms are not interoperable, so if you don't like one version of the voice, you have to repeat the process -- and the payment -- to move to another system. The same is true if your provider goes under, or if for some reason the typing-to-voice system you use won't work with the voice AI you chose. All of these are solvable problems, whether through industry standardization, government regulation, or some combination of the two. And this is a genuinely great advance for the people who need this service and their loved ones. Instead, though, we spend our time arguing about chatbots.
This is the problem of modern capitalism writ large. Our research is being driven not by new opportunities or markets, and certainly not by what is best for society, but by companies desperate either to maintain their monopoly-like control of a certain market (tech-driven search and its related advertising) while cutting key costs, or to break that monopoly. Make no mistake about those costs: these companies dearly hope that the people who program, write, and illustrate the material their businesses depend upon can be replaced, or at least have their bargaining power severely reduced, by these tools. What we get are half-baked systems that are known to lie to people and that make it easy to spread disinformation.
I have spoken before about how the cruelty of capitalism, and the resulting justified fear of immiseration, have warped the conversation around the perils and benefits of imitative AI like the systems described above. This is the flip side of that: since we live in a society where every decision is supposed to be left to the market, the market is the only acceptable source of AI products. As a result, we get too much auto-correct for disinformation and too few life-changing advances.
We simply cannot afford to leave the direction of AI to self-interested monopolies. There is real potential for imitative AI to change life for the better: better medicine, better tools, less time spent working. But none of those things will come to fruition if we don't take control of this work as a society. We can choose to put it to use for everyone. If we continue to leave it in the hands of companies, the benefits of these advancements will accrue only to the current owners of capital.