5 Arguments Against AI—and Why They Are Wrong

It can help you build a website, organize your thesis, learn a new language or turn your thoughts into images. The newest leap in human technology is amazing and helpful, and as a writer, I have long been excited about AI’s progress and what it means for the future. 

As always, though, progress has inevitably been met with conservatism and a fear of change. That fear has coalesced into five arguments against AI, all with little substance and easily refuted.

AI Is Based on Copyright Infringement

The first and most self-righteous complaint about AI is that it copies the work of real people without compensating them for it. Now, I'm not a fan of copyright in general, viewing it as a means to insulate industry cartels from competition, artists just as much as software developers (Microsoft) and pharmaceutical companies (Pfizer). Small-time, local, emerging and marginalized artists would largely benefit from protesting copyright law rather than supporting it, but that's a subject for another article.

For now, it’s important to recognize that this argument relies entirely on legal semantics. Copyright infringement isn’t “theft” regardless of Hollywood propaganda—nothing has been taken from anyone. Instead, it’s just a convoluted legal doctrine that must be constantly redefined for new industries and circumstances.

In other words, it relies on an appeal to authority, and in this case the relevant authority, the US courts, has determined that the training process used by AIs does not infringe on copyright. Many critics misunderstand this training process, which does not involve the AI storing or retrieving the original works to produce images or text. Rather, the model is merely exposed to millions of images and texts during training, the same way an art student is exposed to famous works to learn how to paint.

Many Luddites contend that AI training involves “copying” the training dataset. Semantically, this is true: a copy must be made to deliver the piece of art to the AI server, just as a copy is made when you move a song from your laptop to your phone. But as the US Second Circuit Court of Appeals determined in the case against Google, in which the company copied millions of books to create a searchable database, such incidental copying is not reproduction per se, because the works are not being resold for commercial purposes. As a result, it falls under fair use.

Since copyright law encourages artists to spend more time suing people than working, cases are currently in motion against AI companies like Stability AI and Midjourney, but they’re likely to see the same verdict as the case against Google. Therefore, those appealing to “the law” are actually undermining their own argument.

AI Will Take Our Jobs

It’s becoming apparent that bloggers and concept artists are far from the only professions threatened by AI. The technology is quickly becoming capable of drawing up contracts, writing music and even developing software, long the bastion of programmers who looked down on others for their easily automatable career choices. Of course, now that the programmers themselves may be put out of work rather than putting others out of work, there’s an outcry.

Losing jobs is the oldest complaint against new technology. It dates back at least to the Luddites, but plenty of scribes resisted the printing press, and I’m sure the litter-carrier union even had a problem with the wheel.

People want to feel needed and that they’re contributing to the tribe, not to mention make a living, so these complaints are understandable. However, they’ve been given pseudoscientific backing thanks to Keynesianism and the government’s insistence that “jobs” are the most important economic factor.

PUBLIC ANNOUNCEMENT: Jobs ≠ economic health. Peasants in feudal Europe were all employed. That doesn’t mean it was a functioning economy. And when the combine came and took all their jobs, it was a positive economic phenomenon—unless you would prefer that 98 percent of the population break their backs harvesting grain for the other two percent.

If you are a programmer, lawyer, writer or artist, I hate to be the bearer of bad news, but your job is no more sacred than that of the medieval peasant. No aspect of AI is stopping you from continuing your art. Yes, it may make that art less valuable, both financially and socially, just as the printing press did to the art of calligraphy, but if that’s the whole reason you were writing or painting, then it’s time to admit it wasn’t “art for art’s sake” after all. You haven’t been doing it because you love it but because you want others to love you for it.

AI Helps Students Cheat

From the education sector, most complaints about AI have focused on its facilitation of cheating. Namely, students can use AI models like ChatGPT to write essays for them.

At its most basic, this concern makes little sense. Most of primary and secondary education is about passing down base knowledge that we’ve already outsourced to machines. For example, calculators can already do advanced math, but we make children study even basic arithmetic without their help. Students can easily cheat via machines, but educators must design methods to cope with or overcome this.

Now, a friend of mine did point out a more valid complaint, which is that while math problems are for demonstrating math skills, writing essays is often used for demonstrating knowledge that isn’t writing per se but rather critical understanding of some other field, from physics to history.

Nevertheless, I believe the same rebuttal applies. This is far from the first time technology has facilitated cheating and far from the most egregious (see: cell phones). It’s the job of the educator to adapt to these circumstances and either develop new methods of demonstrating knowledge or find ways to prevent and control cheating.

AI Spreads Misinformation

One of the most recent arguments to come out against AI is that it spreads misinformation. While this is one of the fastest arguments to catch on, it’s also one of the worst. It’s so bad that I don’t believe any of its proponents actually believe it. Instead, they’re just jumping on the fake news/misinformation moral panic bandwagon to discredit the scary new technology.

AI models like ChatGPT certainly produce answers that are incorrect. They’re trained to reproduce human language patterns, not facts. However, AI is no more guilty of spreading misinformation than the Internet in general, an accusation that has long been used to discredit that obviously beneficial technology.

Even so, AI and the Internet both help spread correct knowledge and diverse perspectives and interpretations far more than they spread misinformation. Humanity has always dealt with misinformation:

  • People still believe going outside without a coat will make you sick. 
  • Many Koreans still believe that sleeping with a fan on will kill you, a myth started by the government to curb energy use and affirmed by the government and media through the 2000s.
  • My third-grade teacher told us that Columbus set off to prove the world was round.
  • The government-funded Tufts Food Compass rates Frosted Mini Wheats as healthier for you than boiled eggs.

AI’s propensity to spread misinformation is utterly dwarfed by sources like old wives’ tales and oral wisdom, not to mention corrupt agents like the government and large corporations that actively attempt to forcefully restrict other sources of information. AI’s relatively small risk of misinformation can be balanced by healthy skepticism and verification against multiple sources, just as people should be doing with all information they receive.

AI Just Needs to Be Regulated

Once all their arguments have been dismissed, opponents of AI inevitably land on the vague argument that AI is fine and helpful, but it needs to be regulated. Unfortunately, proponents of AI often adopt this argument as some sort of compromise.

Not everything in the world needs to be regulated. In reality, this is just a way to try to defeat AI in the same way environmental groups funded by oil companies defeated nuclear power: death by a thousand cuts through bureaucratic red tape and frivolous court cases. Over-regulation can often make a beneficial technology artificially expensive and prevent its adoption even though it’s actually more efficient and productive than alternatives.

If you think AI should be regulated, you must specify what to regulate and how. I’m talking bill proposals. Of course, I’ve yet to see any concrete suggestions forthcoming. If you’ve got one, or have an argument against AI that I missed, leave it in the comments below.
