“Hal, quick! Generate a catchy hook to hold the reader’s attention!”
“I'm sorry Dave, I'm afraid I can't do that.”
In 2001, two French robots made a transhumanist anthem that would become one of the most iconic electronic tracks of all time. “Harder, Better, Faster, Stronger” portrays the unstoppable progress of technology and humanity’s obsession with productivity.
This 4-article series invites you to look at the “Harder, Better, Faster, Stronger” nature of modern life, and feel empowered to choose how much you want to engage with it.
In part 1, I talked about the obsession with growth. In part 2, I talked about attention. Here, we’ll discuss speed.
I’ve been trying to read and watch videos about the business of the AI industry, and it’s sketchy as hell. But unfortunately I’m not market-savvy enough to understand the mountain-load of finance jargon. When I’m watching those videos, I can practically smell burning toast. Smoke comes out of my ears and blood out of my eyes.
There are plenty of ethical problems with the current state of AI:
AI companies use “publicly available” data to train their models, i.e. they steal our intellectual property and infringe on copyright — every bit of writing, photos, and videos that we put out on the internet.
The energy costs of AI are insane and are still growing.
The promise of AI (an end to tedium) doesn’t match the broad reality (trying to replace human labour and skill to save money).
I don’t want to talk about any of that. Unfortunately, facts don’t change minds, otherwise anti-vaxxers wouldn’t be a thing. Business frequently curb-stomps ethics into the ground.
What I do want to talk about is that AI is a fantastic marketing term, and that the technology being presented as AI is just not good enough to match the branding.
The promises and inadequacies of AI
Here’s the problem — AI could be absolutely disruptive in a way that changes every aspect of our society. AI could change everything. Or it could not.
One of my favourite blogs to read recently is Ludicity, written by a data scientist / software engineering consultant. They wrote a viral post called “I Will Fucking Piledrive You If You Mention AI Again,” which I highly recommend reading.
In it, they talk about how there are three possible outcomes with AI:
It becomes self-sufficient and keeps improving itself and becomes Skynet.
It fails because of any number of technical or logistic reasons.
We somehow thread the needle.
Their blog post focuses on the notoriously poor management of software companies (which I can attest to), and that the hype cycle is simply bad-faith grifting.
I want to focus on the other side of the coin — AI users.
Aside: Specialist-use AI is a whole different beast. Biotech analytical tools are doing amazing things such as mapping molecular systems and genetic material and improving our understanding of medicine. My outlook on that is positive, because they’re doing incredibly laborious processing work (i.e. try a billion times) that is hard for humans to do. This article does not discuss that AI, but the consumer-grade products available to the general public that are largely meant to replace labour and benefit corporations.
There are two main uses of consumer-grade AI products — media generation and conversational search.
The promise of Large Language Models (LLMs) is the ability to search for knowledge by texting. In that sense, LLMs are the next generation of search engine — the ones that A) find the most relevant information for you, and B) remember what you said previously so it can be more accurate.
It’s a fantastic model because conversations come naturally to human beings. LLMs find what you’re looking for and condense it so you can learn, move, and do things faster. The promise of LLMs therefore is accuracy and speed.
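The “remembers what you said” part is less magical than it sounds: on every turn, the entire conversation so far gets sent back to the model. Here’s a minimal sketch in Python of that loop. The `fake_model` callable is a hypothetical stand-in for an LLM, not any real API:

```python
# Toy sketch of conversational "memory": each turn resends the FULL
# history, which is how an LLM appears to remember earlier context.

def build_prompt(history):
    """Flatten the conversation so far into one prompt string."""
    return "\n".join(f"{role}: {text}" for role, text in history)

def chat_turn(history, user_text, model):
    """Append the user's message, query the model with the whole
    history, and append the reply. `model` is any callable mapping
    a prompt string to a reply string (a stand-in for an LLM)."""
    history.append(("user", user_text))
    reply = model(build_prompt(history))
    history.append(("assistant", reply))
    return reply

# A fake "model" that just reports how much context it received.
fake_model = lambda prompt: f"(saw {prompt.count(chr(10)) + 1} lines of context)"

history = []
chat_turn(history, "What is a transistor?", fake_model)
reply = chat_turn(history, "And who invented it?", fake_model)
print(reply)  # the second turn sees the first question AND its answer
```

Nothing here is state held “inside” the model: delete the history list and the conversation is gone, which is also why very long chats get slow and eventually truncated.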
Similarly, the promise of generators is to make or modify images, videos, code snippets, and more much, much faster than humans can. This is what caught the imagination of the public zeitgeist.
In practice though, both are imperfect technologies.
LLMs have a significant error rate, and sometimes they just make things up (called hallucinations). They don’t search per se; they guess based on patterns in the information they’ve ingested.
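To build intuition for why that happens, remember that at its core a language model picks the statistically likeliest next word given what came before. This deliberately tiny word-frequency version is nothing like a production LLM internally, but it shows the guessing principle:

```python
from collections import Counter, defaultdict

# "Train" a toy bigram model: for each word, count which word follows it.
corpus = ("the cat sat on the mat . "
          "the dog sat on the rug .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most common follower of `word` in the training data.
    It always answers confidently, even when several continuations
    were equally common. That confident guessing, scaled up billions
    of times, is the seed of what we call hallucination."""
    return follows[word].most_common(1)[0][0]

print(predict("sat"))  # "on" -- both training sentences agree here
print(predict("the"))  # picks ONE of cat/dog/mat/rug and states it as fact
```

The model has no notion of truth, only of frequency; when the training data is ambiguous or wrong, the output is a plausible-sounding guess delivered with the same confidence as a correct answer.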
For most people, generators can produce genuinely good results for text, images, and now even video. They can handle pointed tasks such as removing background items and filling in the space, but they cannot innovate, because they recombine training data. Crucially, they also struggle to adjust output based on feedback, which makes it hard for them to compete with high-quality human art and design.
And that’s still revolutionary. If you put aside the oh-so-small issues of stealing other people’s art and burning rain forests for fuel, current AI products are able to deliver on their promise of speed. They get you to a midway point, which you can build upon, tweak, edit, and clean.
But that requires users to have the expertise to notice the issues and solve them.
The sad thing is that users are dumb
If you’ve ever worked in retail, food, creative, or tech, you know that users and customers are duuuumb. Nowhere is Sturgeon’s Law more applicable than the general consumer.
Most consumers are entitled morons bare-minimuming their way through life and making it everybody else’s problem. I’ve done my best to avoid sitting on a high horse in this series, but as I write this, I feel justified in jumping up on the saddle. I feel anger welling up inside me.
The average customer, whether a diner or a director, will do their best to weaponize their incompetence and pretend it’s your fault.
Aside: Because of general user apathy and incompetence, generators also enable scamming like never before, making it easy to replicate people’s voices and likenesses.
So when technology comes along that says it’ll make things faster for you, most users and businesses will use it like a magic lamp to wish their troubles away. As such, you get some issues. Cue Yakety Sax.
A lawyer cites fake cases in a personal injury case against an airline.
Drive-through customers beg McDonald’s AI to stop as it adds 260 Chicken McNuggets to their order.
An airline pays damages to a grieving customer after its chatbot invented a bereavement-fare policy, leaving him to overpay for his tickets.
Part of the problem is that most users blindly put their faith in the output, without due diligence. Part of the problem is that users don’t have the expertise to notice issues. And part of the problem is that the promised efficiency is so fucking enticing that even smart users will push for it.
A reader of Thorough and Unkempt contributed the following story:
“Slowing down is hard when AI is at your disposal. I once had AI code up much of a website I was working on. While AI is great for front-end design, sometimes it doesn’t get the back-end logic just right.
Don’t get me wrong, it often covers edge cases I did not anticipate because it has seen this pattern a million times before, which I, just a dilettante trying to climb up to the shoulders of giants, may miss. But if the pattern I want deviates from what it has read a million times, it starts struggling.
My rational brain tells me to just look through the massive amount of code it generated and fix it. My high-on-speed brain, however, copy-pastes the whole thing back to the AI and implores it to do it the way I want. It doesn’t get it. So I keep imploring with more words. It keeps regurgitating more code without doing what I want it to do.
I finally give up and fix it manually. That barely takes any time, but the sheer lethargy I felt about actually understanding the code the AI generated frightens me, as a knowledge worker, with how easily it enables that lethargy.” — Surya C.
So, will AI survive user apathy?
So here’s where my predictions come in. I’m fully open to the possibility that I might be wrong, and that AI might evolve into AGI (Artificial General Intelligence) or ASI (Artificial Super Intelligence). However, there are significant logistical issues in the way of that.
1. Generators will be regulated or will hit a plateau
I think image and video generators will get a Disney-shaped baseball bat to the head in a dark alley sooner or later. Copyright law will have to catch up once the big players are plagiarized, and it will absolutely hamstring those businesses.
As deepfake scamming becomes a bigger and bigger problem, it’ll only take a couple of major scandals with a politician, a celebrity, or an oligarch for governments to wake up and litigate and regulate these technologies.
If that doesn’t happen, they have enough of a market with the general public to grow fairly big. However, the glass ceiling for them is that they cannot innovate. They cannot design new things because they cannot actually think. The nature of their technology is that they take training data and average it out to make their results.
This means they’re not going to make the big bucks because production houses will find limited use for them. Artists are anti-AI (rightly so) and enthusiasts don’t have the discipline and expertise to take an AI result and tweak it based on feedback.
Given the intense cost and resources required to run these technologies, it’s a coin flip whether generator businesses can sustain themselves without income from large media publishers.
2. LLMs will do well in specific contexts but struggle with general use
I think LLMs will survive. Even in their current form, they can be very useful to improve some noisy and laborious processes such as customer support and knowledge summarization.
AI trained on specific systems and controlled knowledge bases will become indispensable for businesses, IF they can get their shit together and actually clean up outdated information from their Confluence and their Help Centers. As a technical writer who does specifically that, I can tell you that most businesses do not have that amount of discipline.
I’m not as sure about AI summaries for general searches, because there’s just way too much information, misinformation, satire, and humour out there, which LLMs so far cannot differentiate. Even people can’t reliably recognize satire, let alone Google Gemini, which famously told people to eat rocks and put glue on pizza.
But AI is not going to be the magic lamp that people want it to be. LLMs will always be imperfect, because guess what? Training data is imperfect. Language is imperfect. Humans are imperfect.
And technocrats are unable or disincentivized to see that.
3. A possible massive market crash might temper AI enthusiasm
Remember the sketchy business videos I mentioned at the beginning? There is strong reason to believe that the hype cycle is a bubble, and that OpenAI and NVIDIA (among others) are about to cause a seismic market crash in 2025.
I hope it doesn’t come to that, because oligarch grifters will cash out and move on, while hurting millions of people along the way who over-invested in the hype, just like with cryptocurrency.
But we’re hurtling down the highway on greasy wheels now. Let’s tighten our seat belts and hope we don’t go careening off a cliff.
Faster is not necessarily better
I keep having to remind myself that this is a self-relationship blog. It is about the individual. I took a detour to write about my fears for the market, but I have a “Harder, Better, Faster, Stronger” series to deliver on.
So what self-reflection can you and I take away from this?
I used Midjourney and ChatGPT a bit a few years ago when they first came out. I stopped using them the moment I learned about the plagiarism. Generators can go to hell, but I don’t begrudge someone using LLMs to start or break through a mental block.
I would just caution you to do your due diligence. Just because it is faster does not mean it is better. And just because it’s easy does not mean it’s good.
There are people trying to write books and movies with AI. What’s the fucking point? The value of art is in the effort and expression. If you automate that, did you make it?
I also fully understand that some people don’t care about artistry. They’re hoping to make a salable product with minimal effort. I have nothing to say to them except this bit of bad news: in this process, they won’t develop the artistic eye or skill to realize that the output is slop, and that people won’t buy it. They might make a quick buck, but the enthusiast market (you know, the ones that actually pay) already hates low-effort AI content.
As I said in parts 1 and 2, the general zeitgeist is one of more, more, more. There is demand for faster content production and faster trends because fake urgency benefits the market. The greater the volume of things out there, the faster it all becomes noise and the faster people move on to other things. This is why AI is so attractive.
However, in this ephemeral loop, there is no personal satisfaction to be found and no profit to be made. Don’t be so addicted to speed that you lose touch with yourself. We’re dumb users, but we don’t have to remain that way.
AI, with its pretense of perfection, is an illusion. You, with all your flaws, are real.
The Harder, Better, Faster, Stronger series
Jan 05, 2025. Why the Continuous Improvement Mindset needs to Die
Jan 12, 2025. There’s a Poison Spreading Among Us
Jan 19, 2025. Will AI Survive User Apathy?
Jan 26, 2025. The Path of Greater Resistance