I’m Tired of Talking About AI

I feel like the ground is shifting beneath my feet. Several times in the last few weeks I’ve seen people I respect talk about how they use a generative AI assistant when coding, or criticize those who are critical of generative AI. People who were, themselves, critical of generative AI in recent memory.

I’m tired.

I am, deep in my bones, tired of talking about this. The first time I mentioned AI (that I can find) on Mastodon was in May 2023. The post was about how tired I was of hearing about it. In over two years, I have not become less tired of it.

I have considered that these changed opinions, coming from people I respect, should prompt me to reexamine my own. I have done so. I’ve taken a look at why I don’t use genAI, and checked in on whether those are still good reasons. I can’t find the change in circumstances or information that led them to change their minds.

I don’t honestly even know that I consider these people wrong. They’re smart people. Maybe they’re right. I’m a big fan of people changing their opinions in light of new information. I don’t know that those people think I should use the tool. There’s room for nuance here, as much as the discourse pretends there isn’t.

But I just can’t get there. I can’t look at this situation and say “yeah, I should use that tool” for any reason other than “people I respect find some value in it”. And that’s just not a good enough reason for me. It’s indicative, it’s enough to make me consider it, but it’s not enough to overrule my own judgment. What is my judgment for, if I’m just going to blindly follow people I respect?

Besides, other people I respect agree with my judgment. A quandary.

I’m sad. I’m tired. I don’t want to talk to people about computers anymore. Like Glyph, I have found that the constant checking in, the constant trying to turn the problem over in my brain to make it make sense to me the way it seems to make sense to other people, has worn me down. It’s exhausting.

So I’m done talking about AI. Y’all can keep talking about it, if you want. I’m a grown adult and can set my own mutes and filters on social media.

I’ve given this idea its due, and with this post I am absolving myself of having to think about it any more.

I’ve never actually written about AI on my blog before, though I have some unpublished drafts if you know how to find them. So I’m going to leave this conversation with a list of my objections to coding with AI. If I’m wrong about it, let this be a record of my wrongness. If I’m right about it, let this be a record of my argument.

LLMs have no theory of the system, and cannot form one.

The core of programming, to me, is to develop a theory of the system. As Naur says (as quoted by Ceej), a programmer with theory can “[e]xplain how the solution relates to the affairs of the world that it helps to handle.”

When talking about this with people, I often describe it as a perspective. Every problem space looks different depending on where you’re viewing it from. Where you view the problem from is your business’ thesis on that problem. Some vantage points are better than others. Some vantage points are more suited for certain customers than others. A firm understanding of your vantage point and its limitations and strengths is core to good system design.

This is, I believe, a solid core of my job, and the LLM is incapable of performing it. It cannot do it for me. It cannot help me do it. There is no shortcut here: I cannot use the forklift to lift the weights for me at the gym. I need to do my own thinking, build the understanding in my own brain.

“If you don’t start now, you won’t be able to find work soon.”

I find this argument so tedious.

This is not a logical argument; it’s just an appeal to fear.

You cannot scare me into believing something against my better judgment. I need to understand why my judgment is wrong.

Worse, this argument pairs particularly poorly with “it democratizes software engineering” and “vibe coding”. If it’s so easy now that anyone can do it, why can’t I pick up the skills later, when I actually need them for employment? Is it so easy that anyone can do it, or so hard that if I don’t upskill I’ll be left behind?

I’ve been professionally employed writing software for thirteen years now. I have navigated the advent of the cloud, and Docker, and web3. I’ve seen Rails’ ascendancy and NodeJS’s. I’ve seen Kubernetes and I’ve seen CDKs. I’ve seen conversational interfaces and I’ve seen voice assistants. I’ve seen smartphones and apps. I’ve seen technologies I’ve forgotten about already that the industry lost its damn mind over and was convinced were the future of software. For a time.

I feel reasonably confident in my ability to navigate this bubble, too.

“Why not at least keep one foot in the pool, in case it is the future?”

I’m busy, I’m tired, and I have better uses of my time and energy.

If I were going to invest time in my career, Rust and TypeScript are much lower-hanging fruit that I’m much more confident would yield dividends. I’m not doing either of those things because I’ve glanced at them enough to know I won’t enjoy using them as much as I like using Go, and so it hasn’t been important enough to me yet.

I’m certainly not going to enjoy using LLMs.

“You would’ve argued against smartphones or the internet.”

Would I have? I was there for smartphones, and I have no recollection of being skeptical of them.

But this isn’t a real argument. I argued against web3 and I made fun of Quibi, too. How are they doing? There are plenty of technologies I didn’t argue against. I was pretty on board with infrastructure as code. I don’t recall being skeptical of containers or the cloud.

Maybe I’m skeptical of some technologies but not others, and use my judgment to differentiate between those two groups.

“LLMs aren’t useless. They did this thing for me.”

Of course they aren’t. But being useful doesn’t make them a trillion-dollar industry, and I just don’t buy that they’re the future of software development.

Maybe they’ll be a tool software engineers can use in certain situations. Maybe I’ll come across a situation where it makes sense to me to reach for an LLM. I haven’t yet, but hey, I won’t say never.

I’m glad you’re finding them useful. Please leave me alone about them.

The energy cost

Look, a lot of people are disdainfully skeptical about environmental and energy use claims about AI. I don’t have the specifics. I gather it’s because the companies involved don’t want us to have the specifics.

I do know that the companies involved are all sinking an unthinkable amount of money into compute, and building new data centers and power stations to handle the load. Where’s the money going? What are the new data centers for and why do they need more power, if the energy cost is overstated?

I haven’t seen anything that will square that circle for me.

“Oh, that’s just for training, actually using the thing has much lower energy use.”

If you buy a car for $100,000 and then it only costs $50 a month in gas, is that a cheap car? No, you still spent $100,000 on it.

The bottom line is: if the energy we can currently supply isn’t enough, if new data centers need to be built for it, yeah, it’s using more energy. That seems pretty inescapable.

The ethics of it

People turn up their noses at “this is trained on stolen IP”.

“Since when do you care about IP?” they say.

And that’s fair. I’m not the world’s biggest fan of IP. I think it’s a dirty hack on a dirty hack that has stuck around for centuries and is an awkward solution to the problem it’s addressing, and the cracks are showing through it in a pretty big way.

I’d be very happy with some very aggressive reform of IP and copyright law.

Wanting that reform is not an intellectually or ethically inconsistent stance with being displeased that the law isn’t being reformed, just ignored by anyone wealthy enough.

I think it’s just another indication that AI is a tool of capital. Between the “public domain for thee, copyright for me” approach and the push from executives and VCs for its adoption, I’m wary. When you add in the open desire from executives to use it to replace labor, I’m alarmed.

GenAI, as it exists, is fundamentally a tool for undercutting labor. And it’s using stolen labor to achieve it, and mandating that labor implement its own replacement.

But I don’t believe it’s a suitable replacement. What’s far more likely, I think, is that it won’t supplant labor but will turn us into professional AI-output editors, devaluing our skills and driving our wages and working conditions lower.

This is about the point in the story where unions start being important.

The bubble

Look, the economics of this just don’t make sense, and the “what trillion-dollar problem will AI solve?” question takes up a lot of space in my brain. I see people talking about the limited ways in which they work with it now, and wonder what happens when the bill comes due. We wouldn’t, as a society, pay a trillion dollars to solve those problems. Not even close.

Where’s the money coming from?

I’m suspicious it’s not. I believe that at some point the music will stop and the funding will dry up. And at that point, what happens to these tools? What happens to the people using them? What happens to the executives who’ve been misled about how much productivity they can expect per dollar?

What future exists where we continue on even as we are, let alone with more advanced capabilities?

Where’s the money coming from?

Even in the best-case scenario, I’m not sure this is the job I want

Let’s assume the best-case scenario. Generously, let’s assume coding agents can solve any problem if I describe what I want them to do. Let’s assume I’m sipping rocket fuel.

What does my job look like?

It looks like prompting an agent to do a thing, then skipping over to another agent to review its answer to a different problem. Jumping all day between agents’ attempts to do what I asked, juggling a dozen or more projects so I don’t waste any time. Reviewing code for something that can’t learn from the time I invested, catching the same mistakes over and over.

I’m neurospicy. Switching between tasks is expensive for me.

Is this a job I would want? Would I have gotten into this career if this was the work? Would I still be here?

Set aside wishy-washy arguments about craft or fun. Let’s accept that people have strengths and weaknesses. Right now, my job optimizes fairly well for my strengths. If that job changes to one that targets all my weaknesses, is that a career I should stay in?

What a bleak future.