AI in Coding Lightning Talk

Series: Big Tech Alternatives

tl;dr: I covered why coding is a better use case than most for an LLM, because its output can be validated quickly, along with some concerns about long-term de-skilling.

Post Intro

This post is my text for a brief lightning talk about AI in Coding.

Talk Intro

My big thesis for today is that, when talking about large language models (LLMs), coding is the best of the widespread use cases.

There are significant caveats to my claim. There are real concerns with the current state of what we call AI. You’ve probably heard a lot of them and will hear more today. But I do think coding is unusually well suited to handle many of those concerns.

So for my talk, I will look at a couple of the concerns that are most relevant to my work, along with some usage principles to mitigate them.

Short Term: Validation

The first short-term concern is validation. An LLM is a non-deterministic model of language, which guarantees no accuracy of any kind. Maybe it is 98% accurate, and that’s legitimately impressive. But can you tell which 2% it got wrong? If not, it doesn’t matter, because you still have to treat it as if it were 100% wrong, at least in any context that impacts others.

So one usage principle I would offer is: don’t use it for any task where you can’t validate whether it got things right in less time than it would have taken to do the task yourself in the first place.

Good development practices can really help with this usage principle: linters, automatic formatters, manual and automated tests, step debuggers, and code reviews from colleagues. These will go a long way toward quickly pointing out anything that is wrong with the code. None of that validation is replaced by AI. Nor does AI replace the human factors, like deciding what we want the user experience to be in the first place. What it can replace very well is the first-draft stage before the validation. We can now have one ongoing chat instead of digging through dozens of pages of documentation and old forum threads to try to fit the pieces together.
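
To make that concrete, here is a minimal, invented sketch of the workflow. The slugify function and its tests are hypothetical, not from any real project; the point is that the LLM drafts the function, and our own tests decide whether the draft holds up.

    import re

    # Hypothetical helper, first-drafted with an LLM's help.
    def slugify(title: str) -> str:
        """Turn a post title into a URL-friendly slug."""
        slug = title.lower().strip()
        slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse everything else into "-"
        return slug.strip("-")

    # Our own tests remain the validation step; the draft has to earn its way in.
    def test_slugify() -> None:
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  AI in Coding  ") == "ai-in-coding"
        assert slugify("---") == ""

If the tests pass and a careful read of the draft makes sense, the validation took far less time than writing from scratch would have. If they fail, we have found the 2% immediately.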

This is the main reason I am arguing that coding, specifically that first-draft stage, is the best use for a large language model. Coding is inherently validatable in a way that most other uses are not.

Long Term: De-Skilling

The longer term is where I start to see more serious reason for concern, mainly because it is easy to start offloading your thinking. We’re all looking for ways to make our jobs easier and more productive, so if it seems like I can do something faster with an AI prompt, why wouldn’t I?

Because it carries risks. If I rely on it to do the basics of my job, and it is controlled by a few US Big Tech giants, then I am dependent on the whims of those companies.

I’m not going to get into the Big Tech part of that conversation. In a work context the factors to consider are different than for personal use, and we can only use what is approved by ICT anyway.

There’s still the de-skilling component, though. What do we do with the temptation to have it do all our work for us, sacrificing our ability to do that work ourselves? My usage principle here is definitely “easier said than done”: learn from it, don’t offload skills to it.

Cory Doctorow talks about centaurs and reverse centaurs. To quote him:

A “centaur” is a human being who is assisted by a machine (a human head on a strong and tireless body). A reverse centaur is a machine that uses a human being as its assistant.

The classic comedic example of a reverse centaur is the I Love Lucy scene where Lucy and Ethel are working at a chocolate factory. They are supposed to wrap each chocolate and put it back on the belt for the next person in line. The supervisor is clear that if any chocolate makes it past them unwrapped, they’re fired. But they can’t keep up, so they start desperately eating chocolates or shoving them into their clothes. The incentives have been reversed: the machine isn’t helping them pack chocolates anymore; they are being stressed to keep up with the demands of the machine.

"Lucy shoving chocolates into her mouth and top."

Coding with an LLM is susceptible to that risk. There’s pressure to do more, faster, which makes it harder to keep up mentally. Then you start losing the skills and become more reliant on the tool.

But the goal of my job is not simply to generate code as fast as possible. I should feel like the AI is helping me keep up with helping real people, not like I am struggling to keep up with it.

What practices can help with that? We can write our own comments in the code, or longer documentation, in our own words (a small example follows at the end of this post). We can have knowledge sharing within our team, where we swap notes on things we have learned and ask each other questions. These practices help cement that we are learning from the tool, not just accepting its output and moving on.

We can also frame our conversations, and how we log our issues, around what the change will mean for actual users. It’s never just "change this code." It’s "improve this user experience." That is ultimately the point of why we would even consider using these AI coding tools: we help real people. An AI can help us do that, if we use it well.
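
To make the comment habit concrete, here is one tiny invented example (the retry_delay function is hypothetical):

    # Exponential backoff for our API calls, capped at 30 seconds.
    # The LLM's first draft had no cap; we added one after reading the
    # provider's rate-limit docs. Writing that down in our own words
    # keeps the lesson with us instead of buried in a chat log.
    def retry_delay(attempt: int) -> float:
        return min(30.0, 0.5 * (2 ** attempt))

The code is trivial on purpose. The comment is what matters: it carries the reasoning, which is the part we cannot afford to offload.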