6 ways automation bites software developers

The dream of fully automated development is getting more real by the day, but is that a good thing? Beware of these six gotchas.

Every software developer knows the dream. We sit in some deck chairs by the pool as the AIs and no-code layers keep the enterprise stack running smoothly. Perhaps we get a whim or an urge to redesign some section of the web app or maybe even completely refactor everything. Without raising our heads, we just speak some command and the automated code generation gets everything right. Voilà. We’ve done our work for the quarter and now we can really relax.

Hah. None of these tools work that well. Oh, they often get some things right. They will, from time to time, get the code completion correct or adjust the parameters to successfully handle the new load. There are many ways that artificial intelligence and coding innovations make our lives easier.

But they’re usually great until they fail, which is all too often. This morning I spent an hour on the phone with my domain registrar because my simple change to a DMARC record wasn’t sticking. Oh, the web app told me that the change had been made successfully 48 hours ago, but that doesn’t mean that their machinery was sharing this new DNS value with the world. Nope. So I’m looking for a new registrar while their tech support staff tries to figure out what’s going on.

It’s a bit like Newton’s third law. For every wonderful thing that automation does, there’s an equal and opposite example of how automation screwed up. These forces aren’t always symmetrical because the automation usually works well most of the time. It’s just when you take your eyes off the ball or go on vacation that it finds a way to go completely haywire.

In the interest of venting a bit and maybe helping us approach automation with more wariness and less starry-eyed surrender, let’s take a brief pause for a steely-eyed reassessment. Here are six ways that labor-saving AI, no-code wonderfulness, and other advanced cleverness go wrong.

Garbage collection

In theory, memory allocation is not something that human geniuses should be worrying their little heads about. Modern languages have a layer that doles out chunks of memory and then sweeps them up when the data they contain is no longer needed. Garbage collectors allow programmers to think of bigger things like the value of their stock options.

And garbage collectors usually work well enough, except at the margins. Because they work automatically, you might think that memory leaks are a thing of the past. They’re certainly less common, but programmers can still allocate blocks of memory in ways that the garbage collector will never touch. To make matters worse, programmers don’t think it’s their responsibility to worry about memory leaks anymore, so instead of hunting down the misallocation, they often just throw up their hands and increase the amount of RAM in their cloud server. How much of the cloud’s RAM is filled with data structures that could have been freed up?
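
Here’s a minimal sketch of the pattern in Python (the cache and function names are hypothetical): the entries stay reachable forever, so no collector will ever reclaim them.

```python
_cache = {}  # module-level cache with no eviction policy

def expensive_transform(payload):
    return [x * 2 for x in payload]  # stand-in for real work

def handle_request(request_id, payload):
    # Every request adds an entry and nothing ever removes one.
    # The garbage collector sees a live reference and leaves it
    # alone, so the process's memory footprint grows without bound.
    _cache[request_id] = expensive_transform(payload)
    return _cache[request_id]
```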

There are other issues with automated memory management. Object allocation is one of the biggest time sinks for code, and smart programmers have learned that code runs faster if they allocate one object at the start of the program and then keep reusing it. In other words, set things up so the garbage collector doesn’t do anything.
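
A toy illustration of the idea in Python, not a benchmark: the first version allocates a fresh chunk of memory on every read, while the second fills one preallocated buffer in place, leaving the garbage collector with almost nothing to sweep.

```python
def count_newlines_naive(path):
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(65536):  # fresh bytes object per loop
            total += chunk.count(b"\n")
    return total

def count_newlines_reuse(path):
    buf = bytearray(65536)  # allocated once, reused for every read
    total = 0
    with open(path, "rb") as f:
        while n := f.readinto(buf):  # fills the same buffer in place
            total += buf.count(b"\n", 0, n)
    return total
```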

And then there’s the general problem that garbage collection always seems to happen at the most inconvenient time. The automation routines just kick right in, with no way of knowing or caring whether the latency and lag will ruin your experience. Developers who create user interfaces or code that needs to run in, say, medical hardware have good reason to worry about when the garbage collection hiccup will come along.
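
In CPython, one blunt workaround is to decide when the collector runs rather than letting it decide for you. A minimal sketch, with render_frame standing in for whatever latency-sensitive work you’re protecting:

```python
import gc

def render_frame():
    pass  # placeholder for latency-sensitive work

gc.disable()  # suspend automatic cycle collection
try:
    for _ in range(1000):
        render_frame()  # no surprise collection pause mid-frame
finally:
    gc.enable()
    gc.collect()  # pay the cleanup cost at a moment you choose
```

Reference counting still reclaims most objects in the meantime; disabling the collector only pauses the cycle detector.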

Interpreted code

The various scripting languages have made it far simpler to just knock off a few lines of code. Their relative simplicity and friendliness have won over many fans, not only among full-time programmers but also in related fields like data science. There’s a reason why Python is now one of the most commonly taught programming languages.

Still, the extra dose of automation that makes these interpreted languages easier to use can also bring inefficiencies and security issues. Interpreted languages are usually slower, sometimes dramatically so. The combination of automated memory management, little time for optimization, and the general slog of runtime interpretation can really slow down your code.

The speed has gotten better as programmers have figured out how to leverage the power of alternative runtime implementations or good just-in-time (JIT) compilers. Python developers have turned to the likes of Cython, Jython, Numba, PyPy, Pyston, and now Pyjion for faster execution. But there are still limits to what an interpreter can do.
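
Numba offers a taste of what JIT compilation buys: decorate a hot numeric loop and it’s compiled to machine code on the first call, so later calls skip the interpreter entirely. A sketch, assuming Numba and NumPy are installed:

```python
import numpy as np
from numba import njit

@njit
def dot(xs, ys):
    total = 0.0
    for i in range(len(xs)):  # compiled loop, not interpreted
        total += xs[i] * ys[i]
    return total

a = np.arange(1_000_000, dtype=np.float64)
b = np.ones(1_000_000, dtype=np.float64)
print(dot(a, b))  # first call pays for compilation; later calls fly
```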

Some say that interpreted code is less secure. Compilers can spend extra time scrutinizing the code before it ever runs, while the interpreter goes in the opposite direction, striving to deliver its results “just in time.” Also, the dynamic typing popular with interpreted languages can make it easier to mount injection attacks or other schemes. Of course, compiled code can be just as vulnerable. All programmers need to be vigilant, no matter what language they’re using.
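
The classic example is SQL injection, and the standard defense works in any language. A sketch using Python’s built-in sqlite3 module: splice untrusted input into the query string and the input becomes SQL; hand it over as a parameter and it stays plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")

name = "nobody' OR '1'='1"  # hostile input

# Dangerous: the interpolated string is now a different query,
# and its WHERE clause matches every row in the table.
leaky = conn.execute(
    f"SELECT * FROM users WHERE name = '{name}'"
).fetchall()

# Safer: the placeholder keeps the input out of the SQL grammar.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)
).fetchall()
```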

Artificial intelligence

Artificial intelligence is a much bigger topic than automation, and I’ve discussed the various dark secrets and limitations of AI elsewhere. The problems are simple to understand. While the AIs may be modern miracles that are better than anyone expected, they often produce bland and regurgitated output, completely lacking in spirit or individuality. And that makes sense because large language models (LLMs) are essentially just massive averages of their training set.

Sometimes AI makes things worse, tossing out random errors that come out of nowhere. The system is machine-gunning grammatically perfect sentences and well-structured paragraphs until—wait, what?—it suddenly hallucinates a made-up fact. To make matters worse, AI sometimes tosses out slander, libel, and calumny about living, breathing, and potentially litigious real people. Whoops.

The best use of AIs seems to be as a not-so-smart assistant for smarter, more agile humans, who can keep the automated genius on a tight leash.

Database queries

In theory, databases are the original automated tool that can keep all our bits in nice, structured tables and answer our questions anytime we want. Oracle even slapped the label “autonomous” on its database to emphasize just how automated everything was. The modern enterprise couldn’t run without the magic of big databases. We need their raw power. It’s just that development teams quickly learn their limitations.
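
One limitation teams learn early is the classic “N+1” query pattern, where innocent-looking code quietly issues one query per row instead of letting the database do the join. A sketch with Python’s built-in sqlite3 module (the tables are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (author_id INTEGER, title TEXT);
""")

# N+1: one query for the list, then another query per author.
# Each round trip is quick; the sum of them is what bites you.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
for author_id, name in authors:
    titles = conn.execute(
        "SELECT title FROM books WHERE author_id = ?", (author_id,)
    ).fetchall()

# One JOIN fetches the same data in a single pass.
rows = conn.execute("""
    SELECT a.name, b.title
    FROM authors a LEFT JOIN books b ON b.author_id = a.id
""").fetchall()
```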

Continue reading the full article at InfoWorld:

https://www.infoworld.com/article/3707253/6-ways-automation-bites-software-developers.html