AI Won't Take Software Engineering Jobs
A detailed look at the AI of today, why it's a continuation of yesterday, and what tomorrow will look like.
Welcome to the Scarlet Ink newsletter. I'm Dave Anderson, an ex-Amazon Tech Director and GM. Each week I write a newsletter article on tech industry careers and specific leadership advice.
I volunteer at the local high school to assist with their senior exit interviews. This is a process where senior students are asked about their plans after high school. During my introduction to these students, I mentioned that I used to work at Amazon.
Last week, one student asked an interesting question. She said:
“Considering advancements in AI, is it a bad idea for me to pursue a career in software engineering? Will this career path disappear?”
It’s an interesting question. Social media and mass media are certainly full of excited proclamations declaring that software engineering is almost done.
Business Insider: Software engineers are getting closer to finding out if AI really can make them jobless.
Below are my thoughts on AI, software engineering, and what the future holds for our career path.
Welcome to those of you new to Scarlet Ink! Each week I send an article to all subscribers. Free members can read approximately half of each article, while paid members can read the full article.
For some, half of each article is plenty! But if you'd like to read more, I'd love you to consider becoming a paid member!
AI and the foreseeable future.
What we commonly call AI these days is pattern recognition software, or, more technically, machine learning. You can give it a pile of data, and it can identify features (patterns) in that data. This allows it to accomplish things like the following (there's a toy code sketch of the idea right after the list):
“This looks like a dog because I’ve seen numerous dogs, and lots of not-dogs.”
“People who like these movies tend to like these other movies.”
“If I show you these ads, you’re likely to click on them based on everything I know about you, and what you have in common with people who click on ads like this one.” (we all love this type of AI, right?)
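To make the “this looks like a dog” idea concrete, here's a minimal sketch of pattern matching with a nearest-neighbor classifier. The features, numbers, and choice of model are invented purely for illustration; they aren't from any real system.

```python
# Toy sketch: a classifier that has "seen numerous dogs and lots of not-dogs."
# Features and labels are made up for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Each example: [weight_kg, ear_length_cm, barks (1 or 0)]
examples = [
    [30, 12, 1],   # dog
    [25, 10, 1],   # dog
    [4,   3, 0],   # cat (not-dog)
    [500, 20, 0],  # cow (not-dog)
]
labels = ["dog", "dog", "not-dog", "not-dog"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(examples, labels)

# The model has no concept of what a dog *is*; it only matches new inputs
# against the patterns it has already seen.
print(model.predict([[28, 11, 1]]))  # likely ['dog']
```

The model never forms a concept of “dog.” It just measures which known examples a new input most resembles, which is the whole trick.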
LLMs (large language models like ChatGPT) are the latest incarnation of pattern recognition software. They take in massive amounts of data (text or code) and do an amazing job of guessing what comes next. While there are technical differences, they’re still performing a similar task. Our imperfect human brains view LLMs as magic, but they’re a continuation of recognizing pictures of dogs.
While the output of these LLMs feels special because they can create poetry or tell stories or write code, they’re still predictors. They take past uses of words in poetry, medical journals, webpages, code, and books, and use those patterns to produce output that matches your request.
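Here's a deliberately crude sketch of “guess what comes next.” Real LLMs use neural networks over tokens rather than word counts, and the tiny corpus below is made up, but the spirit is the same: prediction from observed patterns, not understanding.

```python
# Toy next-word predictor built from raw frequency counts.
# This is NOT how real LLMs are implemented; it only illustrates the
# "predict what comes next" idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the most common follower of `word` in the training text."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # 'cat' -- chosen by frequency, not by meaning
```

Swap the word counts for a vastly more capable statistical model and the scale for trillions of tokens, and you get something that writes convincing prose, but the job is still prediction.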
An AI Coding Assistant is also an LLM, but instead of prose, it’s trained on programming languages. Feed it a mountain of code, and it can predict what code is needed in a specific situation. It can autocomplete code like a beast because it recognizes a pattern and fills it in.
This even works for larger problems, similar to my poetry example. We can say, “Build me an iPhone app that takes in weight and height and returns a BMI.” And an AI Coding Assistant might do a decent job.
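For a sense of what that assistant might spit out, here's a sketch of the core calculation. The prompt above asks for an iPhone app; this is plain Python for brevity, and the function name and checks are my own invention rather than anything a specific assistant produced.

```python
# A plausible autocomplete for the BMI prompt above (illustrative only).
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index = weight (kg) divided by height (m) squared."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / (height_m ** 2)

print(round(bmi(70, 1.75), 1))  # 22.9
```

This kind of code appears thousands of times in training data, which is exactly why an assistant handles it well: it's a pattern it has seen before.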
AI is improving, and these AI Coding Assistants will expand to recognize patterns in more than just code. AI Coding Assistants today mostly help software engineers write code. But soon enough, AI will get decent at identifying patterns in other types of software development problems. In the design phase of a project, you could describe a specific technical issue, and it will be able to outline how common components should plug together: for example, which types of caching layers make sense when paired with a specific type of distributed system.
Yet, all of these exciting developments are not signals that software engineering careers are at risk.
Why?
Because AI doesn’t understand.
A common concern with AIs is that they hallucinate, aka make things up. This is not a temporary concern, either. We can’t “solve” the hallucination problem for one core reason: the model doesn’t understand what it’s doing. There is literally zero understanding behind its answers. We (humans) use our understanding of the world around us to fact-check our assumptions. No current AI understands the world.
For example, let’s ask ChatGPT about Ivy League schools.
This isn’t a matter of ChatGPT being “stupid”, or needing more processing power or more training. This is a fundamental aspect of how AI works. ChatGPT 8 will certainly be great at replicating patterns. It will create the best poetry and the best imitations of other people’s code. It will probably not be tricked by my follow-up question, either.
But it still won’t understand. It cannot understand what a school is, or what letters are. It is a complex algorithm, but still an algorithm. It guesses at an output without understanding a single thing it is doing.
We have taken thousands of evolutionary steps from the first machine learning algorithms, which we probably used for ad recommendations (because we love money), all the way to LLMs. But it is still the same idea: a pattern predictor.
It will not take an evolutionary change for our AI to understand; it would take a revolutionary one. You do not take an excellent pattern recognition algorithm, give it enough processing power, and suddenly hear it ask, “Who am I?”
Yes, people are searching for a path to Artificial General Intelligence (AGI). That breakthrough could take years, or it may never come. I’m a great believer in the long-term ability of humans to solve almost any problem, but this is closer to asking, “Will we solve faster-than-light travel?” We don’t know. Do the laws of physics support it? We just don’t know enough about the universe to map a roadmap to either problem.
Until it understands, an AI is building with a blindfold on. It doesn’t know what problems require solving (or, in fact, what problems are), what its outputs mean, or whether it has done something valuable.