And is the increasing reliance by junior developers on AI coding assistants storing up a generational skills shortage for the future – ‘professional debt’, if you will?
Social media is awash with people saying how quick and easy it is to code when generative AI is writing it for you. This is opening up a battle on two fronts for the developer profession:
1. Some senior executives, who care about delivering more quickly and not at all about technical debt, will see coding assistants as a means to replace expensive coders with cheaper and progressively lower-skilled prompt engineers.
2. Coding assistants have lowered the bar to entry to the profession, leading to a sudden influx of lower-skilled prompt engineers. Being both plentiful and more affordable, they’ll increase competition for developer jobs and drive down salaries through oversupply.
These are both immediate problems. There is also a third sleeper problem caused by coding assistants. As we’ll explore shortly, using one can reduce engagement with the problem, with the result that junior coders / prompt engineers are failing to develop valuable skills and critical thinking early on in their careers. The short-term benefit of increased productivity from coding assistants is creating ‘professional debt’, a future generational skills gap.
Are coding assistants actually effective? #
This is a qualified ‘yes’. A recent study (Cui et al., 2025) showed that coding assistants do increase the productivity of developers, particularly more junior and shorter-tenured ones.
In that study, productivity was defined as the number of code commits and pull requests (a request for newly-developed code to be merged into the main code base), coupled with overall build success (errors in the code will usually prevent the build from succeeding). However, these are all measures of output and give no indication of how well the code achieves the intended design outcomes. In other words, this particular study measured quantity, not quality.
Another study attempted to assess the quality of the code produced by OpenAI’s ChatGPT and Google’s Bard (now Gemini):
“The results of this empirical evaluation suggest that GPT-3, GPT-4, and Bard can generate the same functional and quality code for coding problems that have available solutions online.”
(Tosi, 2024, p. 12)
While the coding assistants were able to create code to a given specification that (mostly) passed tests and adhered to coding standards, the solutions generated were not novel. I do concede that a lot of coding can involve combining solutions to previously-solved problems, and coding assistants are going to be great for reducing that drudgery. But on the rare occasions when an entirely novel problem presents itself, for the time being at least, the spark of creativity needed to conjure up a solution will still need to come from human ingenuity.
Will anyone care why their code works anymore? #
You’ve probably read how people often overestimate the capabilities of their driver assistance systems (Tesla’s Autopilot being one of the best known). Although such systems are only partially automated, meaning the driver should be actively monitoring at all times with their hands on the wheel in case they need to take over, drivers are often lulled into complacency and divert their attention elsewhere.
This brings me to the more pernicious side-effect of coding assistants. It’s clear that they can generate passable code in response to well-crafted prompts, which for many situations is perfectly adequate. What they’re also doing is discouraging learning. We expect junior developers to be less productive precisely because making mistakes is part of the journey from understanding the problem, to solving it, to solving it elegantly.
Just as with the driver aids, no matter how much developers protest that they’re using coding assistants only for learning, inspiration and prototyping, if the coding assistant is doing a good enough job, people will become complacent and focus their attention elsewhere. In the quest to increase productivity, we’re potentially sacrificing acquisition of skills and understanding.
The effect of this abstraction will be a shift in where the skills lie. It creates a greater divide between the specialisms of ‘classically trained’ coders who can write code unassisted, and of prompt engineers who can only wrangle generative AI to create functional, quality code. In doing so, we’re creating a generation of prompt engineers who have no clue how or why the resulting code works. However you look at it, the bar to entry to ‘write’ software has lowered significantly. More on this later.
This abstraction is nothing new. I’m sure many elder developers often channel Monty Python’s ‘Four Yorkshiremen’ sketch to bemoan how easy younger generations have it. (“Memory-safe languages? Pah! We had to track memory allocation on our fingers and toes…”)
Does it matter that kids today can’t write code? Not until the software goes wrong. Then we’ll end up paying the premium to hire the remaining people on the planet who remember how stuff works.
Are developers vibe coding themselves out of a job? #
There are plenty of arguments justifying continued human developer presence: that an experienced coder is adding value higher up the chain by shifting role from writing code to writing effective prompts; or that the human spark of creativity remains necessary to inspire truly innovative code; or that coding assistants remain error-prone and need more human oversight than a junior intern (no longer necessarily the case).
All these arguments rely on senior executives – the ones hiring and firing developers – valuing these same things. If less skilled prompt engineers can churn out adequate code more cheaply and quickly, and ship so-so software to the satisfaction of senior management, then why pay the inevitable premium on salaries to hire expert coders? (Because coding is perhaps as little as 20% of the actual work, and because of technical debt, that’s why.)
When everything looks easy to an outsider, and when productivity is measured in lines of code rather than the quality of outcome achieved by that code, it’s not too much of a stretch to see that the developer profession has a potential crisis on its hands.
Let’s look at some potential trends:
Better code outputs without reprompting #
Up to a point, the rate at which a prompt yields shippable code will increase over time. Studies already show this (Tosi, 2024), although current models will likely hit a ceiling of effectiveness. Also, current models have no creativity: the code they create is always going to be derivative of known solutions in the training data.
Better design outcomes without reprompting #
I would speculate that coding assistants will become more adept at inferring the desired design objective from the prompt provided. In other words, prompts producing a good result first time (or requiring minimal refinement) will become simpler to construct, so even prompt engineering will become less valuable as a skill over time.
Thanks to coding assistants, the bar to entry to the development profession has already been lowered, and it’s only going to keep moving in that direction. The need for experienced developers actually writing and reviewing code will likely diminish in line with these two trends, allowing less experienced developers to tackle more complex tasks, and potentially reaching the point where human intervention is barely needed at all.
A profession under attack #
The developer profession is therefore facing an onslaught on two fronts:
1. From narrow-minded executives who will simply see the short-term benefit of replacing expensive coders with cheaper and progressively lower-skilled prompt engineers, without caring about any longer-term downside to code quality; and
2. From a sudden influx into the profession of lower-skilled prompt engineers, who are vacuuming up the available jobs and driving down developer salaries through oversupply. (Maybe they’ll also begin to regret devaluing their own skill set by broadcasting how easy and effort-free it is for anyone to create code with a coding assistant.)
While the population of true native coders won’t vanish overnight, they will become proportionally scarcer. Whether that scarcity translates into higher salaries or day rates rather depends on whether any technical debt from AI-generated code becomes enough of a problem for companies to create sufficient demand for their skills, and on whether those companies then survive long enough to do something about it.
Crack dealer economics #
Right now in 2025, all indications are that generative AI companies such as OpenAI and the respective divisions at Microsoft and Google are not directly profitable. They will remain unprofitable unless operating costs fall dramatically (unlikely), prices go up significantly, and/or the inclusion of genAI drives up demand for higher-order paid-for services. Google and Microsoft are betting big on this latter approach, while I would guess that OpenAI will be acquired by Amazon*, which will then monetise it more effectively through Amazon Web Services.
A cynic like me would observe that each of the big players is getting us hooked on free or cheap genAI – operating costs be damned. When we’re all irretrievably dependent on genAI for our products and business workflows, it’s not beyond the bounds of reality that they’ll all start to hike prices. And let’s not forget that our ‘supply’ (and ability to write code) becomes dependent on cloud-based services being available all the time. Even the largest providers have outages.
Should the price hikes happen – and my guess is that they will have to – perhaps the economics will eventually make it more attractive to hire humans again to write code. Assuming anyone remembers how.
It’s not beyond the bounds of possibility for organisations to create their own in-house coding assistants. Open-source large language models do exist, after all. However, the skill sets needed and the ongoing operating and maintenance costs would probably render this option impractical. And if you have the skills in-house to build an AI coding assistant from scratch, you probably don’t need a coding assistant in the first place.
* wild speculation here
What does this mean for our products? #
It is incredibly tempting to look at the promise that coding assistants offer now and studiously ignore the potential downsides that would kick in years later. For rapid prototyping and genuinely throwaway code, by all means, fire up those coding assistants and reap the productivity benefits. But we all secretly know that prototype code has an annoying habit of sneaking into production.
Given the general tendency to over-trust only partially capable automation, blindly accepting or fast-tracking new code created with the help of coding assistants will run the risk of introducing unwanted problems into our products. After all, the large language models underpinning coding assistants were not trained exclusively on elegant code examples, free from all bugs and security vulnerabilities. The genAI tech will undoubtedly mature in time, but for now any perceived productivity gains will need to be offset by more rigorous peer review and potentially rework of generated code by experienced, native developers. There are going to be some grumpy developers stuck doing the bit of the job they hate.
There may also be value in anticipating the professional debt problem. Assuming junior developers become more plentiful albeit lower-skilled, future value for the organisation could be created by actively investing in their professional development with a modern apprenticeship scheme or in-house academy. Although costly, establishing a programme that teaches classical coding and critical thinking skills to bolster their understanding of prompt engineering may serve to retain talent. This investment would help the organisation to avoid becoming wholly dependent on third-party coding assistants whose costs will likely spiral upwards in time.
Final thoughts #
When presented with functional automation, people become complacent in its use and trust it to perform beyond its actual capabilities. Applied to generative AI coding assistants, that complacency and over-reliance are already discouraging developers from learning their craft. Should this trend continue, we’ll find ourselves in a generational skills shortage that will be difficult to reverse, right at the point we’re most likely to need native coders to solve the problems emerging in our products.
Wholesale reliance on coding assistants will also mean that organisations’ ability to code becomes dependent on third parties, which can easily raise prices or suffer disruptions to service.
One way to anticipate this problem is to invest now in the continued professional development of junior developers, although the cost would likely offset any savings from hiring more junior developers using coding assistants in the first place.
Further reading #
‘AI Coding Assistants: Empowering or Replacing Developers?’ (2025) Medium, 12 February. (Accessed: 8 June 2025).
Banks, V.A. et al. (2018) ‘Is partially automated driving a bad idea? Observations from an on-road study’, Applied Ergonomics, 68, pp. 138–145.
Biondi, F. (2022) ‘Drivers of self-driving cars can rely too much on autopilot, and that’s a recipe for disaster’, The Conversation, 16 June. (Accessed: 7 June 2025).
Bowley, R. (2025) ‘A plea to junior developers using GenAI coding assistants’, 3 February. (Accessed: 8 June 2025).
Clark, A. et al. (2024) ‘A Quantitative Analysis of Quality and Consistency in AI-generated Code’, in 2024 7th International Conference on Software and System Engineering (ICoSSE), pp. 37–41.
Cui, Z. (Kevin) et al. (2025) ‘The Effects of Generative AI on High-Skilled Work: Evidence from Three Field Experiments with Software Developers’. Rochester, NY: Social Science Research Network.
Devonshire, T. (2025) ‘Vibe Coding Explained: A Revolution Or Just A Trend?’, Blank Slate, 12 March. (Accessed: 8 June 2025).
Elango, V. (2024) ‘How AI Coding Assistants are Impacting Software Developers’, Code Like A Girl, 11 December. (Accessed: 8 June 2025).
Gonen, H. et al. (2024) ‘Demystifying Prompts in Language Models via Perplexity Estimation’. arXiv.
‘Is AI Going to be the End of Junior Developer Roles?’ (2024) Antler Digital, 29 September. (Accessed: 8 June 2025).
Kim, H., Song, M. and Doerzaph, Z. (2022) ‘Is Driving Automation Used as Intended? Real-World Use of Partially Automated Driving Systems and their Safety Consequences’, Transportation Research Record, 2676(1), pp. 30–37.
Leidinger, A., van Rooij, R. and Shutova, E. (2023) ‘The language of prompting: What linguistic properties make a prompt successful?’, in H. Bouamor, J. Pino, and K. Bali (eds) Findings of the Association for Computational Linguistics: EMNLP 2023. Singapore: Association for Computational Linguistics, pp. 9210–9232.
Mann, T. (2025) ‘Not even OpenAI’s $200/mo ChatGPT Pro plan can turn a profit’, The Register, 6 January. (Accessed: 8 June 2025).
‘New Junior Developers Can’t Actually Code’ (2025) N’s Blog, 14 February. (Accessed: 8 June 2025).
Nordhoff, S. and Hagenzieker, M. (2024) ‘“I will raise my hand and say ‘I over-trust Autopilot’. I use it too liberally” – Drivers’ reflections on their use of partial driving automation, trust, and perceived safety’, Transportation Research Part F: Traffic Psychology and Behaviour, 107, pp. 1105–1124.
‘Remember when developers reigned supreme? The market for software coding goes soft’ (2025) CIO, 1 April. (Accessed: 8 June 2025).
‘Report: Developers and AI Coding Assistant Trends’ (2025) CodeSignal, 11 March. (Accessed: 8 June 2025).
Sharwood, S. (2024) ‘Microsoft remains massively profitable, AI payoff awaited’, The Register, 31 July. (Accessed: 8 June 2025).
Speed, R. (2024) ‘Millions forced to use brain as OpenAI’s ChatGPT takes morning off’, The Register, 4 June. (Accessed: 8 June 2025).
Tornhill, A. (2025) ‘Skills Rot At Machine Speed? AI Is Changing How Developers Learn And Think’, Forbes, 28 April. (Accessed: 8 June 2025).
Tosi, D. (2024) ‘Studying the Quality of Source Code Generated by Different AI Generative Engines: An Empirical Evaluation’, Future Internet, 16(6), p. 188.
Valenzuela, A. (2024) ‘Prompt Engineering for Coding Tasks’, Towards Data Science, 12 April. (Accessed: 8 June 2025).
‘We’re trading deep understanding for quick fixes’ (2025) IT Pro. (Accessed: 8 June 2025).
Yetiştiren, B. et al. (2023) ‘Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT’. arXiv.

