The Generative AI (GenAI) zeitgeist continued to ramp up in 2025, bringing significant disruption across all IT sectors. This has been felt most acutely in discussions about entry-level roles, where hiring, role design, and expectations are all being re-examined. GenAI tools allow IT specialists to quickly rough out solutions and even produce complete products that implement well-known solutions. Alongside a stuttering economy, increased taxation, and the continued drive for efficiency, it has been widely reported that entry-level roles in the tech sector have declined.
GenAI offers a certain kind of magic. Since the inception of software engineering, practitioners have tried to find ways to capture, reuse, and maintain libraries of code. This has always been a thorny problem—both because of intellectual property concerns and because organisations tend to fund new projects while under‑investing in long‑term maintenance. GenAI allows the collective wisdom of past development to be harvested efficiently and redeployed in a new kind of ‘clean‑room’ environment. Practitioners are already seeing reductions in routine errors, faster iteration, and higher individual productivity in problem domains.
If you look to the early Industrial Revolution—and later the information revolution—for comparison, the benefits were first to capital and then to experienced practitioners able to adapt quickly. Evidence shows that such periods are often marked by labour‑market turbulence rather than immediate aggregate growth. However, workforce design typically lags technological innovation. In the later stages of the Industrial Revolution it was recognised that skilled workers were still needed. Even when roles were broken down into easily learned steps, knowledge gaps emerged and quality declined. Because this recognition came decades later, retraining staff who had not been able to learn through doing proved difficult. Through this lens, GenAI may not eliminate the need for skilled labour so much as shift the locus of skill—from production to supervision, integration, and judgement.
Human intelligence versus AI
Entry-level IT roles have never just been about production. They are how organisations learn how their systems behave through the messy reality of experimentation, failure, and repair. While GenAI can review and reproduce work, its strengths lie in predictability and repeatability rather than in responding to novel situations. Human intelligence retains an edge in recognising patterns outside the norm, unintended interactions between systems, and emerging risks. These skills cannot be learned in classrooms alone; they develop through real-world experience—reading logs, living with the consequences of workarounds, and managing technical debt in systems never originally designed for current use. Entry-level practitioners often occupy the roles that deal with these operational realities.
Timing and context matter. We have seen this repeatedly in major IT failures where human oversight broke down: Therac‑25, the Post Office Horizon system, the Boeing 737 MAX crashes, and more recently the Maven targeting system. Across very different sectors, the pattern is consistent: when automation removes the time and space for human oversight, early warning signals are lost. If organisations automate away the roles that detect anomalies, validate outputs, and encounter friction, it should not be a surprise when both models and institutions begin learning primarily from assumptions rather than realities.
So where does that leave us? Entry-level roles have the time and curiosity to maintain epistemic contact with organisational systems – to ask the difficult questions and challenge assumptions, both machine and human. GenAI can make organisations faster and more confident. It can scale competence. But only organisations that preserve early technical experience retain judgement. History suggests that confidence grows quickest when learning is thinned out and that the cost appears later, in places no dashboard quite captures.
Questions to consider
Here are some questions for you to consider in your workplace.
- Where does learning from friction happen in your organisation? Who encounters raw system behaviour, incomplete data, edge cases, and operational messiness before it is abstracted into reports or dashboards?
- Which roles still interact directly with logs, errors, and anomalies? Are these interactions treated as valuable learning opportunities, or simply as problems to be escalated or automated away?
- How are new technical staff expected to develop judgement? Is experience gained through gradual exposure to real systems, or primarily through reviewing AI-generated outputs and summaries?
- What assumptions are embedded in your AI-supported workflows? And who has both the time and permission to question those assumptions as tools, data, and contexts change?
Opportunities to develop entry-level roles
One way organisations can preserve early technical experience while adopting AI tools is through structured workplace learning. The Open University’s digital apprenticeships are particularly valuable for entry‑level roles, embedding learning directly into the workplace. Apprentices bring a distinctive, reflective lens to organisational practice, combining creativity with sector‑leading knowledge.