Beyond ERP: Why the Real AI Challenge of 2026 Is the Operating Model
Human-in-the-Loop Is an Architectural Signal, Not an Ethical Strategy
Everyone is racing toward an AI-enabled future state, but most organisations are working on the wrong problem first. The real challenge of 2026 is not model capability, agent maturity, or responsible AI frameworks. It is the far harder work of changing the ERP-centred operating model that still sits underneath almost every enterprise. AI will arrive regardless. The question is whether organisations will be structurally ready for it. And who is accountable for making them so.
For decades, ERP has been the gravitational centre of enterprise technology. Not just as a system of record, but as the implicit organiser of work, authority, and accountability. Processes were designed around its constraints. Humans filled the gaps. Control lived in people as much as in systems. That model worked until intelligence began to move into the machine.
AI does not fit neatly into an ERP-centred world. It does not tolerate implicit intent, fragmented state, or human-held orchestration. When dropped into these environments, AI does not transform them; it compensates for them. So humans remain in the loop not because judgement is required, but because the operating model cannot yet stand on its own.
That is why the real AI task in front of us is architectural, not algorithmic. To reach a future where AI can act, not just advise, organisations must first decentre ERP as the organising principle of work and reframe it as a still critical but bounded system of record within a broader platform architecture. This is not about replacing ERP. It is about relieving it of a role it was never designed to play.
The inevitable AI future state cannot be achieved by layering more intelligence on top of yesterday’s operating model. It can only be achieved by doing the harder, less glamorous work of redesigning how intent, flow, and authority are expressed across the enterprise. That is the real AI challenge of 2026. So let’s get busy changing the world.
One of the quiet failures of 21st-century enterprise technology so far is that we keep using 20th-century language to describe it. “Human-in-the-loop” is the perfect recent example. It is usually presented as wisdom: a familiar sign that an organisation understands the limits of automation and prudently respects the role of its people. In reality, I think it signals something far less noble.
When AI strategy or product selection depends on humans remaining in the loop to make the system work, the problem isn’t ethics or caution; it’s architecture. What you’ve really done is extend an ERP operating model, wrap it in AI, and move it onto modern infrastructure.
AI has exposed the limits of application sprawl, outsourced responsibility, and ERP-centred thinking, all relics of a 20th-century operating model that no longer holds. The irony is that the one construct we should have preserved to manage this complexity, the CIO as a true technology leader, has been steadily dismantled at the very moment it is most needed.
For most of the modern enterprise era, systems were built around transactions, not flow. Work happened because a human initiated it, pushed it forward, interpreted exceptions, and absorbed ambiguity. Technology recorded outcomes. But people carried the intent of the transaction. ERP systems formalised this worldview.
But they were never designed to run organisations autonomously. They were designed to support human-led execution. Roles, approvals, screens, handoffs, and controls all assumed that a person sat at the centre of the loop.
When AI is layered onto this environment, it does exactly what you would expect. It becomes exponentially smarter at the edges. It summarises, explains, recommends, and reassures better than we can. It improves interaction without changing the underlying structure. So humans stay in the loop. They approve what the system cannot approve. They escalate what the system cannot interpret. They stitch together what the architecture cannot orchestrate.
This becomes a serious problem at scale. Boards, whether public or private, exist to govern systems that must operate reliably, repeatedly, and under pressure. Not once. Not in a pilot. Not in a demo. At national or enterprise scale, with failure modes that are political, legal, financial, and human. We’ve seen this before as part of every significant 21st-century change.
Rolling out a national vaccination program is not a communications exercise. It is a system-of-systems problem involving supply chains, identity, eligibility rules, appointment scheduling, exception handling, adverse event reporting, and public trust, all operating under intense scrutiny and time pressure. Boards were not asking whether the intent was ethical. They were asking whether the system could withstand volume, variation, and failure without collapsing.
The same is true of the shift to digital driver licences. Once a digital licence becomes a primary credential, it must work everywhere, all the time. Offline, across jurisdictions, under enforcement, and in edge cases the designers never anticipated. A board does not care that the user experience is elegant if the system fails during a roadside stop, a disaster response, or a court proceeding. Reliability and accountability are non-negotiable.
Or take the current nationwide KYC and identity reforms in Australia. These are not digital initiatives. They are foundational trust infrastructures. When onboarding fails, payments stall, benefits are delayed, or fraud scales faster than control, the consequences are immediate. Bank boards are not reassured by good intentions. They want to know where authority sits, how decisions are enforced, and who is accountable when the system makes, or enables, a mistake.
In all of these cases, humans are involved, but critically, not because the system would otherwise fall apart. They are involved because judgement, discretion, and oversight are genuinely required at the edges. The core system is designed to stand on its own. Human involvement enhances it; it does not prop it up. That is the standard boards apply. Which is why this matters so much in AI discussions.
When AI-enabled systems are presented to boards with the reassurance that humans are in the loop, but without clarity on whether those humans are adding judgement or simply preventing failure, boards are being asked to underwrite risk without being shown the architecture. That is not how large-scale systems are governed. Boards understand the difference between human oversight and human dependency. One is a strength. The other is a liability. But the distinction is not always made explicit in how AI is presented to them. And at scale, that distinction is everything.
At scale, boards care less about intent than about exposure. When something goes wrong, they actively look for where control actually sat and who carried the consequences. If a system cannot function without humans, accountability does not sit with the technology; it sits with the people. And boards rarely tolerate hidden accountability.
Hidden accountability is risk. Diffuse accountability is risk. Implicit accountability is risk. This is why failing technology initiatives so often become personal at board level. Failure exposes whether control ever truly existed. Boards do not fund virtue. They fund capability, control, and outcomes.
On the factory floor or the trading floor, where responsibility sits has very little to do with whether a system needs human intervention to function. Architecture decides that. And so this brings us to where platforms fundamentally break with ERP thinking.
A real platform is not defined by UI, licensing model, or cloud credentials. It is defined by where control lives. In a platform environment like ServiceNow, control is designed to live in the architecture, not in people’s heads or inboxes. Workflow is explicit, not implicit. State is durable. Authority is delegated, not implied. Policy is externalised from process. Integration is transactional and observable. And truth lives at the modular level, where it can be inspected and enforced.
Every serious platform converges on the same capabilities, and notably, these are platform-as-a-service constructs, not ERP ones: explicit workflow orchestration (e.g. ServiceNow’s Flow Designer), transactional integration (e.g. IntegrationHub), a trusted model of operational reality (e.g. the CMDB), and enforceable policy and security controls. Not because vendors agree, but because autonomy demands that policy and decision logic exist explicitly, so rules stop being folklore and start being code. Without these foundations, AI can reason, but it cannot be trusted to act.
These are not additional products or services to be included in an ERP roadmap. They are the structural foundations on which autonomy depends. And when they are absent or immature, humans must step in to compensate. Humans become the workflow engine. Humans become the integration layer. Humans become the permission model. Humans become the exception handler.
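To make “rules as code” concrete, here is a minimal, hypothetical sketch in TypeScript. None of this is ServiceNow’s API or any vendor’s product; the names (AgentAction, PolicyRule, evaluate) are invented for illustration. The point is the shape: authority is delegated explicitly as data the platform enforces, and a human is pulled in only when an action falls outside that delegated boundary.

```typescript
// Hypothetical sketch: policy externalised from process.
// All names here are illustrative, not any platform's real API.

type AgentAction = {
  actor: string;      // which agent is attempting the action
  operation: string;  // e.g. "refund.issue"
  amountAud?: number; // payload the policy cares about
};

type PolicyRule = {
  operation: string;
  maxAmountAud?: number;
  allowedActors: string[];
};

// Policy lives as data the platform enforces,
// not as folklore in someone's head or inbox.
const policies: PolicyRule[] = [
  { operation: "refund.issue", maxAmountAud: 500, allowedActors: ["refund-agent"] },
];

type Decision =
  | { outcome: "execute" }                   // the system acts autonomously
  | { outcome: "escalate"; reason: string }; // a human adds judgement at the edge

function evaluate(action: AgentAction): Decision {
  const rule = policies.find((p) => p.operation === action.operation);
  if (!rule) {
    return { outcome: "escalate", reason: "no policy covers this operation" };
  }
  if (!rule.allowedActors.includes(action.actor)) {
    return { outcome: "escalate", reason: "actor lacks delegated authority" };
  }
  if (rule.maxAmountAud !== undefined && (action.amountAud ?? Infinity) > rule.maxAmountAud) {
    return { outcome: "escalate", reason: "amount exceeds delegated limit" };
  }
  return { outcome: "execute" };
}

// Inside the boundary the agent acts; outside it, escalation is explicit and auditable.
console.log(evaluate({ actor: "refund-agent", operation: "refund.issue", amountAud: 120 }));
// -> { outcome: "execute" }
console.log(evaluate({ actor: "refund-agent", operation: "refund.issue", amountAud: 9000 }));
// -> { outcome: "escalate", reason: "amount exceeds delegated limit" }
```

When policy is data, the governance question of who could do what, and under whose authority, has an auditable answer. When it lives in inboxes and tribal knowledge, the human is the permission model.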
The real outcome is that human-in-the-loop has never been about human-machine collaboration. It is cover for unpaid architectural debt. And this is why so many early conversations about agentic AI have felt underwhelming.
The agent can look and sound intelligent. It can analyse, categorise, plan, and make suggestions. But when it comes time to act, it often can’t. The ERP-centric environment it lives in doesn’t give it real authority, clear boundaries, or end-to-end accountability. So the work stops at the screen. A human has to step in and finish the job. The demo looks impressive, but the client operating model underneath hasn’t actually changed.
For much of last year this was explained away as ethical restraint, and while reality is never black and white, in many cases that language masks a more uncomfortable truth: the architecture of some solutions, and of many organisations, simply isn’t ready yet.
When presented to customers by a technology provider early in their AI maturity curve, human-in-the-loop is not a sign of wisdom. It’s a sign of platform limitation. Only later, when the system can operate reliably on its own, as it adopts more PaaS-centric models, does keeping humans in the loop become a genuine design choice, and a source of strength.
In immature environments, humans are in the loop mostly because the system cannot be trusted. In mature environments, humans are in the loop because their judgement is genuinely valuable. Those two states look identical on a governance slide, yet they could not be more different in reality. Most of the high-profile failures of 2025 sit squarely in that gap: organisations convinced themselves they were in the second state while still operating firmly in the first.
One means humans are propping up brittle systems or solutions. The other means humans are governing autonomous ones. Which brings me back to the beginning. The entire purpose of a platform is to enable the once-in-a-generation transition we are living through in real time.
This is not about removing humans. It never was. It is about relocating humans to where 21st-century organisations actually need them. Not as workflow glue. Not as integration buffers. Not as permission proxies. But as stewards of intent, ethics, accountability, and outcomes.
When platforms mature, humans move out of execution and into governance. They become orchestrators. They stop making the system work and start deciding how it should behave. That is also when AI stops being dangerous. Not because it is constrained, but because it can be contained by architectural design.
The opportunity is to move beyond an ERP-centric operating model, not to abandon ERP altogether. ERP isn’t going away. But it is being moved out of the centre, because organisations are now dealing with a genuine three-body problem between ERP, PaaS, and AI. Each exerts force on the others, and none can dominate without creating instability. ERP’s role is to remain a system of record, not the system of control. The centre of gravity is shifting toward platforms that can express intent, orchestrate work, and delegate authority across the estate, with AI operating inside those boundaries.
Regardless of which industry you operate in, crossing that threshold does not start with more AI. It starts with an honest reassessment of the operating model itself and a willingness to accept that the future of ERP is as a critical component of the value chain, but not its organising principle. That is not a loss of control. It is how control is regained for the next era, and it should rewrite most procurements from this year forward.