Councilio


Agentic AI Changes the Game for Software TCO

New PaaS rules for enterprises, partners, SIs, and BPOs...well, everyone

Peter Carr
Feb 04, 2026

Enterprise technology buyers are being told a comforting story. Artificial intelligence, we are assured, can be bought, licensed, and governed much like the systems that came before it. Enterprise licence agreements, flexible consumption pools, and familiar commercial constructs promise to make AI feel legible and safe. Salesforce’s Agentforce Enterprise Licence Agreement (AELA) is a good example of this intent. It is not a gimmick. It is a solid and sincere attempt to meet customers where they are.

The problem is not that these models are wrong. They are right for now. But that is also precisely the issue. They are transitional. They wrap a fundamentally different economic creature in the language of a world that no longer exists. Traditional total cost of ownership (TCO) was built for software that behaves like an asset. Agentic systems behave like labour. And that distinction matters more than any pricing model.


For decades, TCO has been an exercise in containment. Organisations estimated licence costs, infrastructure, integration, and support, then spread that cost across users or transactions. Performance improvements were welcomed because they reduced run costs. Efficiency meant savings. Usage broadly tracked value, and behaviour was bounded by human attention, working hours, and organisational friction. Agentic systems break every one of those assumptions. How and why?

An agent does not wait to be used. It acts. It retries. It escalates. It triggers other systems. It improves over time and operates continuously. Cost is no longer driven by access but by behaviour. In traditional systems, better performance meant efficiency, and efficiency reduced cost. In agentic systems, the more capable an agent becomes, the more it does, so better performance often increases usage. This is where familiar TCO models quietly fail.
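A toy cost model makes the contrast concrete. Every price and volume below is invented purely for illustration; none of it reflects any vendor's actual pricing:

```python
# Toy numbers, invented for illustration only: contrast a seat-based licence,
# where cost tracks access, with an action-based agentic model, where cost
# tracks behaviour.

SEAT_PRICE = 150.0      # per user per month (assumed)
ACTION_PRICE = 0.02     # per agent action (assumed)

def seat_cost(users: int) -> float:
    """Traditional TCO: cost is bounded by headcount, not activity."""
    return users * SEAT_PRICE

def agentic_cost(cases: int, actions_per_case: float) -> float:
    """Agentic TCO: cost tracks behaviour. An agent that retries,
    escalates, and triggers other systems raises actions_per_case."""
    return cases * actions_per_case * ACTION_PRICE

# A capability upgrade resolves more cases AND does more per case,
# so the monthly bill rises with success.
before = agentic_cost(cases=10_000, actions_per_case=12)
after = agentic_cost(cases=14_000, actions_per_case=20)

print(seat_cost(50))    # flat, regardless of how much work gets done
print(before, after)    # grows with capability and adoption
assert after > before
```

The seat-based line stays flat no matter what happens; the agentic line rises precisely because the agent is succeeding. That is the inversion traditional TCO was never built to express.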

An organisation may deploy an agent to reduce case handling time, improve resolution rates, or increase customer satisfaction. All of those outcomes can be achieved. Yet the same improvements can also drive higher system interaction, greater orchestration complexity, more monitoring, and increased downstream activity. The agent becomes productive, but not necessarily cheap.

That creates the new TCO question: are we paying for fewer actions or for better outcomes? The difficulty is that even this framing assumes a level of comparability that rarely exists. I’m sure the model enthusiasts reading along have already quietly objected to the efficiency argument.

And I agree with them. Two organisations can deploy the same agentic process, on the same platform, under the same commercial model, and experience materially different economics. One may operate with clean data, minimal workarounds, and low regulatory friction. Another may require multiple exception paths, manual validations, and heavy oversight. The agent performs the same “work,” but with very different efficiency, risk, and cost profiles. That does not change the argument: traditional TCO has no way to account for this, because it was designed to price systems, not organisational complexity.

This is why agentic cost benchmarking remains elusive. What looks like an AI efficiency problem is largely an operating-model problem in disguise. That is a management-consulting problem, and the result is economic fog.

Customers want AI. Boards expect it. Executives feel pressure to adopt it. Yet few organisations truly understand how to model its long-term cost, let alone govern it. Everyone is struggling to determine whether to buy capacity, consumption, outcomes, or something in between. The language of tokens, actions, and autonomy feels abstract and risky.

To this point, vendors have responded in the only rational way available: offering what sounds safe and familiar. Enterprise licence agreements. Flex pools. Commitments that resemble what procurement teams already know how to approve. These constructs are not cynical. They are necessary bridges. They allow organisations to step into agentic territory without immediately confronting the fact that the ground rules have changed. But the ground rules have changed.

In many ways, this challenge should feel familiar. It is the same problem organisations have wrestled with for decades in business process outsourcing and managed services. Cost outcomes were never determined solely by the contract rate card. They were shaped by process maturity, regulatory burden, exception volume, and the level of oversight the organisation itself required. Two customers could outsource the same process and experience radically different economics. Agentic TCO brings that same reality into software.

At this point, it is important to be precise. Not every layer of an AI-enabled platform behaves the same economically. Foundational PaaS capabilities like workflow engines, data models, and integration layers can still be governed and amortised like traditional software assets. Their cost curves are familiar, and their value can be planned. The discontinuity appears at the point where agency is introduced. Once software is empowered to act, decide, and initiate work, cost becomes behaviour-driven rather than asset-based. This is why vendors are carving out separate commercial constructs for agentic capability. It is not an admission that platform pricing is broken, but an acknowledgement that agency cannot be priced in the same way as software.

Agentic TCO is not about ownership. It is about economic governance. It shifts the focus from controlling access to controlling autonomy, from budgeting per user to budgeting per action, and from minimising cost to constraining behaviour. That shifts the dominant cost drivers from licences and infrastructure to decision loops, orchestration logic, exception handling, guardrails, and oversight. These become operating costs, not implementation artefacts, and they tend to grow with success rather than diminish over time.
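As a sketch of what budgeting per action could look like in practice, here is a minimal, hypothetical guardrail that lets an agent act only while a daily action budget remains. The class name and thresholds are assumptions of mine, not any vendor's API:

```python
# Hypothetical sketch of "economic governance": instead of capping users,
# cap autonomy with a per-period action budget the agent must draw down
# before each action. Limits and names are illustrative assumptions.

class ActionBudget:
    def __init__(self, limit_per_day: int):
        self.limit = limit_per_day   # maximum actions the agent may take
        self.spent = 0               # actions already taken this period

    def allow(self, actions: int = 1) -> bool:
        """Grant permission to act only while budget remains; a denial is
        the point where behaviour gets throttled or escalated to a human."""
        if self.spent + actions > self.limit:
            return False
        self.spent += actions
        return True

budget = ActionBudget(limit_per_day=1000)
granted = sum(budget.allow() for _ in range(1200))
print(granted)  # 1000 actions permitted, the remaining 200 throttled
```

The design point is that the denial path is a first-class governance decision, not an error: it is where autonomy hands work back to people, and where the guardrail cost the article describes becomes visible as an operating cost.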

Let me beat my old drum again. This is why platform architecture matters. Agentic systems do not sit neatly inside application silos. They require a platform layer that can observe, govern, throttle, and evolve behaviour across systems. This is not an ERP problem. It is a platform-as-a-service problem. PaaS becomes the economic trunk through which agentic activity flows, whether organisations acknowledge it or not.

So what we are witnessing is a market in transition. Customers are being sold AI using commercial models designed to reduce anxiety, while vendors quietly pivot toward a future where agency, not software, is the unit of value. Enterprise agreements like Salesforce’s AELA are part of that necessary bridge. They help organisations begin the journey by uncapping some of that agency. But even they will admit that they do not yet resolve the underlying economic ambiguity at free agency scale. No one has.

The bottom line is that agentic AI cannot be priced and governed like software. It must be governed the way work is. And until organisations accept that shift, total cost of ownership will remain an estimate rather than an insight. Is it still worth pursuing? Absolutely! The shift is simply this. Where traditional TCO asked what a system costs to own, agentic TCO asks what it costs to let software act on your behalf.

Part of the discomfort to this point, I think, comes from a reluctance to talk about AI taking jobs. Framed that way, the conversation becomes politically charged and emotionally loaded. In practice, agentic AI is not taking jobs. It is taking work: the tasks and decisions that were previously performed by people. That distinction matters, because work can be delegated, reallocated, and governed in ways that jobs cannot.

Agentic AI is not difficult to price because vendors lack models, but because enterprises have not yet accepted that they are buying delegated work rather than tools. So try reframing the way you approach those business cases. Everything else is just a negotiation over how long we pretend that buying tools and buying delegated work are the same thing.



The Top 10 Questions to Start Costing Agentic AI

Costing agentic AI is not difficult because the technology is new, or because vendors lack pricing models. It is difficult because organisations are trying to apply asset-based thinking to something that behaves like delegated work.

Traditional TCO models assume comparability, predictability, and efficiency gains that reduce cost over time. Agentic systems violate those assumptions. Their costs are shaped as much by organisational complexity, governance choices, and operating discipline as by software consumption itself.

The questions below are not designed to produce a perfect number. They are designed to surface where agentic AI will behave unlike traditional software, where costs will scale with success, and where economic exposure is likely to appear. Without answering them, any TCO model will be incomplete, regardless of how familiar the commercial wrapper looks.

They focus on the agentic (probabilistic and variable) layer of the platform, not the underlying PaaS foundations, which can still be governed and amortised using traditional asset-based TCO models. They are based on the premises that you can’t cost what you can’t see, you can’t budget what you can’t limit, and you can’t govern delegated work without ownership.

At scale, organisations are likely to require a control layer that can observe, constrain, and govern agentic behaviour across systems. Such control mechanisms should not be viewed as a product trend, but as a structural response to the economic realities of delegated work operating at machine scale.
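On the premise that you can't cost what you can't see, the observation half of such a control layer might begin as little more than a per-action meter that attributes estimated spend to an accountable owner. Everything in this sketch — field names, agents, costs — is illustrative:

```python
# Minimal sketch of the observation side of a control layer: every agent
# action is metered with an accountable owner and an estimated cost, so
# spend can be attributed before it can be governed. Names are illustrative.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ActionRecord:
    agent: str       # which agent acted
    owner: str       # who is accountable for the delegated work
    action: str      # what it did, e.g. "retry", "escalate", "api_call"
    est_cost: float  # estimated cost of this single action

class Meter:
    def __init__(self):
        self.records: list[ActionRecord] = []

    def record(self, rec: ActionRecord) -> None:
        self.records.append(rec)

    def spend_by_owner(self) -> dict[str, float]:
        """Roll spend up to the accountable owner, not the licence."""
        totals: dict[str, float] = defaultdict(float)
        for r in self.records:
            totals[r.owner] += r.est_cost
        return dict(totals)

meter = Meter()
meter.record(ActionRecord("claims-agent", "ops", "api_call", 0.02))
meter.record(ActionRecord("claims-agent", "ops", "retry", 0.02))
meter.record(ActionRecord("triage-agent", "support", "escalate", 0.05))
print(meter.spend_by_owner())
```

Attributing each action to an owner rather than a licence is what makes the later steps — budgeting, throttling, governance — possible at all.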

© 2026 Peter Carr Advisory Pty Ltd