Why Build When You Can Buy?
Rethinking the AI Centre of Excellence in a Platform World
The race to adopt generative AI is pushing every organisation to ask difficult questions about capability, risk, and control. Few of those questions are as thorny as the one that inevitably arises when the hype meets internal planning: Do we need to build an AI Centre of Excellence?
The concept of the AI CoE is appealing. In theory.
A dedicated team to guide use cases, build policy, develop prototypes, and scale responsibly. A lighthouse for ethical, sustainable AI. But in practice, it’s a construct that can feel heavy and misaligned. Especially for organisations that are still getting the basics of automation and data maturity right.
What’s often underestimated is the true organisational transformation required to make a Centre of Excellence (CoE) not only function, but deliver real value. You are not simply appointing a team. You are reengineering the machine. Then infusing it with fresh capability, structure, and innovation to drive sustained impact.
That includes new processes for decision-making, oversight, and exception handling. New governance models to ensure ethical use, data safety, and accountability. New roles that didn’t exist three years ago, from prompt engineers to AI compliance officers, on top of the design thinkers and business excellence leaders that may or may not have survived the last attempt at a CoE. New architectural frameworks to host and scale AI services. New business rules, policies, training programmes, and methods for cross-functional collaboration.
And, perhaps most elusive of all: new ways of thinking.
Even the simplest part, training staff to use new tools, is rarely simple. Teaching every employee how to use AI responsibly is one challenge. Teaching them to design new processes, think like product strategists, or prototype like innovation gurus is something else entirely. And doing that at scale, beyond the bounds of a small innovation lab or tiger team, is what usually breaks the model.
This is where the traditional CoE model begins to feel top-heavy. It assumes an organisation can grow these muscles organically, at pace, while still keeping the business running. And while many try, and some succeed, most stall somewhere between strategy and execution.
And yet, buried within that debate is a quiet revolution happening on platforms many already use every day. Two of the most compelling, and perhaps under-recognised, are ServiceNow and Kore.ai. These platforms are certainly strategic. But they may, in fact, be something much more: pre-packaged Centres of Excellence.
I think this idea changes the conversation. Because maybe the real question isn’t whether to build an AI CoE. It’s whether you even need to.
What makes platforms like ServiceNow and Kore.ai so powerful in this context is not just their AI capabilities, but the way those capabilities are structured, then embedded, governed, and operationalised across real business use cases.
ServiceNow, for example, offers generative AI directly within the workflow, writing code, generating actions, summarising requests, and surfacing insights across IT, HR, customer service, and software development. Kore.ai, by contrast, leads with enterprise-grade conversational AI, building rich voice and digital assistants that connect to processes, data, and enterprise context.
Both offer something remarkably similar at their core: AI that doesn’t sit off to the side, but lives within the operational centre of gravity.
They don’t require you to build separate governance frameworks or technical stacks. They come with the guardrails. They don’t expect you to redesign your org chart. They already support existing operating models. They don’t sell you an idea; they give you a product that works today.
This isn’t just a new feature set. It’s a new model for how AI enters the enterprise. That is, as a native capability within a platform you already trust.
Objection 1: OK, But What About the Hyperscalers?
You can’t talk about AI strategy in 2025, or the future of AI Centres of Excellence, without acknowledging the gravitational influence of the hyperscalers: Microsoft, Amazon, and Google. These aren’t just cloud providers anymore. They are the dual engines of hybrid infrastructure and language model intelligence.
On one side, they power the foundational architecture of the modern enterprise. Azure, AWS, and Google Cloud are where data lives, moves, and scales across containers, workloads, and compliance domains. On the other, they now offer some of the most powerful LLMs in the world (OpenAI’s models via Azure, Anthropic’s Claude via Bedrock, and Google’s Gemini), available as services that plug directly into applications and platforms.
For many organisations already embedded in these ecosystems, the question is no longer if they should adopt AI, but which pre-integrated AI capabilities they should adopt first. The raw power is there. But using it meaningfully is still the hard part.
That’s where platforms like ServiceNow and Kore.ai come in. They aren’t trying to compete with hyperscaler models but rather trying to make them usable.
ServiceNow integrates directly with OpenAI and Azure OpenAI to deliver embedded intelligence inside workflows. Kore.ai layers on top of multiple LLMs, providing a dialogue orchestration framework that enables nuanced, context-aware virtual agents across voice and digital channels. Both platforms provide a business-facing layer over raw model APIs: one grounded in workflow intelligence, the other in enterprise-grade conversational interaction.
This isn’t just access to an LLM. It’s access to a managed AI experience, designed for real business interaction and complete with policy controls, testing frameworks, observability, and deployment governance. In that sense, these platforms don’t replace the hyperscalers. They operationalise them.
They abstract the complexity. They embed the governance. And they do it in the language of business: workflows, tickets, channels, cases, and conversations. Not just compute, endpoints, and tokens.
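To make that gap concrete, here is a minimal Python sketch of what even a toy version of “operationalising” a raw model API involves. The endpoint, deployment name, and blocked-terms list are all hypothetical placeholders, and the policy check and audit stub stand in for machinery that platforms like ServiceNow and Kore.ai ship pre-built; this is an illustration of the layering, not anyone’s reference implementation.

```python
# Sketch: the raw LLM call is one line; the governance around it is what
# you would otherwise have to build yourself. All names below (endpoint,
# deployment, policy terms) are hypothetical placeholders.
import os
import datetime
from openai import AzureOpenAI  # real package: pip install openai

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://example-instance.openai.azure.com",  # placeholder
)

BLOCKED_TERMS = {"password", "salary"}  # stand-in for a real policy engine


def governed_summarise(ticket_text: str) -> str:
    """Summarise a ticket with the kind of guardrails a platform bakes in."""
    # 1. Policy gate: refuse inputs the (hypothetical) policy forbids.
    if any(term in ticket_text.lower() for term in BLOCKED_TERMS):
        raise ValueError("Input failed policy check; route to a human.")

    # 2. The actual model call: trivial once governance is handled.
    response = client.chat.completions.create(
        model="gpt-4o",  # an Azure deployment name; placeholder
        messages=[
            {"role": "system", "content": "Summarise this support ticket in two sentences."},
            {"role": "user", "content": ticket_text},
        ],
    )
    summary = response.choices[0].message.content

    # 3. Audit trail: platforms record this automatically; here, a stub.
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    print(f"[audit] {stamp} ticket summarised")
    return summary
```

Even in this toy form, the model call is the smallest part of the function. Scale that observation across redaction, testing, observability, and deployment controls, and the value of buying the surrounding layer becomes obvious.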
Objection 2: Sure, But What About the Mainframe?
For more complex enterprises, those still running critical workloads on mainframes, moving to cloud at a deliberate pace, and layering platform-as-a-service (PaaS) solutions over hybrid foundations, the appeal of a built-in AI CoE is even stronger.
These environments are not simple. Mainframe platforms remain essential for high-throughput, high-reliability systems in banking, government, insurance, and logistics. Meanwhile, Red Hat OpenShift is increasingly used to connect legacy systems with modern PaaS environments and cloud-native applications. It’s a hybrid cloud strategy that reflects the real-world compromises of scale, compliance, and legacy.
In this context, the goal isn’t just transformation; it’s co-existence. And that’s where a model like HYPA (Hybrid Platform + AI) begins to emerge.
HYPA is not a product or a framework. It’s a design reality. It acknowledges that AI won’t live in isolation. It will ride on top of platforms, across integration fabrics, linked by APIs, containers, and event-driven architectures.
ServiceNow and Kore.ai also fit into this architecture quite neatly because they don’t demand a reinvention of the stack. They leverage the service platform already in place. They speak the language of workflows and tickets, not batch jobs and transaction logs. And they can integrate with automation orchestrators, like Red Hat’s Ansible, that span both the cloud-native and mainframe worlds.
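To ground the HYPA idea, here is a minimal sketch of what “speaking the language of tickets” looks like at the seam between the two worlds: a mainframe job failure becoming a ServiceNow incident via the platform’s standard REST Table API. The instance name, credentials, and field mapping are illustrative assumptions, and a production integration would ride an event bus or an orchestrator rather than a direct script.

```python
# Sketch: translate a mainframe-side event into the platform's native
# vocabulary, a ticket, using ServiceNow's REST Table API. Instance,
# credentials, and field mapping are hypothetical placeholders.
import os
import requests

INSTANCE = "https://example-corp.service-now.com"  # placeholder instance


def raise_incident_from_mainframe_event(event: dict) -> str:
    """Turn a batch-job failure event into a ServiceNow incident record."""
    payload = {
        # Map mainframe vocabulary (jobs, abend codes) to ticket vocabulary.
        "short_description": f"Batch job {event['job_name']} failed ({event['abend_code']})",
        "description": event.get("log_excerpt", "No log excerpt captured."),
        "urgency": "2",
    }
    response = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        json=payload,
        auth=(os.environ["SN_USER"], os.environ["SN_PASSWORD"]),
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["result"]["number"]  # e.g. INC0012345


# Example: an event as it might arrive from a z/OS monitoring hook.
# print(raise_incident_from_mainframe_event(
#     {"job_name": "PAYROLL01", "abend_code": "S0C7", "log_excerpt": "..."}))
```

The point is not the dozen lines of glue; it’s that the platform side of the seam is already a governed, AI-enabled workflow engine, so the legacy estate only has to emit events, not adopt a new operating model.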
The idea that you could “buy” your AI CoE, embedded inside a platform that already connects people, processes, and data, is not just appealing. It’s practical.
Objection 3: The Hidden Cost of an AI CoE
What’s often overlooked in the AI Centre of Excellence conversation is just how much needs to be created before the first line of AI-generated value appears.
Not just infrastructure. But operating structure.
Building an internal AI CoE requires designing new governance frameworks. It demands cross-functional alignment on data access, model safety, change management, and ethical use. New roles and responsibilities must be defined, from prompt engineers to AI product managers, none of which exist in most org charts today. It means training workforces to use AI tools responsibly, while also upskilling leadership to understand their risks and limitations.
And underneath all of that sits the quiet, unglamorous labour of redesigning business processes. Organisations can’t layer AI onto legacy workflows and expect magic. They must rethink how decisions are made, how exceptions are handled, and how humans stay in the loop. They must codify new rules, test new patterns, and align their architectural frameworks to support outcomes they’ve never had to measure before.
It is, in every practical sense, a rebuild. And the rebuild is always bigger than the technology.
That’s why the platform model is so powerful. Not because it avoids the need for change, but because it concentrates and accelerates it. Instead of standing up a new team to reinvent the operating model, platforms like ServiceNow and Kore.ai bake those patterns in, based on thousands of implementations and best practices. They let you start operating the future while others are still writing white papers about it.
The Right Platform Is the AI CoE
So perhaps the time has come to reframe the conversation. Rather than asking “How do we stand up an AI CoE from scratch?”, organisations might ask “Which of our strategic platforms already gives us the capability we’re trying to build?” For many, the answer might be sitting in plain sight.
Not every organisation needs to become an AI lab. Most just need to use AI responsibly, strategically, and at speed. That doesn’t always require a new department. Sometimes, it just requires choosing the right platform. Because in an age where platforms are intelligent by default, and architecture is hybrid by necessity, the smartest CoE may be the one you didn’t have to build at all.
Picture the scene. You are a ServiceNow account manager. Your client sits across the table, nodding politely but hesitating. Maybe they’re experimenting with generative AI at the margins. A chatbot pilot here, a bit of low-code automation there. Or maybe they’re taking the “structured” path, laying plans for a formal AI Centre of Excellence with all the policy scaffolding, process committees, and working groups to match.
In either case, they’re doing the same thing: delaying the real work. One strategy is slow because it’s small. The other is slow because it’s structurally heavy. So pivot the conversation. Challenge them on what they are trying to achieve. The things they want, scale, safety, skill uplift, and strategic alignment, already exist in the mature platform that sits behind the licence or solution you might already be trying to sell.
If the PaaS is well-designed, enterprise-grade, and service-aware, then those guardrails and accelerators are already baked in. Not in theory, but in practice. Not in slide decks, but in functionality.
What platforms like ServiceNow and Kore.ai now offer is the human-led, platform-integrated nature of a real AI CoE. Not as a lab or a robot pit, but as a nerve centre for modern digital operations.
They are where people, processes, policy, and AI capability converge. And they resonate equally well with technologists, strategists, and business leaders because they move the conversation beyond experimentation into enterprise-wide execution. So whether the client wants to dabble or design from scratch, the answer may be the same: You already have what you’re trying to build.
Why am I saying all this?
Because there’s a rising tide of questions about the AI Centre of Excellence, and I think many of them are missing the mark. They are rooted in the way we’ve always approached technology: start with governance, design the architecture, then slowly build.
But here’s the shift. The strongest platforms don’t just give you AI. They create the conditions for it to thrive: responsibly, repeatably, and at scale.
The platform is the real Centre of Excellence. And the best part? You don’t need to build it. You just need to turn it on.