Inside AI - Ep 2: The Agent Problem Nobody’s Ready For

CTO Consulting’s Jan Esman (Head of Enterprise Strategy) and Jeroen Bolluijt (Head of Innovation and AI, Australian Government Department of Health, Disability and Ageing) explore the rise of agentic AI and what it means for organisations moving beyond chat-based tools to systems that can take action, make decisions, and operate with increasing autonomy. Using analogies from organisational design and sociology, they examine how AI agents will work in coordinated groups, reshaping how technology is structured and managed.

Drawing on real-world government and enterprise contexts, the conversation looks at multi-agent environments, interoperability, and the shift from human-led processes to AI-assisted and eventually AI-directed systems. Key themes include culture as a control mechanism, the importance of observable behaviours in governing AI, and the growing challenge of trust as systems become more autonomous.

The discussion offers practical insights into how organisations can prepare for agentic AI by rethinking governance, control frameworks, and enterprise architecture to balance innovation, accountability, and performance at scale.

Runtime: 27:45

Our Speakers

Jan Esman

Jan Esman is a seasoned digital advisory leader with extensive experience in IT strategy, enterprise architecture, and digital transformation. His expertise in aligning technology solutions with business objectives ensures that CTO Consulting clients benefit from strategic insights and effective digital initiatives.

Jeroen Bolluijt

Jeroen Bolluijt is an experienced AI and innovation leader with strong expertise in artificial intelligence strategy, digital transformation, and emerging technologies. He focuses on translating advanced technologies into practical outcomes by aligning innovation initiatives with organisational priorities, governance, and delivery capabilities, so clients can realise measurable value from AI and digital initiatives.

    Jan Esman (Host, Head of Enterprise Strategy, CTO Consulting)
    Welcome to Inside AI—our unfiltered insights from the people shaping the AI programs that are actually happening and that actually matter. I’m Jan Esman, Head of Enterprise Strategy at CTO Consulting, and today I’m joined by Jeroen Bolluijt, Head of Innovation and AI at the Australian Government Department of Health, Disability and Ageing.

    Before we begin, a quick note: the views shared here are our own and not those of our organisations.

    Today, we’re exploring agentic AI—what it is, what’s changing, and how it’s reshaping the way organisations think about technology, goals, and operating models.

    What is Agentic AI?

    Jan Esman
    The term “agentic AI” is being used everywhere right now. Jeroen, what does it actually mean in practical terms?

    Jeroen Bolluijt
    It’s a bit of a marketing term at the moment—like “AI” was a few years ago. Everyone suddenly has something “agentic”. But if we break it down, the simplest way to think about it is this:

    We’ve come from a world of chat interfaces—where you ask a question and get an answer. Agentic AI goes beyond that. It takes action.

    Instead of just responding, it executes. It might generate a report, update a system, or produce an output directly.

    But the real shift is autonomy. Agentic systems don’t just execute instructions—they can make decisions. They have a degree of decision rights.

    And when you extend that, you move into systems of agents—groups of agents working together, even creating new agents. At that point, you’re no longer dealing with simple tools—you’re dealing with dynamic, evolving systems.
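
To make the "system of agents" idea concrete, here is a deliberately toy Python sketch: a lead agent that delegates a task across a small team and, if the team is empty, creates a new agent to cover it. Every class and name here is invented for illustration; this is not any particular framework.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A toy agent: a name plus one capability it can execute."""
    name: str
    capability: str

    def act(self, task: str) -> str:
        # Stand-in for a real model call or tool invocation.
        return f"{self.name} handled '{task}' using {self.capability}"

@dataclass
class LeadAgent(Agent):
    """A lead agent that delegates across a team and can grow it."""
    team: list[Agent] = field(default_factory=list)

    def delegate(self, task: str) -> list[str]:
        # With more autonomy, the lead agent could also spawn new
        # agents for tasks nobody on the team covers.
        if not self.team:
            self.team.append(Agent(name="auto-hire", capability="general"))
        return [member.act(task) for member in self.team]

lead = LeadAgent(
    name="lead", capability="planning",
    team=[Agent("researcher", "search"), Agent("writer", "drafting")],
)
print(lead.delegate("summarise policy changes"))
```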

    From Software Development to Organisational Design

    Jan Esman
    That’s where it becomes really interesting. We’re no longer talking about traditional development cycles—define a use case, build it, test it, deploy it.

    We’re talking about something closer to organisational design. You might have a lead agent, sub-agents, testing agents—almost like a team structure.

    Why don’t traditional lifecycles work in this world?

    Jeroen Bolluijt
    Because once you introduce autonomy and decision-making, you’re effectively asking AI to behave like people.

    Six or twelve months ago, we were still thinking in terms of coordination—agents doing defined tasks. But now, if you combine agentic AI with edge AI, robotics, or wearables, you move even further.

    You’re no longer in traditional IT. You’re closer to sociology—how systems behave, interact, and adapt in real environments.

    Decision-Making and AI Culture

    Jan Esman
    At the core of that is decision-making. We’re asking AI systems to operate within thresholds, interpret data, and make choices.

    What does that look like in practice?

    Jeroen Bolluijt
    We often focus on individual decisions, but I think we need to step back.

    In organisations, we don’t just define decision rights—we define culture. Culture shapes behaviour in ways that are hard to codify but critical to outcomes.

    The same applies to AI. It’s not just about rules—it’s about defining consistent, observable behaviours.

    Jan Esman
    Exactly. Culture is a pattern of behaviour you can rely on. It aligns with your brand and your outcomes.

    We now need AI systems to behave consistently with that culture—not just follow instructions.

    Jeroen Bolluijt
    And that’s where governance becomes interesting. Instead of controlling individual actions, we need frameworks that guide behaviour across agents and systems.
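
As a sketch of what "governing behaviour rather than individual actions" could look like in practice: one shared, default-deny policy applied to every agent's proposed action, with each decision logged so behaviour stays observable. The action categories and function names below are invented for this example.

```python
from typing import Callable

# Illustrative behavioural policy shared by every agent in the
# system; the action categories are invented for this example.
POLICY = {
    "read": "allow",
    "draft": "allow",
    "update_record": "require_approval",
    "external_send": "deny",
}

def governed(agent_name: str, action_type: str,
             action: Callable[[], str]) -> str:
    """Apply one shared rule to any agent's proposed action and
    log the decision, so behaviour stays observable."""
    decision = POLICY.get(action_type, "deny")  # default-deny
    print(f"[audit] {agent_name} -> {action_type}: {decision}")
    if decision == "allow":
        return action()
    if decision == "require_approval":
        return f"queued for human sign-off: {action_type}"
    return f"blocked by policy: {action_type}"

print(governed("writer", "draft", lambda: "briefing drafted"))
print(governed("mailer", "external_send", lambda: "email sent"))
```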

    From Assistants to Autonomous Systems

    Jan Esman
    Where are we today on that journey?

    Jeroen Bolluijt
    We’ve moved beyond simple chat. The next stage is AI assisting humans—helping with drafting, analysis, coding, and more.

    In government, for example, we’re seeing early use cases around policy design, briefings, and testing.

    But these are still assistants.

    The real shift comes when systems move towards autonomy—where agents coordinate, adapt, and act with less direct human input.

    The Multi-Agent Challenge

    Jan Esman
    As we move into multi-agent systems, what are the key risks?

    Jeroen Bolluijt
    One challenge is transparency. End users won’t know whether they’re interacting with one agent or many.

    But does that matter?

    Most people don’t understand how software works under the hood—they just trust it.

    The real issue is for those responsible for the system. How do you ensure quality, governance, and reliability across multiple interacting agents?

    Jan Esman
    It’s similar to managing teams. You can have breakdowns, misalignment, or hidden errors.

    And we know AI systems tend to “please” their users (the sycophancy problem), which can reduce transparency around failures.

    So the question becomes: how do you design governance and quality assurance for systems that behave like teams?

    Interoperability and the Future of AI Systems

    Jeroen Bolluijt
    This becomes even more complex when you consider interoperability.

    In the future, organisations will have multiple platforms, applications, and AI systems—all interacting.

    At some point, you need a control layer that manages how these systems work together.

    Jan Esman
    We’re already seeing that emerge—platforms focused on interoperability, orchestration, and vendor-agnostic integration.

    It’s a new layer in the enterprise stack.
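
One way to picture that new layer: a thin, vendor-agnostic control layer that exposes a single interface, routes tasks to whichever backend provides a capability, and gives you one choke point for observing and later enforcing policy. The classes below are a sketch, not any real product's API.

```python
from typing import Protocol

class AgentBackend(Protocol):
    """The single interface the control layer speaks; each vendor
    system gets wrapped to fit it."""
    def run(self, task: str) -> str: ...

class VendorA:
    def run(self, task: str) -> str:
        return f"vendor-A result for: {task}"

class VendorB:
    def run(self, task: str) -> str:
        return f"vendor-B result for: {task}"

class ControlLayer:
    """Routes tasks by capability: one place to observe and, later,
    govern cross-platform agent traffic."""
    def __init__(self) -> None:
        self.routes: dict[str, AgentBackend] = {}

    def register(self, capability: str, backend: AgentBackend) -> None:
        self.routes[capability] = backend

    def dispatch(self, capability: str, task: str) -> str:
        backend = self.routes.get(capability)
        if backend is None:
            return f"no backend registered for '{capability}'"
        return backend.run(task)

layer = ControlLayer()
layer.register("analysis", VendorA())
layer.register("drafting", VendorB())
print(layer.dispatch("drafting", "prepare a briefing"))
```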

    Trust: A Sociological Challenge

    Jan Esman
    Let’s talk about trust. Is it a technical problem or an organisational one?

    Jeroen Bolluijt
    It’s largely sociological.

    Trust depends on perceived risk and urgency. If the need is high, people will accept lower accuracy. If the risk is high, expectations are much higher.

    It’s similar to a sales dynamic—you either need significantly more benefit or significantly less perceived risk.

    Jan Esman
    And we see that clearly with things like self-driving cars. Our tolerance for machine error is much lower than for human error.

    That means trust frameworks for AI need to be stronger, more explicit, and more reliable.
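
That asymmetry can be made concrete with a toy model: adoption happens only when perceived benefit outweighs perceived risk by some margin, and the margin people demand from machines is larger than the one they grant humans. The multipliers below are invented purely for illustration.

```python
def accepts(benefit: float, risk: float, machine: bool) -> bool:
    """Toy model of the trust trade-off: accept only when perceived
    benefit outweighs perceived risk by a required margin. The
    margin values are invented for illustration."""
    # People demand a bigger safety margin from machines than humans.
    required_margin = 5.0 if machine else 2.0
    return benefit > required_margin * risk

# Identical benefit and risk, different actors:
print(accepts(benefit=8.0, risk=2.0, machine=False))  # True: human error tolerated
print(accepts(benefit=8.0, risk=2.0, machine=True))   # False: machine held to a higher bar
```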

    The Future of Governance

    Jan Esman
    So where do governance frameworks sit today?

    Jeroen Bolluijt
    They’re evolving.

    For chat-based systems, we have frameworks and controls. For AI-assisted systems, we’re starting to see more structured approaches—like recent government frameworks.

    But for fully agentic systems, the frameworks don’t really exist yet.

    We’re entering new territory—governing not just decisions, but the systems that make those decisions.

    And when systems operate across organisational boundaries, accountability becomes even more complex.

    Closing Thoughts

    Jan Esman
    This feels like the start of a new layer in enterprise technology—one focused on orchestration, interoperability, and control.

    And underpinning all of it is trust—linked closely to culture, behaviour, and outcomes.

    If AI systems don’t align with the experience and values organisations want to deliver, trust breaks down.

    There’s clearly more to explore—especially around value creation, workforce impact, and what organisations can now do that wasn’t previously possible.

    Jeroen, thank you for the conversation.

    Jeroen Bolluijt
    Thanks, Jan. Great discussion.

Next
Inside AI - Ep 1: Governance is Not the Brakes