Inside AI - Ep 1: Governance is Not the Brakes

CTO Consulting’s Jan Esman and AI specialist Jeroen Bolluijt explore how organisations can harness the immense power of artificial intelligence while maintaining the governance and control needed to use it responsibly. Using the analogy of a race car, they discuss why many organisations focus on the “brakes” of AI—risk, policy, and regulation—rather than the opportunities ahead.

Drawing on real-world experience from government and enterprise transformation programs, the conversation examines practical issues such as enterprise AI controls, federated operating models, AI agents, and the importance of transparency in building trust. The discussion offers candid insights into how organisations can balance innovation with accountability while preparing for the next generation of AI-enabled services.

Runtime [25:00]

Our Speakers

Jan Esman

Jan Esman is a seasoned digital advisory leader with extensive experience in IT strategy, enterprise architecture, and digital transformation. His expertise in aligning technology solutions with business objectives ensures that CTO Consulting clients benefit from strategic insights and effective digital initiatives.

Jeroen Bolluijt

Jeroen Bolluijt is an experienced AI and innovation leader with strong expertise in artificial intelligence strategy, digital transformation, and emerging technologies. He focuses on translating advanced technologies into practical outcomes by aligning innovation initiatives with organisational priorities, governance, and delivery capabilities, so clients can realise measurable value from AI and digital initiatives.

  • Welcome to Inside AI, the podcast where we bring you our unfiltered insights from people who are shaping AI programs today.

    We have Jeroen Bolluijt with us. He is our leading consultant in artificial intelligence. He is currently working with the Australian Government Department of Health, Disability and Ageing, and he has over 20 years of experience across transformation and digital delivery.

    My name is Jan Esman. I lead CTO Consulting’s enterprise strategy consulting division, and I’m here to explore Jeroen’s outlook on some of the important questions facing all of us in AI today.

    Before we start, it’s worth noting that what you’re hearing from Jan and Jeroen today reflects our opinions and views, and not necessarily those of the organisations we work for. This is our conversation that we’re sharing with you.

    Jeroen and I talked about a really interesting perspective on AI. It feels like AI is creating the new race car—enormous amounts of power and great possibilities for the future. Yet it seems that everyone is more concerned about the brakes rather than the path ahead.

    Today we want to talk about what that looks like in terms of AI controls, and why alignment and vision matter so much. The tools we’re bringing to bear are not ones that respond well to simply applying brakes.

    I can see the appeal, particularly in government agencies, given the policies, audit responsibilities, transparency requirements, and legislative obligations that are not going away. It becomes highly attractive to establish enterprise policies and rules that can be consistently applied across all systems being developed.

    That clearly seems like a strong growth area and a very valuable one as well. Once those policies are in place, they can also deliver efficiency gains. For example, you might look at a request from an agent or a user and route it not to the default model but to another one that makes more efficient use of tokens.

    Or you might detect that an agent is trying to perform a task in one way and redirect it in a more effective direction. There are many efficiency gains that can be realised through what could be considered a more advanced model gateway.
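    The gateway idea described above can be pictured as a small routing layer that applies enterprise policy before any model is called. This is a minimal illustrative sketch: the model names, the token heuristic, and the routing thresholds are all invented for the example, not taken from any real gateway product.

    ```python
    # Hypothetical policy-driven model gateway. Model tier names and the
    # routing rules are illustrative assumptions, not a real product's API.
    from dataclasses import dataclass

    @dataclass
    class Request:
        prompt: str
        requires_reasoning: bool = False

    def estimate_tokens(prompt: str) -> int:
        # Rough heuristic: roughly four characters per token.
        return max(1, len(prompt) // 4)

    def route(request: Request) -> str:
        """Pick a model tier based on enterprise policy, not caller choice."""
        if request.requires_reasoning:
            return "large-reasoning-model"
        if estimate_tokens(request.prompt) < 200:
            # Short, simple prompts can go to a cheaper, faster tier.
            return "small-efficient-model"
        return "mid-tier-model"

    print(route(Request("Summarise this paragraph.")))  # small-efficient-model
    ```

    The point of the sketch is that the routing decision sits at the enterprise layer, where it can be audited and tuned centrally, while callers at the edge simply submit requests.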

    Yes, I can see that. There’s definitely a tension between enterprise-level thinking and, at the same time, the value of pushing capabilities as close as possible to the front line—where transactions and interactions actually happen.

    Exactly. But if you can design controls transparently at the enterprise level, that should enable experimentation at the edges, even by end users.

    In fact, you may not even want your staff to be the ones experimenting—you might want your clients to do it. Imagine a scenario where a client can simply describe what they want to see, and the system builds it almost instantly.

    That raises interesting questions. Do you still need business analysts in the same way? Of course you do—but in that example you might begin to question the traditional role of analysis and how work gets structured.

    You can start to see a completely different world emerging, where controls actually enable outcomes. Because the controls are transparent and designed at the right level, they allow innovation rather than restricting it.

    I’m a huge fan of that concept. But it also implies something strategically important: there needs to be an emergent federated structure.

    In that structure, you can articulate enterprise-level controls, but still distribute capabilities and tools as close as possible to where interactions are happening.

    For example, I’m currently working with a federal agency that operates a very extensive and complicated business network. It has many partners orchestrating their own businesses and integrating with the agency to create the outcomes the agency seeks.

    In that type of environment—almost a marketplace-like environment—you can start coupling microservices as ingredients that combine with AI tools. That allows organisations to essentially “cook their own” service offerings within the agency’s parameters.

    That opens up powerful use cases made possible through data and the tools that allow organisations to articulate those services.
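    The "cook their own" idea above can be sketched as partners composing pipelines only from agency-approved building blocks. Everything here is invented for illustration: the step names, the checks, and the composition rule are assumptions, intended only to show how enterprise parameters can bound edge-level composition.

    ```python
    # Illustrative only: step names and rules are made up to show the idea
    # of partners combining agency-approved "ingredient" services.
    APPROVED_INGREDIENTS = {
        "eligibility_check": lambda data: data.get("age", 0) >= 18,
        "identity_verify": lambda data: bool(data.get("id_document")),
    }

    def compose_service(steps, data):
        """Run a partner-chosen pipeline, allowing only approved steps."""
        for step in steps:
            if step not in APPROVED_INGREDIENTS:
                # The agency's parameters reject anything outside the catalogue.
                raise ValueError(f"step '{step}' is outside agency parameters")
            if not APPROVED_INGREDIENTS[step](data):
                return False
        return True

    print(compose_service(["eligibility_check"], {"age": 30}))  # True
    ```

    Partners get freedom in how they sequence the ingredients, while the agency keeps control of what the ingredients are, which mirrors the controls-as-enablers theme of the conversation.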

    Yes, and that leads to a whole different topic for another conversation: how you actually design that.

    For example, can we create agents as a service? I often challenge software providers on this point. Instead of every organisation building its own agents, why not build an agent once for a hundred clients and allow them to consume it as a service?

    Maybe it becomes something you select from a marketplace. Perhaps you lease it rather than buying it outright. These kinds of models could completely reshape how organisations adopt AI capabilities.

    But that’s a whole separate conversation—and this is only our first episode. Today we’re just scratching the surface.

    I really like the race car analogy, and we’ll keep coming back to it. We want the power, the precision of steering, and the incredible cornering ability that the race car gives us. But we also want to make sure we don’t end up back in the pits too quickly.

    Before we wrap up, Jeroen, if you could leave listeners with one key takeaway from this conversation, what would it be?

    To summarise the discussion, I would say: transparency first, confidence second.

    Focus on building trust in AI as a concept and in the technologies that support it. If organisations do that, they create the foundation needed to deliver better outcomes.

    Brilliant. Thank you, Jeroen. Great conversation.

    Thanks so much for having me.

    See you again soon.