Building AI Into Your Organisation
The previous articles in this series dealt with AI adoption at an individual level and then at a team level. Both of those are challenging in their own ways, but at least they share something in common: they can, to a meaningful degree, happen from the bottom up.
An individual can change their own practice.
A team can make a collective decision.
Neither requires the whole organisation to move in the same direction.
This article is about that fifth category, ‘Building AI Into Your Organisation’, and that’s a different kind of undertaking entirely. It requires programme-level coordination, sustained leadership sponsorship, defined governance, and a level of measurement discipline that most organisations have not previously needed around a technology.
The reason that discipline has not been needed before is worth calling out. Previous waves of tool adoption affected specific functions or roles: a new design tool emerged and designers switched overnight, or a project management platform replaced spreadsheets.
💡
“AI is different because it is impacting every single role simultaneously, from engineering and product to finance, legal, HR, compliance, and customer support.”
There is no team that can reasonably sit this one out and wait to see how it lands for everyone else. That breadth of impact is new, and it is part of what makes the organisational level so genuinely complex to coordinate.
Organisational AI Programmes
A recent conversation with someone running an AI programme at a mid-sized tech company made clear just how broad the remit has to be to do this properly.
The programme they described covered five distinct work streams: AI skills and upskilling across all functions; tracking the impact of AI initiatives on customer value; operational efficiency; AI governance and compliance; and product and engineering productivity.
Each of those is a significant body of work in its own right. Together they represent an attempt to move an entire organisation toward meaningful AI adoption, not just in one team or function, but across engineering, product, marketing, finance, legal, and operations simultaneously.
The questions being asked here are not just about which tools to use or how to structure a shared process. They are about how you define AI capability consistently across functions that have entirely different working contexts, how you measure whether any of it is making a difference, and how you create enough shared infrastructure to make progress coherent without centralising so much that you slow everything down.
🤔
From what you have seen in your own organisation, where does the biggest gap sit between what your AI programme says it is doing and what is actually happening on the ground?
Measuring Maturity
One thing they introduced struck me as quite powerful, partly because of my love for words and their shared meaning. Rather than tracking whether people were using AI as a binary ‘yes’ or ‘no’, they had developed a maturity model that categorised employees:
Explorer - Occasionally tries AI tools, low and inconsistent usage, still figuring out where it fits in their workflow.
Collaborator - Uses AI regularly for specific tasks, integrates it into daily work but in a supporting role.
Contributor - Actively uses AI to produce output, relies on it for a meaningful portion of their work.
Delegator - Delegates whole tasks or workflows to AI, with themselves reviewing and directing rather than doing.
Orchestrator - Designs and manages multi-step AI workflows or agents, uses AI to coordinate other AI tools.
This taxonomy works well: it gives the organisation a common language for talking about where people are, makes it possible to set meaningful targets, and allows movement to be tracked over time rather than just taking a snapshot.
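To make that concrete, here is a minimal sketch of how a maturity model like this could be tracked over time. The level names come from the taxonomy above, but the survey data, the function names, and the idea of reading levels off a simple monthly pulse survey are illustrative assumptions rather than the company's actual implementation.

```python
from collections import Counter
from enum import IntEnum


class MaturityLevel(IntEnum):
    """The five levels of the maturity model, ordered lowest to highest."""
    EXPLORER = 1
    COLLABORATOR = 2
    CONTRIBUTOR = 3
    DELEGATOR = 4
    ORCHESTRATOR = 5


def distribution(responses: list[MaturityLevel]) -> dict[str, float]:
    """Return the percentage of people at each level."""
    counts = Counter(responses)
    total = len(responses)
    return {
        level.name: round(100 * counts.get(level, 0) / total, 1)
        for level in MaturityLevel
    }


# Hypothetical survey results for two consecutive months.
march = (
    [MaturityLevel.EXPLORER] * 40
    + [MaturityLevel.COLLABORATOR] * 35
    + [MaturityLevel.CONTRIBUTOR] * 20
    + [MaturityLevel.DELEGATOR] * 5
)
april = (
    [MaturityLevel.EXPLORER] * 30
    + [MaturityLevel.COLLABORATOR] * 38
    + [MaturityLevel.CONTRIBUTOR] * 25
    + [MaturityLevel.DELEGATOR] * 6
    + [MaturityLevel.ORCHESTRATOR] * 1
)

print(distribution(march))   # a snapshot of where people are
print(distribution(april))   # compared month on month, this shows movement
```

Because the levels are ordered, targets can be expressed as movement, for example the share of people at Collaborator or above, rather than as a flat usage percentage.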
The engineering teams had been the first focus, and the results there were ahead of target. The majority of engineers were using AI in their daily work, and the organisation was using that as a proof of concept for extending the same approach to non-technical functions in the second half of the year.
That sequencing, starting with engineering, demonstrating what is possible, and then building the case for broader adoption, is a sensible approach. It means the programme can show something concrete before asking functions like finance or marketing to significantly change how they work.
To make adoption visible and to keep momentum across the organisation, each engineering area had been asked to include an AI adoption section in their regular monthly reporting. This covered where they sat on the maturity curve, what percentage of their people were actively using AI, and one worked example of how AI had been used that month and what the outcome was.
That last piece is important. Real, concrete examples, reported regularly, do more to shift culture than metrics alone.
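The reporting section itself does not need to be complicated. A sketch of its shape, with hypothetical field names and a purely invented entry, might look like this:

```python
from dataclasses import dataclass


@dataclass
class MonthlyAIAdoptionSection:
    """The AI adoption section of one area's monthly report.

    Field names are illustrative; the content mirrors the three things
    each engineering area was asked to include.
    """
    area: str
    maturity_level: str        # e.g. "Collaborator", from the taxonomy above
    active_usage_pct: float    # percentage of people actively using AI
    worked_example: str        # one concrete use of AI this month
    outcome: str               # what that example actually changed


# Purely illustrative entry, not a real report.
section = MonthlyAIAdoptionSection(
    area="Platform Engineering",
    maturity_level="Contributor",
    active_usage_pct=68.0,
    worked_example="Drafted integration tests for a new service with an AI assistant",
    outcome="Review-ready tests in days rather than weeks",
)
```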
Governance Necessity
AI governance is a significant body of work in its own right, and one that is easy to underinvest in when a programme is moving quickly.
A conversation with someone who had spent six months building a governance framework from scratch made clear the scope of what that actually involves.
It means working through the implications of the EU AI Act, which classifies AI systems across four risk tiers: prohibited, high-risk, limited-risk, and minimal-risk, each carrying different compliance obligations.
It means reviewing every AI tool or agent for data privacy implications under GDPR, understanding what data it can access and under what conditions, and making sure contracts with AI vendors include appropriate protections around IP and data use.
It means building a process for teams to submit AI projects for review before they go live, not to slow things down, but to catch problems early when they are still cheap to fix.
The practical intake approach described to me used a traffic light classification system directly informed by that EU AI Act risk logic. Projects assessed as low risk, using internal non-sensitive data for straightforward tasks, could proceed without formal review.
Projects involving customer data, automated decision-making, or anything touching sensitive processes required closer scrutiny. The goal is to govern what actually matters rather than create a process that applies the same level of scrutiny to everything regardless of risk.
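As a rough sketch of how that triage could be made explicit: the intake questions and the mapping from answers to colours below are my own illustration of the logic described above, not the actual framework. The point is only that the classification can be written down and made cheap to apply.

```python
from dataclasses import dataclass
from enum import Enum


class Light(Enum):
    GREEN = "proceed without formal review"
    AMBER = "lightweight review before go-live"
    RED = "full governance review required"


@dataclass
class AIProjectIntake:
    """Answers a team provides when submitting an AI project for review."""
    uses_customer_data: bool
    makes_automated_decisions: bool
    touches_sensitive_process: bool
    internal_non_sensitive_data_only: bool


def classify(intake: AIProjectIntake) -> Light:
    """Map intake answers to a traffic light, checking highest risk first."""
    if intake.makes_automated_decisions or intake.touches_sensitive_process:
        return Light.RED
    if intake.uses_customer_data:
        return Light.AMBER
    if intake.internal_non_sensitive_data_only:
        return Light.GREEN
    # Anything ambiguous gets a closer look rather than a pass.
    return Light.AMBER


# Example: an internal summarisation tool over non-sensitive documents.
print(classify(AIProjectIntake(False, False, False, True)))  # Light.GREEN
```

Defaulting anything ambiguous to a closer look, rather than letting it through, is what stops the lightweight path from becoming a loophole.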
Regulation Gaps
What also came through clearly in multiple conversations was how early most organisations still are in building this capability.
Having a responsible AI policy, a governance framework, and a process for intake and review is meaningful progress. But defining the technical standards and controls that sit beneath those policies, and building the tooling to apply them consistently at scale, is a longer journey.
Most organisations are somewhere in the middle of that, and being honest about where you are is more useful than overstating maturity.
There is a further complication that makes this harder still.
💡
“The pace at which people are adopting AI tools inside organisations, and the pace at which those tools themselves are evolving, is significantly outrunning the pace at which governance frameworks can be built and embedded.”
A policy written six months ago may not adequately cover tools that did not exist at the time. A risk classification done last quarter may already need revisiting because the system it assessed has been substantially updated.
Governance teams are, in many cases, chasing a moving target, and the gap between what employees are doing with AI and what the organisation formally understands or has approved is wider than most leadership teams realise.
That gap is not just a compliance risk; it is an operational one.
Where Product Ops Fits
At the organisational level, Product Ops is probably not going to be the owner of the AI programme or the governance framework. Both of those require dedicated resource, technical knowledge, and leadership sponsorship that sits elsewhere.
However, Product Ops can have a clear role in making both work in practice.
That means:
Helping to define how AI use cases are identified and prioritised across functions.
Ensuring that the tools and processes being introduced actually get adopted rather than just launched.
Connecting the programme-level ambition to the team and individual-level practice covered in the previous articles.
Making sure governance requirements are embedded into how work gets done rather than existing as a separate compliance exercise that people quietly work around.
The organisations making the most genuine progress at this level are not the ones making the boldest announcements.
They are the ones that have done the less glamorous work of building shared measurement, investing in data quality, developing real governance capability, and creating the conditions for adoption to spread rather than stay concentrated in a single function.
That work is slower, less visible, and harder to attribute than shipping a new AI feature. It is also what actually moves the organisation.