Building AI Into How Work Gets Done
The previous article in this series was about individual AI practice: what it takes to build AI into the way you personally work, and why that is harder than it first appears.
This one is about what happens when you try to take that a step further, moving from something one person does in their own workflow to something a group of people do together as part of how they operate.
It sounds like a natural progression, but the challenges involved are different in kind, not just in scale.
The shift from individual to team AI adoption is where a lot of organisations start to stall. Individual practice can happen without coordination. Someone decides to change how they work; they experiment, they iterate, and if it does not work, they adjust without much consequence for anyone else.
The moment you try to build AI into a shared process, all of that changes.
You are no longer making decisions for yourself. You are proposing changes to how a group of people work together, which requires agreement, trust, and usually some redesign of how work flows between people. There is a coordination cost to all of that, and underestimating it is one of the most common reasons team-focused AI initiatives don’t go anywhere.
When a Personal Tool Meets a Team Problem
One example I described in the previous article was a product manager who had built a personal knowledge base over several years: structured notes on his domain, linked to data sources and written in his own voice.
When he started using AI with it, the quality of the outputs improved noticeably. But when he started thinking about how to make something similar available to his team, the problem shifted immediately.
His personal system lived locally on his machine. There was no shared space where others could access it, query it, or benefit from it. He described having tools that could do useful things but nowhere to host them so that colleagues could actually use them. There is a gap, and it is not a small one, between what worked for him individually and what would work for a group.
This is a pattern that comes up repeatedly: individual AI practice, when it works well, tends to be quite personal. It is shaped by how one person thinks, what they already know, and how they have chosen to organise their information.
Making that available to others requires abstraction, documentation, and shared infrastructure, none of which come for free.
💡
“Once you introduce other people, you also introduce the question of whether they trust the outputs of the AI tool, whether they understand how to use it, and whether they feel any ownership over maintaining it.”
There is also an interesting question of perception that tends to surface at this point. When an AI tool is built well and works reliably for a group of people, it very quickly stops feeling like AI. People stop noticing the technology and just use it.
Which is, in a sense, the goal. But it also means that a well-executed AI tool can look, from the outside, exactly like any other internal tool that works well. If the measure of success is people remarking on how impressive the AI is, that bar will rarely be met. If the measure is whether the tool actually makes the work better, that is a different conversation entirely, and a more honest one.
🤔
If your team has tried to adopt a shared AI tool or process, what got in the way of it actually sticking, and how much of that was about the technology versus how the team was working around it?
A Data Quality Problem
Another conversation illustrated a different version of the exact same challenge. A team had built an internal AI agent to help people query information about their experimentation platform.
There was a very practical intent to this: make it easier for anyone in the organisation to find out what experiments were running, what the results were, and what the relevant context was. They built it, restricted it to their own documentation, and opened it up.
Then the problems started to surface: their documentation was spread across multiple places. Some of it was in their internal platform, some in a shared knowledge base, some in a code repository, and the analysis and results that managers actually produced tended to live on individual pages that were not part of the curated set the agent had access to.
When they tried to open the agent to broader documentation to compensate for this, it started pulling in outdated, irrelevant, or contradictory information and producing unreliable answers. They pulled it back to the restricted set, which meant it could not answer the questions people actually wanted to ask.
This is not a problem with the agent. It is a data quality and information architecture problem that only became visible when they tried to make AI work for a group.
Individual AI use can tolerate a certain amount of information disorder because the person using it already has the context to know what to trust and what to ignore. A shared tool does not have that context, and neither do all the people using it.
The underlying information has to be in better shape before the tool can reliably serve a group.
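To make the tradeoff concrete, here is a minimal sketch of the choice that team faced, written in Python. This is not their implementation; the source names, the curated flag, and the review dates are all assumptions used for illustration. The point is simply that scoping an agent to a curated allowlist keeps answers reliable but narrow, while opening it to everything widens coverage at the cost of stale and contradictory content.

from dataclasses import dataclass
from datetime import date

@dataclass
class DocSource:
    name: str
    curated: bool          # part of the maintained, trusted set?
    last_reviewed: date    # when someone last confirmed it was still accurate

# Hypothetical inventory of the places documentation actually lives.
SOURCES = [
    DocSource("experimentation-platform-docs", curated=True,  last_reviewed=date(2024, 5, 1)),
    DocSource("shared-knowledge-base",         curated=True,  last_reviewed=date(2024, 4, 12)),
    DocSource("code-repository-readmes",       curated=False, last_reviewed=date(2022, 9, 3)),
    DocSource("individual-analysis-pages",     curated=False, last_reviewed=date(2023, 1, 20)),
]

def sources_for_agent(include_uncurated: bool = False) -> list[DocSource]:
    """Return the sources the agent is allowed to search.

    Restricted: reliable but unable to answer many real questions.
    Unrestricted: broad coverage, but outdated or contradictory pages leak in.
    """
    if include_uncurated:
        return SOURCES
    return [s for s in SOURCES if s.curated]

print([s.name for s in sources_for_agent()])                        # reliable but narrow
print([s.name for s in sources_for_agent(include_uncurated=True)])  # broad but noisy

Neither setting of that flag fixes anything on its own, which is the point of the example: the real work is getting more of the sources into the curated, maintained state.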
Introducing the Human Factor
What both of these examples point to is something that goes beyond tooling or data. Building AI into how a team works requires changing human behaviour, and that is a different kind of problem entirely.
In the experimentation agent example, one potential solution was to ask everyone to centralise their analysis and results in a single place so the agent could find them. That sounds reasonable on paper. In practice it means asking people to change how they already work, in ways that add effort for them individually, for a benefit that accrues to the group.
Without a strong reason to do it and someone actively driving that change, most people will not.
This is the core coordination challenge and why ‘Building AI Into How Work Gets Done’ is such a huge leap from ‘Building AI into the Way You Work’.
The people who would benefit from better shared AI tooling are often not the same people who need to change their behaviour to make it possible. Getting those two groups aligned, and keeping them aligned as the tools and the work evolve, is an organisational design problem as much as a technical one.
This requires clarity about ownership, incentives for participation, and usually someone whose job it is to hold the thing together. Without that, even well-built AI tools tend to degrade back to individual use fairly quickly.
What Team AI Actually Requires
Building AI into how a team works is genuinely new territory for most people and organisations. The tools are moving quickly, the pressure to show progress is real, and everyone is figuring it out as they go.
In that context, it is understandable that some of the foundational work gets pushed back. But from what I have seen, there are three things that tend to make the difference between a team-focused AI initiative that holds together and one that fades back to individual use:
Shared information in good enough shape to be useful. This usually means some investment in data quality and information architecture before the AI layer goes on top. The tool will only ever be as good as what it has to work with.
A process decision, not just a tool decision. The group needs to agree on how work will flow differently, not just which tool they are going to use. Without that agreement, the tool sits on top of unchanged behaviour and adds friction rather than reducing it.
Ongoing ownership. Someone needs to be accountable for maintaining the quality of the shared resource over time, because without that it will drift back toward disorder relatively quickly.
None of this is the glamorous AI work we might all expect to take on. It sits below the surface of whatever the AI tool actually does. But it is what determines whether an AI initiative holds together after the initial enthusiasm fades.
This is exactly where Product Ops can add value. Product Ops can and should be involved in building AI tools for teams, and where it does, it carries a share of the accountability for how well those tools land: not just whether they work technically, but whether people actually use them, understand them, and continue to get value from them over time.
That means being involved in adoption, enablement, and ongoing maintenance in the same way Product Ops would be with any significant tooling change. Beyond the build itself, Product Ops is also well placed to create the conditions that allow those tools to work in the first place: defining how information should be structured and maintained, working through the process changes a shared tool requires, and making sure ownership is real rather than nominal.
These are operational problems before they are AI problems, and they are the kind of problems that tend to fall through the gaps when no one has an explicit remit to solve them.
Building AI into how work gets done is, at its core, an operational design challenge. The technology is the easier part.