Finding My AI Flow State
What does a genuinely useful relationship with AI tools actually look like in practice?
Not the version you hear about at conferences or read about in newsletters, but the version that exists when you’re sat at a computer that’s struggling to cope, trying to get through a normal working week.
The honest starting point is that I came into this without a clear approach or use case. I started picking up tools like Gemini, Claude and similar built-in AI assistants more regularly, running multiple things at once.
What I was trying to get to, and still am trying to get to, is something I’d describe as a productivity flow state.
The idea being that instead of working through tasks in small disconnected pieces, or grinding through things manually, you get to a place where there’s a more systematic rhythm to the routine stuff. Checking on data quality, pulling together recaps, surfacing things that need attention: it’s the kind of work that takes up real time but doesn’t demand your full cognitive capacity.
🤔
Where are you in this?
Are you still figuring out where AI actually fits into your day-to-day, or have you landed on something that genuinely works? And if so, what did it take to get there?
The other thing I’ve been using AI for, and I think this is probably the more interesting part for me at least, is as a thought partner when I’m in the early stages of a project or idea. The alternative is just spilling thoughts out organically and then backtracking to reorganise them, which is fine, but it’s slow and it tends to mean you get a decent way down one path before realising there was a better framing available earlier. Having something to think against, even imperfectly, changes that a little.
But I want to be clear about where it’s been harder than expected, because I think that’s the more useful thing to talk through. The issue I keep running into is that AI tools are built around natural language conversation, and they look like natural conversation in the interface, but they don’t really behave like one.
When I was trying to find out something specific, for example whether teams in our Notion instance were embedding Tableau dashboards in their pages, I couldn’t just ask the question the way I’d ask a colleague. The tool kept giving me back the general answer: “Yes, teams can do that.” Which I already knew. What I needed was the specific, factual answer about what was actually happening in our environment.
And of course, what you actually have to do is strip the softness out of the question, which ultimately changes your conversational behaviour.
Natural language has a lot of indirection built into it. When I say “Are people doing this?”, I’m actually asking for an audit of a specific dataset, filtered a certain way, compared against something else.
The AI handles the hard, concrete, calculable version of that question quite well. It handles the softer, more implicit version less well. That’s not because it can’t parse the words, but because the ambiguity in the question doesn’t give it enough to work with. So you end up having to think more precisely about what you’re actually asking, which is a slightly different skill to how most of us naturally communicate at work.
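To make that concrete, here’s a rough sketch of what the “hard, calculable version” of my Notion question looks like once the softness is stripped out. It’s hypothetical: it assumes the official notion-client Python SDK, an integration token in a NOTION_TOKEN environment variable, and that Tableau dashboards show up as embed blocks whose URL contains “tableau”, none of which came from the original conversation.

```python
# Hypothetical sketch: "Are teams embedding Tableau dashboards in Notion?"
# rewritten as an explicit audit rather than a soft question.
import os
from notion_client import Client

notion = Client(auth=os.environ["NOTION_TOKEN"])  # assumes a NOTION_TOKEN integration secret

def pages_embedding_tableau():
    """Return IDs of pages containing at least one Tableau embed block."""
    matches = []
    cursor = None
    while True:
        kwargs = {"filter": {"property": "object", "value": "page"}}
        if cursor:
            kwargs["start_cursor"] = cursor
        resp = notion.search(**kwargs)  # paginated search over pages the integration can see
        for page in resp["results"]:
            # Simplification: only the first page of top-level blocks is checked,
            # and nested child blocks are not traversed.
            blocks = notion.blocks.children.list(block_id=page["id"])
            for block in blocks["results"]:
                if block["type"] == "embed" and "tableau" in block["embed"]["url"].lower():
                    matches.append(page["id"])
                    break
        if not resp.get("has_more"):
            break
        cursor = resp["next_cursor"]
    return matches

print(f"{len(pages_embedding_tableau())} pages embed a Tableau dashboard")
```

The point isn’t the code itself; it’s that “Are people doing this?” only becomes answerable once you’ve specified the dataset (pages the integration can see), the filter (embed blocks) and the comparison (URLs containing “tableau”).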
💡
“Are we starting to think and communicate more like robots,
while AI is trying its best to imitate humans?”
There’s something interesting in that, I think. And it’s about which skills are becoming more valuable to organisations. I’ve been thinking about this in the context of people: talent acquisition and retention. For a while the strong signal from the market favoured deep specialism. If you had substantial, demonstrable expertise in a specific domain, industry and toolset, that was a clear signal of your suitability for a role.
Now I think the picture is shifting back because AI is quite good at that deep, narrow, focused execution work. What it’s less good at is generalising, handling ambiguity, connecting things across different contexts. Which means the ability to operate across multiple domains, to be useful in different kinds of conversations, to act as connective tissue between functions, feels like it’s starting to matter more … again.
Less the ‘T-shaped specialist’, more what I’d call a ‘broken comb’. Where you’ve got some depth in certain areas, you might have gaps in others, but you can pick things up and contribute across a wider range. People like this, the ones who can connect the dots and be creative with solutions, were highly favoured prior to the mass tech layoffs we’re still experiencing across the industry. I’ve shared my thoughts on that previously here:
So what do I want from AI?
What I want, practically, is to use AI to recover my own headspace. Not as a replacement for thinking, but as a way of reducing the volume of lower-order decisions and tasks so that there’s actual capacity left for the work that requires sustained attention.
Right now, if I want to spend twenty hours or more going deep on a single strategic problem, the opportunity cost feels very high. There’s just too much other noise. But if the routine parts of the job could be handled more automatically, the opportunity cost of the time spent going deep would change significantly.
You can produce more considered work on the things that genuinely need it, and you also start building up a queue of smaller ideas and experiments that you can test incrementally, rather than everything feeling like it has to compete for the same scarce thinking time.
I’m aware this is still early stage for me and most of this won’t be new for you. I also know broadly what I want my workflow to look like, but I haven’t fully built it yet. The learning curve is real, and it isn’t necessarily technical; it’s more about changing habits that have worked well enough for a long time.
And … that’s slower than installing a new tool.