KPMG says enterprise AI spend nearly doubled this year, from an average of $114 million to $207 million. Agent deployment went from 11% to 54% in four quarters. WalkMe put a number on where that money goes: 93% to tools and infrastructure, 7% to the people using them.
I read those numbers last week and nodded along. Then I lived them.
I run a multi-agent AI system for my business. Different AI tools handle different roles — one does strategy and coaching, one handles scheduled operations, one does technical builds. It's the kind of setup that sounds impressive in a LinkedIn post and mostly works well in practice.
Last week, I migrated the whole system to a new file structure. The architecture was sound — role-based folders, clear ownership rules, version control, documented procedures. Every file had exactly one home. On paper, this was the right move.
The quality of the output dropped immediately. I'm talking a 30-sessions-backwards kind of drop. A LinkedIn comment draft came back sounding like a consulting textbook. Statistics that had already been corrected showed up wrong again. A system that had been producing sharp, personalized work started producing generic, unreliable work.
Nothing was technically broken. Every file was where it was supposed to be. Every tool had the access it needed. The architecture was clean.
The problem was me. I had designed myself out of the process.
Before the migration, I manually handled the file saves between sessions. I uploaded every updated file myself, reviewed what changed, and kept a mental map of where everything lived. It was extra work and I wanted to automate it away.
When I did, I lost something I didn't realize I had: a clear picture of my own system. I couldn't tell you which version of a file was current. I didn't know if the AI had read the right reference documents before drafting content. I didn't catch that the same attribution error I'd already fixed had been regenerated — because the new setup didn't have the correction rules in context.
The tools worked fine. The infrastructure worked fine. The person who was supposed to be running the system had no idea what was happening inside it.
Sound familiar?
This is the 93/7 problem at a scale of one. I had the tools. I had the architecture. What I didn't have was the human-side work — the part where someone understands how the pieces connect, checks that the right context is loaded, and catches when the output doesn't match what it should.
That work isn't glamorous. It's reviewing outputs before they go live. It's knowing which reference documents need to be read before a draft gets written. It's building checklists into the process so the AI doesn't skip steps that matter. It's the kind of work that looks like it could be automated — right up until you automate it and everything quietly degrades.
A survey from Writer found that 75% of executives say their AI strategy is more for show than actual guidance, and that 29% of employees admit to actively working against their company's AI strategy (44% among Gen Z). KPMG found that the most-cited barrier to AI agent rollout is employee skill gaps, at 76%.
I keep coming back to the same place when I read this data. The gap between "we set up AI" and "AI is actually working" is not a technology problem. It's an attention problem. Someone has to be paying attention to what the AI is reading, what it's skipping, and whether the output matches reality. Right now, almost nobody is paying for that attention: 7 cents on the dollar.
I fixed my system this week by asking myself the same questions I'd ask if I were auditing someone else's setup. They're not complicated. They're just the questions that don't get asked when everyone is focused on the tools.
If your organization is running AI tools and the results feel inconsistent — or people just aren't using them — run through these before you open another vendor demo:
Who owns the context the AI is working from? In my system, nobody owned it after I automated the handoff. The AI was reading files, but nobody was checking which version, whether the files were current, or whether critical reference documents got loaded. In an enterprise setting, this looks like a team using a Copilot or internal AI tool with no one maintaining the knowledge base, prompt templates, or source documents it pulls from. The AI is only as good as what it reads. If nobody owns that input layer, the output degrades quietly and nobody notices until something goes wrong publicly.
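If you want to make that ownership concrete, here is roughly what the check can look like. This is a minimal sketch, not my actual setup: the manifest file, the folder names, and the first-line version convention are all placeholders for whatever your input layer really is.

```python
# A minimal sketch of an "input layer" check, run before any drafting session.
# The manifest format, file names, and first-line version convention are
# hypothetical examples, not the setup described above.
import json
from pathlib import Path

MANIFEST = Path("context_manifest.json")   # e.g. {"voice_guide.md": "v3", "corrections.md": "v7"}
REFERENCE_DIR = Path("reference_docs")

def check_context() -> list[str]:
    """Return problems with the reference documents the AI is about to read."""
    if not MANIFEST.exists():
        return ["No manifest: nobody has written down what the AI is supposed to read."]
    problems = []
    for name, expected_version in json.loads(MANIFEST.read_text()).items():
        doc = REFERENCE_DIR / name
        if not doc.exists():
            problems.append(f"MISSING: {name} is in the manifest but not on disk")
            continue
        text = doc.read_text()
        # Assumed convention: each reference doc states its version on its first line.
        first_line = text.splitlines()[0] if text.strip() else ""
        if expected_version not in first_line:
            problems.append(f"STALE: {name} does not declare version {expected_version}")
    return problems

if __name__ == "__main__":
    issues = check_context()
    if issues:
        print("Fix the context before the session starts:")
        for issue in issues:
            print(" -", issue)
    else:
        print("Context layer is current.")
```

The script is the least important part. What matters is that one named person owns the manifest and the check runs before every session, not just after something breaks.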
Is there a verification step between "AI generated this" and "this goes live"? I had a correction that was marked as done in one file but never actually applied to the output file. The status label said fixed. The content still had the wrong numbers. In an org, this is a report that gets generated, approved based on the summary, and published — without anyone checking the underlying data against the source. It's not a trust problem with the AI. It's a process gap where the human review step is either missing or treated as a formality.
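Here is one way to make that review step more than a formality, sketched on the assumption that you keep a running file of corrections with the wrong text and the fix side by side. The file names and format are illustrative, not a prescription.

```python
# Minimal sketch: refuse to publish a draft that contains a statistic you
# already corrected once. The corrections file format and paths are illustrative.
from pathlib import Path

CORRECTIONS = Path("corrections.tsv")   # tab-separated: wrong_text <TAB> corrected_text

def find_regressions(draft_path: Path) -> list[str]:
    """Return previously corrected errors that have reappeared in the draft."""
    if not CORRECTIONS.exists() or not draft_path.exists():
        return ["Cannot verify: corrections file or draft is missing."]
    draft = draft_path.read_text()
    regressions = []
    for line in CORRECTIONS.read_text().splitlines():
        if "\t" not in line:
            continue
        wrong, right = line.split("\t", 1)
        if wrong and wrong in draft:
            regressions.append(f"'{wrong}' is back; it was already corrected to '{right}'")
    return regressions

if __name__ == "__main__":
    problems = find_regressions(Path("drafts/linkedin_comment.md"))
    if problems:
        print("Draft fails verification:")
        for p in problems:
            print(" -", p)
        raise SystemExit(1)   # non-zero exit lets a pre-publish step stop here
    print("No known corrections have regressed.")
```

Wire something like this into whatever sits right before "publish" and the status label can no longer say fixed while the content says otherwise.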
Are your people building checks into the workflow, or are they expected to just "use AI"? My system broke because I had documented procedures but no built-in checkpoints that forced me to stop and verify. The procedures existed on paper. The enforcement didn't exist in the actual workflow. At scale, this looks like an AI policy that says "review all AI outputs before sending" without telling anyone what to review, what to check it against, or what "good" looks like. A mandate without a method is just noise.
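For contrast, here is a sketch of a mandate with a method: a draft cannot move to "ready" until a person has confirmed each named check. The checklist items and file conventions are placeholders; yours would spell out what "good" looks like for your content.

```python
# Minimal sketch: a draft cannot be marked "ready" until every named check
# has been explicitly confirmed. Items and file conventions are placeholders.
from pathlib import Path

CHECKLIST = [
    "statistics verified against the source documents",
    "voice matches published examples",
    "reference documents used for this draft are listed",
]

def mark_ready(draft: Path, confirmations: dict[str, bool]) -> Path:
    """Write a review record next to the draft, but only if every item was confirmed."""
    missing = [item for item in CHECKLIST if not confirmations.get(item)]
    if missing:
        raise ValueError("Not ready. Unconfirmed: " + "; ".join(missing))
    record = draft.with_name(draft.name + ".reviewed")
    record.write_text("Confirmed:\n" + "\n".join(f"- {item}" for item in CHECKLIST))
    return record

if __name__ == "__main__":
    try:
        # Only one of three checks confirmed, so this stops instead of quietly shipping.
        mark_ready(Path("drafts/weekly_post.md"), {CHECKLIST[0]: True})
    except ValueError as err:
        print(err)
```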
Do the people using the tools actually understand what the tools are doing? Not at a technical level — at a workflow level. Do they know what data the AI is pulling from? Do they know what it's not seeing? Can they tell when an output is wrong, or does everything look equally polished and equally trustworthy? My system produced a LinkedIn comment that sounded professional and authoritative. It was also generic, off-voice, and missing every reference document that makes my content mine. It looked fine. It wasn't. That gap between "looks fine" and "is fine" is where AI adoption stalls, because people either trust outputs they shouldn't or stop trusting the tools entirely.
Is anyone tracking what happens to AI outputs after they're generated? I had a recurring task producing weekly reports that went into a folder and never got reviewed. The reports existed. Nobody looked at them. The value was zero. In most organizations, this is happening right now with AI-generated summaries, draft documents, research outputs, and analytics — they're being produced on schedule and ignored on arrival. If the output doesn't have a defined next step, owner, and review trigger, you're paying for AI to fill up folders.
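Here is a sketch of what that tracking can look like, assuming outputs land in one folder and a reviewer leaves a small marker file when they have actually read something. Both conventions are made up for illustration; the point is that "ignored on arrival" becomes visible instead of silent.

```python
# Minimal sketch: flag AI outputs that were generated but never reviewed.
# The folder layout and the ".reviewed" marker convention are hypothetical.
import time
from pathlib import Path

OUTPUT_DIR = Path("outputs")      # where scheduled reports and drafts land
STALE_AFTER_DAYS = 7

def unreviewed_outputs() -> list[str]:
    """Return outputs older than the threshold with no review record beside them."""
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    stale = []
    for report in sorted(OUTPUT_DIR.glob("*.md")):
        reviewed = report.with_name(report.name + ".reviewed")
        if report.stat().st_mtime < cutoff and not reviewed.exists():
            stale.append(report.name)
    return stale

if __name__ == "__main__":
    for name in unreviewed_outputs():
        print(f"Generated and ignored: {name}")
```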
None of this requires a platform upgrade. It requires someone to sit between the tool and the workflow and ask whether the pieces are actually connected. That's the work that's getting 7 cents of every dollar — and it's the work that determines whether the other 93 cents produce anything useful.
Find one person on your team who's already using AI in their daily work, even informally, even without official approval. Ask them three questions: What are you using? What's working? What's frustrating?
You'll learn more about where your organization actually stands from that conversation than from any vendor dashboard.
Sources: KPMG Q1 2026 Quarterly Pulse Survey (2,110 respondents across 20 markets), WalkMe/SAP Enterprise AI Research 2026, Writer/Workplace Intelligence 2026 Enterprise AI Adoption Survey (2,400 respondents). Episode synthesis by Nathaniel Whittemore on the AI Daily Brief.
The Implementation Lane is a weekly newsletter about making AI work inside real organizations. Written by Amanda Crawford, an AI Implementation Specialist who builds systems in the gap between configuration and engineering. If someone forwarded this to you, subscribe here.

