Most decision-makers reading this have tried AI in the last eighteen months and quietly given up.

Not all the way. The subscription is still active. The browser tab gets opened occasionally. But the serious work, the work that actually moves the company forward, still happens the same way it did three years ago. In Excel. In meetings. In documents that take three people and four days to produce.

The conclusion most people draw from this experience is that AI isn't ready yet.

The conclusion that's actually correct is that AI without your data isn't ready yet.

Those are very different problems with very different solutions. The difference is what this newsletter exists to talk about.

What I watched happen

I work in a sophisticated environment. The people around me are smart, technical, well-resourced, motivated to find any edge they can. When the firm decided to seriously integrate AI a few months ago, the announcement landed with the kind of energy you'd expect: this is going to change how we work.

It did, eventually. But not the way the announcement implied.

What I noticed first was who walked away. The decision-makers. The senior people whose hours actually move the firm. They tried it, briefly, and went back to their existing systems. Their reasoning was always some version of the same sentence: this is impressive, but it doesn't know what I need it to know. It couldn't see their data. It didn't understand the firm's context. So they left, and the daily work continued the way it always had.

The younger employees kept tinkering. They still are. But the way they use it doesn't translate into executive-level decisions. They're drafting emails, summarizing documents, automating small tasks. Useful, but not the kind of work that changes how a firm makes money.

The result was a company-wide AI rollout that looked active on paper and produced almost no measurable impact on senior decisions. The people who could have deployed it for serious work were the first to walk away.

If you've watched some version of this at your own company, you're not alone. It's close to universal right now. Almost every meaningful AI rollout I've seen at companies between $50M and $1B in revenue is producing the same outcome: initial excitement among the people who matter most, rapid disillusionment, quiet retreat to the existing way of working.

This is happening not because AI failed, but because most rollouts are missing a specific piece. The piece nobody talks about because it's the unglamorous part. The piece every vendor pitch carefully steers around because they don't sell it.

The experiment that proved the pattern

A few weeks into watching this play out, I tried something on my own.

I picked a question I knew the AI would fail at. Not because the model wasn't smart enough, but because the answer required information the model couldn't possibly have. Information that lived only in our internal systems. The kind of question every business has dozens of.

I asked. The model told me, predictably, that it couldn't help.

Then I built a small piece of infrastructure. A connector that wired the model into the source where the data lived. After that, the same question came back answered in seconds. The same model. The same interface. But now it had access to what it needed.
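
To make "connector" concrete, here's a minimal sketch of the pattern, assuming the OpenAI Python SDK's tool-calling interface. The function name, the database, and the question are hypothetical stand-ins; the real version points at whatever internal system actually holds the answer.

```python
# A minimal sketch of a connector, not a production integration.
# Assumes the OpenAI Python SDK (v1.x) tool-calling API; the database,
# schema, and function name are hypothetical stand-ins.
import json
import sqlite3

from openai import OpenAI

client = OpenAI()

def query_sales_db(sql: str) -> str:
    """Run a read-only SQL query against an internal database (here: SQLite)."""
    conn = sqlite3.connect("file:sales.db?mode=ro", uri=True)
    rows = conn.execute(sql).fetchall()
    conn.close()
    return json.dumps(rows)

tools = [{
    "type": "function",
    "function": {
        "name": "query_sales_db",
        "description": (
            "Run a read-only SQL query against the firm's sales database. "
            "Table: sales(account TEXT, amount REAL, closed_at TEXT)."
        ),
        "parameters": {
            "type": "object",
            "properties": {"sql": {"type": "string", "description": "A SQL SELECT statement."}},
            "required": ["sql"],
        },
    },
}]

messages = [{"role": "user", "content": "Which accounts went quiet last quarter?"}]
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

# If the model decides it needs internal data, it asks for it through the
# tool; we run the query and hand the rows back so it can answer in context.
msg = response.choices[0].message
while msg.tool_calls:
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": query_sales_db(args["sql"]),
        })
    response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    msg = response.choices[0].message

print(msg.content)
```

The model was never missing intelligence. It was missing a way to ask.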

I want to be honest about how fast this was. The connector itself took a couple of hours. It only took a couple of hours because my team had spent years building the underlying systems that made our data accessible in a structured, reliable way. The plumbing was already there. The model just needed to be wired into something we'd already built.

That's worth naming, because it captures the entire pattern. A few hours of integration work, sitting on top of years of foundational infrastructure work. That ratio is the precondition almost every serious AI deployment depends on. The integration is fast when the data foundation is mature. The integration is slow, or impossible, when it isn't. Most companies are about to discover which category they're in.

But that's a topic for another issue. Back to what happened next.

The reaction that proved the thesis

The first experiment was a private validation. It told me the pattern worked.

The next move was applying it somewhere senior decision-makers were already feeling pain. A function where executives wanted answers faster than the firm could produce them, where the data existed but lived in systems people had to ask analysts to query for them.

I built another connector. Wired the model into the right systems. Then I showed it to upper management.

The reaction was immediate, unanimous, and slightly impatient: how do I get this on my computer.

These were the same people who had politely shrugged off the AI conversation for months. The same people who had tried the standalone version of the tool and concluded it wasn't useful for their work. The moment they saw it answering the questions they actually had, in the context of the data they actually cared about, the conversation changed completely.

Nothing about the model had changed. Everything about its surroundings had.

The mistake almost everyone is making

Here is what I want to argue, and what every issue of this newsletter for the next year is going to keep returning to in different forms.

The model is not the product.

The model plus your data is the product. The connective tissue between the two is where the value lives, where the moat lives, where the actual transformation happens. And almost nobody is investing there.

Look at what most companies are doing right now. They're picking a vendor. They're negotiating a license. They're rolling out a chat interface. They're running pilots that consist of employees opening that chat interface and asking it questions it has no way to answer because it has no access to anything the company actually knows.

Then everyone is surprised when the pilot underperforms.

The vendors selling these tools aren't lying. They're selling something incomplete. The model is real, the capability is real, but the model alone is a brilliant brain with no eyes, no memory, no awareness of the business it was hired to help. Of course it disappoints in production. It was always going to.

The companies that figure out the integration layer first are going to compound an advantage that's very hard to catch up to, because integration is patient work. It's not a procurement decision. It's not a vendor evaluation. It requires people who understand the data, people who understand the model, and a few months of unglamorous engineering on top of however much foundational data infrastructure already exists. While the rest of the market spends another quarter debating which model to use, the firms that solved the integration layer six months ago are quietly making decisions in minutes that used to take weeks.

The conversation worth having this week

If you take one thing from this issue, let it be this: the question your AI committee should be asking is not which model are we using. That conversation is already commoditized. The frontier models are all good enough.

The question is: where are we on connecting our model to the data the business actually runs on.

Which CRM. Which financial systems. Which databases. Which documents sitting in shared drives nobody has touched in a year. Which of these data sources matters most for your team. Which questions could you finally answer if the model could see them.

The honest version of that conversation usually surfaces a second, harder problem. Some of the questions you most want answered cannot be answered with any AI, because the data was never captured in the first place. The information lives in someone's head, or in a Slack thread from eight months ago, or in a process that nobody documented. When the AI fails to answer those questions, the failure is not the AI's fault. It's pointing at a data capture problem the company has been quietly avoiding for years.

This is, in some ways, the most useful thing AI does. It exposes which questions you can answer with what you've captured, and which questions you've been pretending to answer with intuition. The right response to that exposure is not to give up on AI. It's to fix the capture, then come back.

The thirty-minute taste

If you want to test a small piece of this thesis with your own eyes before you commit any budget to it, here's an experiment that takes thirty minutes.

Export a CSV of something substantial from your business. Last quarter's expenses. Your sales pipeline. Your customer activity log with whatever metadata you have. Open a current AI tool. Upload the file. Then start asking the questions you would normally hand to an analyst.

Find the anomalies. Tell me the story of where this money went. Show me the customers we've been neglecting. Break down the activity by team and tell me who's on track and who isn't.

Most people who actually do this are surprised. The capability has compounded since you last seriously tested it. The opinion you formed about these tools eighteen months ago probably understates what they can do today by an order of magnitude.

But understand what this experiment is and what it isn't. This is the smallest possible version of the pattern: a static file, uploaded once, queried in a sandbox. The real version is the model connected to the live source. To the actual database. To the system of record. So that the answers update as the business updates and the analysis is always current.
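
If it helps to see that difference in one frame, here's a sketch, with hypothetical file and table names. The first path answers from whatever was true the day you exported; the second answers from whatever is true the moment you ask.

```python
# A sketch of the gap between the CSV test and a live integration.
# File, table, and column names are hypothetical; a read-only SQLite
# connection stands in for the real system of record.
import csv
import sqlite3

def pipeline_snapshot():
    # The thirty-minute test: a file, frozen the moment it was exported.
    with open("pipeline_export.csv") as f:
        return list(csv.DictReader(f))

def pipeline_live():
    # The integrated version: the system of record, queried at ask time,
    # so the answer moves when the business moves.
    conn = sqlite3.connect("file:crm.db?mode=ro", uri=True)
    rows = conn.execute(
        "SELECT account, stage, last_activity FROM pipeline"
    ).fetchall()
    conn.close()
    return rows
```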

The gap between the CSV test and that live integration is exactly the gap most companies are failing to close right now. The CSV test gives you a taste. Closing the gap is the work.

What this newsletter is

The Knowledge Layer is a weekly read about closing the gap between the data your company already has and the decisions you're trying to make. Each week, one specific way to close it.

Not predictions. Not vendor reviews. Not commentary on which model is winning the benchmark wars. Concrete patterns of what's actually working when companies do this right, what fails when they don't, and how to tell the difference before you commit budget you can't unspend.

I'm a practitioner. The things I write about are things I'm building, watching, breaking, and rebuilding in production. The frame of this newsletter is what an experienced operator notices on Tuesday morning that other people might not have noticed yet.

Issue 2 is the case study behind the moment I just described. The function I picked. The questions that suddenly became answerable. What changed in how decisions got made. What the pattern looks like generalized to a business shaped differently than mine. The lessons are concrete and the structure is replicable across industries.

Until then, run the thirty-minute test. Notice what surprises you.

Forward this to one person who'd find it useful.

— Artiom

P.S. Replies to this email reach me directly. If you have a specific question about your own company's data-to-decision gap, I read everything that comes in.
