The Knowledge You Can’t Outsource


The standard playbook for applying AI to consulting or creative work seems natural enough: identify repetitive tasks, hand them off to an LLM, and reclaim the organization’s time for higher-order thinking. That order of operations is fundamentally flawed.

Understanding how you arrive at that higher-order thinking is crucial, because your organization’s expertise and judgment are what truly drive differentiation.

The false promise being sold is that AI lets you skip the “apprenticeship.”

That is the one thing it cannot do. And the misunderstanding is already producing consequential mistakes in how these tools are being deployed within professional services firms right now.

I’ve spent years doing process and taxonomy work for knowledge workers: mapping what people actually do to proprietary knowledge categories, building mechanisms for storing and retrieving collective organizational wisdom, and pinpointing where judgment lives, and where it doesn’t, within the repeatable processes of consulting firms. Recognizing the limits of what can be described is exactly what underscores why expertise and judgment are irreplaceable.

What that work reveals is that most people cannot accurately describe their own processes from memory. They skip internalized steps. They conflate two decisions into one. And more often than not, they describe the ideal, not the actual. The gap between what a professional thinks they do and what they actually do is where most automation failures are born. It takes observation, or, better yet, doing, to close it.

This isn’t a flaw. It’s a property of expertise. 

The philosopher Michael Polanyi described it as “tacit knowledge” – we know more than we can tell. It’s the creative director who senses a campaign is off-brief before she can say why. The BI analyst who knows which number doesn’t smell right. The consultant who knows what their client’s reaction will be before the client does.

Expertise lives in the doing, not in description. You cannot extract it through an interview. 

When working with prompt engineering, the real deliverable isn’t the prompt. It’s the success criterion. What does a good output actually look like? What distinguishes a plausible result from a correct one? That standard can only be written by someone who has done the task enough times to recognize subtle failure. Without this experience, you risk automation producing outputs that no one is qualified to evaluate.
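To make the idea concrete, here is a minimal sketch of what it looks like to turn a success criterion into an executable check. Everything specific in it, the required sections, the word limit, the banned filler phrases, is a hypothetical stand-in; in practice, those rules can only be written by someone who has done the task enough times to know what failure looks like.

```python
def evaluate_summary(text: str) -> list[str]:
    """Return a list of failure reasons; an empty list means the draft passes.

    The criteria below are illustrative assumptions, not a real rubric.
    """
    failures = []

    # Assumed deliverable structure: the summary must cover these sections.
    for section in ("Recommendation", "Risks"):
        if section.lower() not in text.lower():
            failures.append(f"missing section: {section}")

    # Assumed length bound for the deliverable.
    if len(text.split()) > 300:
        failures.append("over the 300-word limit")

    # Phrases an experienced reviewer recognizes as plausible-but-wrong tells.
    for phrase in ("as an AI", "in today's fast-paced world"):
        if phrase.lower() in text.lower():
            failures.append(f"contains filler: {phrase!r}")

    return failures
```

The point is not the checks themselves but who can write them: each line encodes a judgment that only comes from having evaluated enough real outputs to know where they go wrong.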

The professionals who get genuine leverage from AI tools have almost always done the work first. They’ve built the judgment. They can supervise the automation the way a skilled editor supervises a junior writer: not rewriting every sentence, but knowing immediately when the voice is wrong, when the logic skips, when the conclusion doesn’t follow from the premise.

That capacity isn’t something AI provides. It’s what earns you the right to use AI well.

The promise of these tools is real. But it is not that you stop doing things. It’s that doing them well, for long enough, gives you the judgment to delegate them intelligently.

The organizations getting real leverage from AI are the ones that treat expertise as a prerequisite, not an obstacle. 

The apprenticeship isn’t a barrier. It’s the asset.

Andy Roach