In my AI for Business Owners Cohort, we typically select a few AI projects to implement for an owner as examples of what these tools can do when you learn to work with them.

I wrote about one of these projects that we did for a marketing agency owner who wanted a new lead generation system.

The mechanics were straightforward: scrape a lead database, enrich each record with data from the web, score them against her ideal client profile, and deliver qualified prospects to her inbox every morning.
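To make the mechanics concrete, here is a minimal sketch of that scrape-enrich-score-deliver loop. Everything in it is hypothetical: the profile fields, the scoring weights, and the sample leads are placeholders I made up, not the real system.

```python
# Hypothetical ideal client profile -- the criteria a lead is scored against.
IDEAL_CLIENT_PROFILE = {
    "industries": {"saas", "ecommerce"},  # verticals she sells to (assumed)
    "min_revenue": 1_000_000,             # minimum annual revenue (assumed)
    "min_employees": 10,                  # minimum headcount (assumed)
}

def enrich(lead):
    """Stand-in for web enrichment. In practice this step would call an
    enrichment API; here we just fill in defaults for missing fields."""
    lead.setdefault("revenue", 0)
    lead.setdefault("employees", 0)
    lead.setdefault("industry", "unknown")
    return lead

def score(lead, profile):
    """Score a lead 0-3: one point per profile criterion it meets."""
    points = 0
    if lead["industry"] in profile["industries"]:
        points += 1
    if lead["revenue"] >= profile["min_revenue"]:
        points += 1
    if lead["employees"] >= profile["min_employees"]:
        points += 1
    return points

def qualify(leads, profile, threshold=2):
    """Enrich every lead, then keep and rank those at or above threshold."""
    enriched = [enrich(dict(lead)) for lead in leads]
    return sorted(
        (lead for lead in enriched if score(lead, profile) >= threshold),
        key=lambda lead: score(lead, profile),
        reverse=True,
    )

# Two made-up records standing in for the scraped lead database.
leads = [
    {"name": "Acme SaaS", "industry": "saas",
     "revenue": 2_000_000, "employees": 25},
    {"name": "Tiny Shop", "industry": "ecommerce", "revenue": 50_000},
]
print([lead["name"] for lead in qualify(leads, IDEAL_CLIENT_PROFILE)])
# → ['Acme SaaS']
```

The delivery step (emailing the qualified list each morning) is just whatever sits downstream of `qualify`; the interesting part is that the whole "system" reduces to criteria plus a loop.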

Within two weeks, she'd closed several new clients. The system worked exactly as designed.

And that's when I saw the problem.

The Untapped 100-300x

That system we built? It's operating at maybe 10% of its potential.

I could tune it to deliver 100-300x more qualified leads. The technical capability is there. The data sources exist. The enrichment logic can scale.

The issue is that the project will never end. It should never end. Working with agents and AI like this is a continuous process. It’s essentially just “running the company.”

For example, I’d guess that our marketing agency owner’s definition of what makes a "good lead" changes as data comes in. She wins a healthcare client and suddenly wants more healthcare leads. She loses a pitch to a company that was too small, so the minimum revenue threshold needs adjustment. She launches a new service and the entire ideal client profile shifts.
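Notice that each of those adjustments is just an edit to the criteria, not a rebuild of the system. A hypothetical illustration, using an ideal-client profile stored as plain editable data (the fields and numbers are assumptions, not her real thresholds):

```python
# The "program" is the profile itself -- editable data, not code.
profile = {
    "industries": {"saas", "ecommerce"},
    "min_revenue": 1_000_000,
}

# She wins a healthcare client -> widen the industry filter.
profile["industries"].add("healthcare")

# She loses a pitch to a too-small company -> raise the revenue floor.
profile["min_revenue"] = 2_000_000

print(profile)
```

Each business event maps to a one-line change, which is exactly why the work is ongoing conversation rather than one-time implementation.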

The AI could adapt to all of this. That's what these systems are good at.

The One-and-Done Fallacy

This is going to change the traditional consulting model and challenge what I see emerging from companies like McKinsey in response to AI.

Traditionally, you billed by the hour or by the project. Many people see AI moving us away from hourly billing toward outcome-based pricing: if I can get you x result by y date, you pay me z amount. AI can make outcomes more measurable and definable.

That’s true, but it misses the iteration and learning aspects of AI. I see AI systems as continuous collaborations and conversations, not fixed configurations. They need ongoing dialogue about what matters, what's changed, and what "better" looks like this week.

The agency owner I worked with didn't need me to build a bigger system. She needed to learn how to talk to the one she had.

If she has to renegotiate every iteration with me, or if a Fortune 500 company has to do that with McKinsey, the transaction costs escalate to the point where you’d rather just pass.

The Knowledge Programmer Gap

This is why I'm shifting my focus to training.

Companies don't just need more AI implementations. They need people who can work WITH AI systems over time. People who can:

  - Translate shifting business outcomes into system instructions

  - Recognize when the AI's output has drifted from what's valuable

  - Tune, adjust, and evolve the system as the business changes and the technology changes

I'm calling these people "knowledge programmers." Not because they write code—most won't. But because they program knowledge into systems that can act on it.

Every company is going to need to develop this capability internally. You can't outsource all of it affordably, and who wants to outsource their key competitive advantage? You can't buy it as an off-the-shelf product. It's a capability your company either has or doesn’t.

What I'm Watching

The companies that figure this out first will have a compounding advantage. Their AI systems will get better every week because someone is actively teaching them what "better" means.

The companies waiting for turnkey solutions will keep buying implementations that work great for 90 days, then slowly drift into irrelevance.

The question isn't whether you're using AI. It's whether anyone on your team knows how to keep the conversation going.
