I keep running into the same conversation with legacy business operators.

They watch AI generate something. Code, content, designs. Doesn't matter. They immediately start cataloging what's wrong with it.

"See? It's only 75% of the way there. Still needs a human to get it to 100%. AI can't replace real expertise."

They're technically correct. And missing the point.

Three years ago, getting anything to 75% in ten seconds with AI was science fiction. Now it's boring. The question isn't whether AI reaches 100% today. It's how fast that gap closes.

And even on the way there, the better question is: "what changes now that the cost of that first 75% is practically zero?"

This is confirmation bias in action. When you're sitting on something valuable, your brain actively hunts for reasons the new thing won't work.

It's defensive pattern matching dressed up as critical thinking.

The established SaaS company looks at AI-generated code and focuses on the bugs. Not the fact that a junior dev with Claude just built in two days what used to take their team two weeks.

The content agency sees AI writing that needs editing. Not the fact that their production costs just got compressed by 70%.

The consulting firm spots strategic gaps in AI analysis. Not the fact that research that cost $50k last year now costs $500.

They're measuring AI against perfection instead of measuring it against what was possible 18 months ago.

Here's what this perspective completely misses.

The rate of improvement isn't linear. It's not like AI gets 5% better each year and we can all adjust slowly.

The gap from 75% to 90% might take six months. The gap from 90% to 95% might take another six. And that last 5%? Turns out most customers don't actually need it.

By the time you finish explaining why AI can't do what you do, someone just shipped a product that's good enough for 80% of your market.

This is happening in real time across industries. Some companies are leaning in hard; others are hiding the fact that they have no AI strategy at all behind the argument that "it's not good enough."

Design tools that were "obviously insufficient" two years ago are now genuinely threatening mid-tier agencies.

Code generation that was "just autocomplete" eighteen months ago is now replacing entire categories of development work.

Content that "clearly needed human polish" last year is now passing editorial review at scale.

The people who focused on the 25% gap missed the fact that the 75% kept getting cheaper, faster, and better.

The truth is that defensive pattern recognition feels like wisdom but functions like blindness.

You're not wrong that AI has limitations. You're wrong that those limitations are permanent moats.

The move isn't to pretend AI is perfect. It's to assume those gaps close faster than you're comfortable with and position accordingly.

Because the companies disrupting your space aren't waiting for AI to hit 100%. They're shipping at 75% and iterating from there.

And they're moving faster than you are.

If you have ideas you want me to write about, hit reply!

Until next week,

Martin

PS: Go connect with me on LinkedIn and say hi! It's always fun to chat with readers.

PS2: Want to kickstart your business's AI transformation? Fill out this interest form and my consultancy will be in touch ASAP for a free discovery call.
