Every executive I speak with right now has a version of the same story: we invested in AI, we ran the pilots, and somehow the results didn’t justify the hype. The instinct is always to blame the technology — wrong model, wrong vendor, wrong timing. I’ve come to believe that instinct is almost always wrong.
The real failure happens before a single line of code is written. It happens in the room where leaders define what they’re trying to solve — or more accurately, where they fail to define it with any real precision.
What the Research Shows
Peter Arvai, CEO of Prezi, makes an argument in this Inc. piece that cuts through a lot of the noise: AI projects don’t fail because the technology is immature. They fail because organizations bring poorly formed questions to capable tools. The model isn’t the bottleneck. The thinking is. Arvai’s position is that most companies are asking AI to do things before they’ve clearly articulated what success even looks like — and then they’re surprised when the output doesn’t move the needle.
This isn’t a technical critique. It’s a leadership critique. And coming from someone running a company whose entire product is built around how humans communicate and structure ideas, it carries particular weight.
Why This Changes the Playbook
Here’s what I think this really means for organizations: we’ve been treating AI adoption as an engineering problem when it’s actually a strategic clarity problem. The companies getting real returns from AI aren’t necessarily the ones with the biggest budgets or the most sophisticated infrastructure. They’re the ones who spent time upstream — defining the decision they’re trying to improve, the workflow they’re trying to compress, the outcome they’re willing to be held accountable for.
Most leaders get this wrong in a few predictable ways:
- Vague mandates produce vague results. “Use AI to improve customer experience” is not a brief. It’s a wish. Teams will build something, demo something, and then nothing changes operationally.
- Question-framing gets delegated to the wrong people. Technical teams are excellent at answering questions but were never trained to interrogate whether the right question is being asked in the first place.
- Activity pressure overrides strategic discipline. There’s enormous pressure to show AI activity — pilots, announcements, budget allocations — which creates incentives to start building before the thinking is done.
- Success metrics get bolted on after the fact. Teams end up measuring effort and output rather than actual business impact.
The organizations that will win with AI are not the fastest adopters. They are the most disciplined questioners.
The second-order effect of this is significant. If your AI projects are consistently underdelivering, your organization starts to develop learned helplessness around the technology. The cynicism compounds. Talented people who could drive real transformation start to disengage. You end up spending more on AI and trusting it less — a genuinely dangerous position to be in as the competitive landscape shifts.
Key Takeaways for Leaders
- Before any AI initiative gets a budget, require the team to articulate the specific decision or outcome it will improve — in one sentence.
- Treat question formulation as a senior leadership responsibility, not something to delegate entirely to data science or IT.
- Audit your current AI projects against concrete business metrics, not activity metrics like “models deployed” or “prompts processed.”
- Build in a pre-mortem practice: before launch, ask what a failed version of this project looks like and whether you’d actually know the difference.
- Recognize that AI fluency for executives means knowing how to frame problems precisely, not knowing how the models work under the hood.
Interesting Articles to Read
- Companies Are Failing in Their Efforts to Become Data-Driven (HBR) — Explores why most organizations struggle to translate data and AI investments into real business outcomes, pointing to strategy and culture over technology as the root cause.
- Why Do Most Machine Learning Projects Fail? (McKinsey) — McKinsey analysts examine how misaligned problem framing and unclear success metrics derail AI initiatives long before deployment.
- How to Avoid an AI Implementation Failure (MIT Sloan Management Review) — Outlines the organizational and strategic conditions that determine whether AI projects deliver value or quietly disappear after the pilot phase.
“`
