AI for the Over 40 – Week 24: Why Your AI Project Should Start in Chat

In Week 20, I shared MIT’s finding that 95% of enterprise AI projects fail to reach production. The explanation was the learning gap: organizations try to build solutions they do not fully understand for problems they have not clearly defined. I have been thinking about that statistic ever since—not because it surprised me, but because I keep watching it happen in real time.

As we have started helping clients move from AI literacy to organizational transformation, I have seen the same trap appear again and again. It is easy to fall into, and we nearly fell into it ourselves. I call it the solutioning trap.

The solutioning trap

Here is how it usually works. A client identifies a process they want to automate with AI. They describe the outcome they want, and sometimes they even specify the technology they think should be used. The consultant, eager to help, starts building toward that specification.

The problem is that nobody has yet validated whether the proposed approach is the right one. Nobody has tested whether the AI can actually do what everyone assumes it can do. Nobody has uncovered the blind spots in how the challenge was originally framed.

So the team starts building, and once that happens, they are committed. Time and money get invested before the approach has been proven. If the solution turns out to be wrong, the work has to be discarded and the process starts over. That is what the failure rate looks like in practice. It is not usually incompetence. It is building too early.

The moment I pumped the brakes

A few weeks ago, a client approached us about automating their quoting process. They had done a lot of homework and had even used AI to generate a detailed requirements and solution document.

Our consultant did what many good consultants naturally do: he started testing approaches to validate the client’s vision. By the time I found out, we were already moving toward proving the client’s proposed solution before we had even signed a statement of work. That is when I stepped in and pumped the brakes.

The pushback was understandable. The client had told us what they wanted. They had handed us a spec. Why not just build what they asked for?

Because with AI projects, that frame is usually wrong. Our job is not simply to build what the client requests. Our job is to determine whether what they are asking for is actually what they need. That means validating multiple approaches, surfacing blind spots, and resisting technology lock-in before we know what works.

That conversation pushed me to formalize something I had been learning from Dr. Jules White at Vanderbilt University. He calls it Conversation First Prototyping. I have adapted the name to something I think most people will recognize immediately: Chat First Prototyping.

The Chat First Prototyping framework

The core idea is simple: before you build anything, manually prototype the solution inside a chat interface. The consultant acts as the orchestrator, moving step by step through the workflow with AI, validating each part before anyone writes code or builds automation.

This approach breaks down into three phases.

Phase 1: discovery and validation

This begins like a traditional discovery effort. You observe the current process, document how work actually happens, and understand the real workflow instead of the idealized version.

But then you add something critical: chat-based testing. The consultant manually runs the proposed steps through an AI chat interface to see what actually works. In the quoting example, that might include testing whether the LLM can extract the right information from customer service transcripts and contracts, transform that information into something usable, and match customer requests against contract terms to generate options and pricing.

Each step is validated in chat before anyone writes a line of code. That process exposes issues early—security and permission gaps, bad data formats, unrealistic expectations about model capability, or process steps that do not work the way people assumed. The deliverable from this phase is not software. It is validated knowledge: what works, what does not, and what should actually be built.
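
To make this concrete, here is a minimal sketch of what one of those chat-based validation steps might look like if a consultant scripted it for repeatability. The transcript, the fields to extract, and the model choice are all hypothetical, and in practice the same test can be run by simply pasting the prompt into a chat window:

```python
# Requires the openai package (pip install openai) and an OPENAI_API_KEY
# environment variable. Everything below is illustrative, not a prescription.
from openai import OpenAI

client = OpenAI()

# Hypothetical sample transcript standing in for real customer service data.
sample_transcript = """
Customer: We need a quote for 500 units of the industrial sensor kit,
delivered by end of Q3. Our contract should have volume pricing tiers.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; test whichever model the project would use
    messages=[
        {
            "role": "system",
            "content": (
                "Extract the product, quantity, delivery deadline, and any "
                "pricing terms from the transcript. Answer 'unknown' for "
                "anything the transcript does not state."
            ),
        },
        {"role": "user", "content": sample_transcript},
    ],
)

# A human reads this output against the source transcript and judges it
# before anyone builds automation around the step.
print(response.choices[0].message.content)
```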

Phase 2: vibe-coded prototype

Only after Phase 1 do you build anything. And even then, the first build is a prototype. Using natural language and AI-assisted coding, you create a working proof of concept based on what has already been validated.

This keeps the work fast and relatively low-cost. The goal is not production readiness. The goal is to prove that the validated workflow can function when automated.
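
As a rough illustration, a vibe-coded prototype might be little more than the Phase 1 prompts wrapped in thin functions and chained together. This is a hypothetical sketch, not a production design; the function names, fields, and model are all assumptions layered on the quoting example:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(prompt: str) -> str:
    """Run one already-validated chat step as a repeatable call."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical; use whatever model Phase 1 validated
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Each function below wraps a prompt that Phase 1 chat testing already proved.
def extract_request(transcript: str) -> str:
    return ask(
        "Extract the product, quantity, and delivery deadline from this "
        f"customer service transcript:\n\n{transcript}"
    )


def match_contract(request_summary: str, contract_terms: str) -> str:
    return ask(
        f"Given this customer request:\n{request_summary}\n\n"
        f"and these contract terms:\n{contract_terms}\n\n"
        "list the applicable options with pricing."
    )


def draft_quote(options: str) -> str:
    return ask(f"Draft a customer-ready quote from these options:\n\n{options}")
```

The value of a structure like this is that each function corresponds to a step a human already validated in chat, so the prototype proves the handoffs between steps rather than re-proving the steps themselves.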

Phase 3: refinement and production

This is where traditional development begins. Features are expanded, systems are hardened, integrations are finalized, and the solution is prepared for real use.

But by the time a team reaches this phase, the biggest risks have already been removed. The idea has been tested in chat. It has been tested again in prototype form. The remaining task is not discovery. It is productionizing what already works.

Why this changes everything

This framework solves several problems at once.

Pricing becomes more realistic. Instead of trying to estimate a complete solution riddled with unknowns, you can scope discovery and validation first. That is far easier to define and price.

Risk stays lower. A short validation phase and a prototype phase cost far less than diving directly into a full build that may need to be abandoned.

Technology lock-in is reduced. You do not commit to a specific model, platform, or architecture before you have seen what actually works.

Clients build literacy. When clients participate in chat-first testing, they see AI’s strengths and limitations for themselves. They become better decision-makers.

The process becomes repeatable. Different clients and different use cases can still follow the same pattern: discovery, validation, prototype, production. That is how you build a practice instead of improvising every engagement from scratch.

What would have happened otherwise

If we had followed the client’s AI-generated specification without this framework, I believe we would have spent significant time trying to force the wrong approach to work. We would have been locked into assumptions before validating them. And if the approach failed, we would have thrown away both time and effort before starting over.

That is what failure often looks like in enterprise AI. Not a lack of intelligence. Not a lack of effort. Just moving from idea to build too fast.

How this connects to the rest of the journey

This framework is not separate from the first 24 weeks of this series. It is the organizational application of the same ideas that have been showing up all along.

Week 9 was about literacy before agency. Clients cannot delegate decisions about AI if they do not understand what AI can and cannot do.

Week 15 highlighted barriers to adoption, including the tendency to assume the first idea is the right one because we have not imagined alternatives.

Week 18 was about the shift from consumer to creator—stopping the habit of accepting whatever the tool gives you and starting to architect what you actually need.

Week 20 emphasized "build literacy, then buy." You cannot make good platform or vendor decisions until you understand the problem and the range of viable approaches. Chat First Prototyping helps build that understanding before major commitments are made.

Your Week 24 challenge: question your next AI project

If you are considering an AI implementation—whether you are building internally or working with a consultant—start with a few hard questions.

Are you solutioning too early? Have you already locked onto a technology or architecture before validating that the approach works?

Could you test the workflow in chat first? Before automation, could you manually walk the process through an AI interface and see what happens at each step?

What assumptions are you making that you have not yet validated? About the data, the workflow, the model, or the outcome?

Who would act as the orchestrator? In a chat-first prototype, someone needs to connect the steps manually and learn from the process. What would that person discover before any code gets written?

The bottom line

Twenty-four weeks ago, I started this series to document my own AI transformation. At first, the goal was personal literacy. Over time, the challenge became larger: how do we help organizations navigate AI projects without becoming part of that 95% failure rate?

Chat First Prototyping is my current answer. Not because it is the only framework, but because it addresses the problems I keep seeing: projects that start building before validating, technology decisions made too early, consulting efforts that cannot be scoped well, and clients who never build enough literacy to make smart decisions.

The principle is simple: do not build until you have proven it works in chat. Let a human orchestrate before automation takes over. Validate first. Build second.

Start in chat. Validate everything. Then build what you know works.

This post is part of my "AI for the Over 40" series. It first appeared on LinkedIn: AI for the Over 40 [Week 24]: Why Your AI Project Should Start in a Chat Window.
