You've learned to write clear prompts, iterate on outputs, assign effective roles, and provide strong examples. Your results have improved considerably.
Yet something still feels off. The blog post reads generically. The report covers all the points but misses the insight your manager would catch. The analysis could belong to any company in your industry.
The output is technically correct. It's just not good.
The AI does what you asked, but results feel interchangeable. You can iterate endlessly, but you're polishing something built on a shaky foundation. The problem isn't your prompting technique. It's what you're asking the AI to do in the first place.
The Hidden Complexity in Simple Requests
It's 3 PM on a Thursday. Your manager stops by: "Hey, can you quickly put together a preliminary analysis of our Q3 results by end of day? Nothing formal, just something we can discuss in tomorrow's leadership meeting."
You nod. She walks away. And you immediately start thinking through what needs to happen, not consciously, just as automatic decisions that unfold in your mind.
Who's the audience? Leadership team. What do they care about? Big-picture trends and action items, not detail. How polished? Preliminary means rough edges are fine, but the logic needs to be sound. What format? Probably a one-pager with bullets. What's the real question? Not "what happened in Q3" but "what does this mean for Q4 planning?"
By the time you open your laptop, you've made a dozen invisible decisions that will shape everything you produce. Your manager's brief request contained almost none of this context, but you filled it in automatically because you know your organization, your role, and what "preliminary analysis for leadership" actually means in your workplace.
Now imagine asking an AI to do the same thing.
Why AI Produces Generic Outputs
When you prompt an AI with "Write a preliminary analysis of our Q3 results for tomorrow's leadership meeting," the model can't make those invisible decisions. It doesn't know your CFO hates lengthy prose, that leadership meetings run exactly 30 minutes, or that "preliminary" in your organization means "defensible but not perfect."
So it produces the most statistically average version of "a preliminary analysis." The output is grammatically correct, reasonably structured, and generic. This isn't a prompting failure. It's a decomposition failure.
What Humans Know That AI Doesn't
You don't consciously realize you're breaking complex requests into smaller decisions. It happens automatically, informed by professional experience and organizational context.
Humans carry invisible knowledge that shapes every work product: your company's SOPs, preferred templates, unstated norms, political sensitivities, and the specific meaning of terms like "preliminary" or "strategic" in your context. You know that your CEO prefers data visualizations, that leadership is sensitive about customer retention this quarter, and that this analysis exists to justify a budget decision next month.
AI has none of this. It defaults to the most probable version across all possible organizational contexts, which means it's necessarily generic. The gap isn't in the AI's capabilities. It's in the context you haven't provided.
The John vs. Alice Test
Consider writing the same Q3 analysis for two different audiences. Notice that the difference isn't just tone or format but the sequence of decisions you make before writing.
For John (your peer in operations):
- Start by identifying operational bottlenecks that affected the quarter
- Frame metrics around his department's contribution
- Use conversational language he'd use in planning meetings
- Structure as a narrative that shows cause and effect
- Include specific team examples he'll recognize
For Alice (external client consultant):
- Start by establishing strategic context within the industry
- Frame metrics around competitive positioning
- Use formal language that signals professional credibility
- Structure as a framework that shows systematic thinking
- Keep internal details vague to protect proprietary information
Same data. Same task label. Completely different sequences of decisions made before you write a single word.
This process of breaking a complex request into an ordered sequence of discrete decisions is task decomposition. It's what separates mediocre AI outputs from genuinely useful ones.
What Task Decomposition Actually Means
Task decomposition is making your invisible decisions visible, and sequencing them, before you prompt.
Instead of asking the AI to "write a Q3 analysis," you first identify what that request requires and in what order:
- What's the purpose? (Inform budget planning)
- Who's the audience? (Leadership with limited technical background)
- What's the real question? (Are we on track for annual targets?)
- What level of detail? (Trends, not line items)
- What decisions does this support? (Q4 resource allocation)
- What format? (One-page executive summary)
- What tone? (Direct and actionable)
- What context matters? (Previous quarter showed concerning churn)
Each decision changes what you'd produce. None are obvious from the original request. Once you've decomposed the task, you're no longer asking the AI to make judgment calls it can't make. You're giving it a specification of exactly what success looks like in your context.
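One way to make this concrete is to capture the decomposed decisions as an explicit checklist and assemble them into a prompt programmatically. The sketch below is illustrative, not a prescribed workflow: the field names and values are drawn from the checklist above, and `build_prompt` is a hypothetical helper, not part of any AI library.

```python
# Capture each invisible decision as an explicit key/value pair.
# Fields mirror the checklist above; values are examples, not prescriptions.
decisions = {
    "Purpose": "Inform Q4 budget planning",
    "Audience": "Leadership with limited technical background",
    "Real question": "Are we on track for annual targets?",
    "Level of detail": "Trends, not line items",
    "Decisions supported": "Q4 resource allocation",
    "Format": "One-page executive summary",
    "Tone": "Direct and actionable",
    "Context": "Previous quarter showed concerning churn",
}

def build_prompt(task: str, decisions: dict) -> str:
    """Turn a bare task label plus explicit decisions into a full specification."""
    lines = [f"Task: {task}", "", "Requirements:"]
    lines += [f"- {key}: {value}" for key, value in decisions.items()]
    return "\n".join(lines)

prompt = build_prompt("Write a preliminary analysis of our Q3 results", decisions)
print(prompt)
```

The point isn't the code itself; it's that every line of the resulting prompt is a decision you made deliberately, rather than one the model had to guess.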
The Q3 Analysis, Decomposed
Original request: "Can you put together a preliminary analysis of our Q3 results by end of day?"
After decomposition:
- Real question: Are we on track for annual revenue targets despite Q3's slower growth?
- Audience: Five executives deciding on Q4 marketing spend tomorrow
- Scope: Revenue trends and customer acquisition costs, ignore operational details
- Format: One-page memo with three sections: What happened, What it means, What we recommend
- Tone: Candid and solution-oriented, acknowledge the challenge, emphasize the path forward
- Constraints: Defensible with data, no detailed methodology needed
- Success criteria: Leadership can make a confident spending decision after reading this
The brief request became seven explicit decisions in a clear sequence. You haven't written a prompt yet. You've made visible what was previously implicit.
What Changes When You Decompose First
Less generic content. The AI works from your specific requirements instead of averaging across all possible Q3 analyses. Output reflects your organization's priorities, not statistical averages.
No "AI voice." That distinctive flatness comes from the model hedging across multiple contexts. Explicit context through decomposition lets the AI commit to a specific approach.
Fewer revisions. Most revision stems from misaligned expectations. Decomposition aligns them upfront. You're refining something fundamentally sound, not fixing the wrong solution.
Common Decomposition Mistakes
Decomposing too granularly. Identify decisions, not scripts. "Determine audience needs" is useful. "Write an opening sentence acknowledging Q2 challenges" is a script that leaves the AI's strengths nothing to do.
Skipping "obvious" context. Your industry terminology, leadership's preferred frameworks, what "preliminary" means in your organization: none of this is obvious to an AI. Make it explicit.
Decomposing format instead of thinking. "Three paragraphs with bullets" is formatting. "Identify the strategic implication of each metric" is the decision that shapes meaningful output.
Forgetting the why. Each decision should connect to what makes this task succeed. If you can't explain why a decomposition step matters, skip it.
The Foundation for Everything Else
Task decomposition isn't separate from what you've learned. It's the missing step that makes everything more effective.
Roles work better when you've decomposed enough to know which role helps. Examples hit harder when you've identified what aspect of execution needs demonstration. Iteration becomes faster when you're refining a well-decomposed foundation.
Most importantly, decomposition shifts your relationship with AI from "assistant who should figure this out" to "tool I'm directing precisely." The AI does the heavy lifting, but you do the thinking that determines whether that lifting produces something genuinely useful.
What Comes Next
Once you've broken a complex task into discrete decisions, you can turn those decisions into a sequence of connected prompts, each building on the last, rather than accomplishing everything in a single interaction.
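To preview the shape of that idea, here is a minimal sketch of chaining: each step's prompt carries forward the previous step's output. The `call_model` function is a placeholder standing in for whatever AI interface you use; here it just echoes the prompt so the structure is visible.

```python
def call_model(prompt: str) -> str:
    # Stand-in for a real AI call; returns a placeholder response
    # built from the first line of the prompt.
    return f"[response to: {prompt.splitlines()[0]}]"

# Each step is one decomposed decision, run as its own prompt.
steps = [
    "Identify the real question this Q3 analysis must answer.",
    "List the three revenue trends most relevant to that question.",
    "Draft a one-page memo for leadership based on those trends.",
]

context = ""
for step in steps:
    # Later prompts include the prior step's output as working context.
    prompt = f"{step}\n\nPrior work:\n{context}" if context else step
    context = call_model(prompt)

print(context)
```

Each prompt stays small and answerable, and the final draft inherits the reasoning done in the earlier steps instead of forcing one prompt to do everything at once.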
In the next chapter, you'll learn how to apply decomposition to get AI responses that align more closely with your desired output and, interestingly, feel less "AI generated."