
Chapter 7: Task Decomposition

You've learned to write clear prompts, iterate on outputs, assign effective roles, and provide strong examples. Your results have improved considerably.

Yet something still feels off. The blog post reads generically. The report covers all the points but misses the insight your manager would catch. The analysis could belong to any company in your industry.

The output is technically correct. It's just not good.

The AI does what you asked, but results feel interchangeable. You can iterate endlessly, but you're polishing something built on a shaky foundation. The problem isn't your prompting technique. It's what you're asking the AI to do in the first place.

The Hidden Complexity in Simple Requests

It's 3 PM on a Thursday. Your manager stops by: "Hey, can you quickly put together a preliminary analysis of our Q3 results by end of day? Nothing formal, just something we can discuss in tomorrow's leadership meeting."

You nod. She walks away. And you immediately start thinking through what needs to happen: not consciously, but as a series of automatic decisions unfolding in your mind.

Who's the audience? Leadership team. What do they care about? Big-picture trends and action items, not detail. How polished? Preliminary means rough edges are fine, but the logic needs to be sound. What format? Probably a one-pager with bullets. What's the real question? Not "what happened in Q3" but "what does this mean for Q4 planning?"

By the time you open your laptop, you've made a dozen invisible decisions that will shape everything you produce. Your manager's brief request contained almost none of this context, but you filled it in automatically because you know your organization, your role, and what "preliminary analysis for leadership" actually means in your workplace.

Now imagine asking an AI to do the same thing.

Why AI Produces Generic Outputs

When you prompt an AI with "Write a preliminary analysis of our Q3 results for tomorrow's leadership meeting," the model can't make those invisible decisions. It doesn't know your CFO hates lengthy prose, that leadership meetings run exactly 30 minutes, or that "preliminary" in your organization means "defensible but not perfect."

So it produces the most statistically average version of "a preliminary analysis." The output is grammatically correct, reasonably structured, and generic. This isn't a prompting failure. It's a decomposition failure.

What Humans Know That AI Doesn't

You don't consciously realize you're breaking complex requests into smaller decisions. It happens automatically, informed by professional experience and organizational context.

Humans carry invisible knowledge that shapes every work product: your company's SOPs, preferred templates, unstated norms, political sensitivities, and the specific meaning of terms like "preliminary" or "strategic" in your context. You know that your CEO prefers data visualizations, that leadership is sensitive about customer retention this quarter, and that this analysis exists to justify a budget decision next month.

AI has none of this. It defaults to the most probable version across all possible organizational contexts, which means it's necessarily generic. The gap isn't in the AI's capabilities. It's in the context you haven't provided.

The John vs. Alice Test

Consider writing the same Q3 analysis for two different audiences. Notice that the difference isn't just tone or format but the sequence of decisions you make before writing.

For John (your peer in operations):

  1. Start by identifying operational bottlenecks that affected the quarter
  2. Frame metrics around his department's contribution
  3. Use conversational language he'd use in planning meetings
  4. Structure as a narrative that shows cause and effect
  5. Include specific team examples he'll recognize

For Alice (external client consultant):

  1. Start by establishing strategic context within the industry
  2. Frame metrics around competitive positioning
  3. Use formal language that signals professional credibility
  4. Structure as a framework that shows systematic thinking
  5. Keep internal details vague to protect proprietary information

Same data. Same task label. Completely different sequences of decisions made before you write a single word.

This process of breaking a complex request into an ordered sequence of discrete decisions is task decomposition. It's what separates mediocre AI outputs from genuinely useful ones.

What Task Decomposition Actually Means

Task decomposition is making your invisible decisions visible, and sequencing them, before you prompt.

Instead of asking the AI to "write a Q3 analysis," you first identify what that request requires and in what order:

  • What's the purpose? (Inform budget planning)
  • Who's the audience? (Leadership with limited technical background)
  • What's the real question? (Are we on track for annual targets?)
  • What level of detail? (Trends, not line items)
  • What decisions does this support? (Q4 resource allocation)
  • What format? (One-page executive summary)
  • What tone? (Direct and actionable)
  • What context matters? (Previous quarter showed concerning churn)

Each decision changes what you'd produce. None are obvious from the original request. Once you've decomposed the task, you're no longer asking the AI to make judgment calls it can't make. You're giving it a specification of exactly what success looks like in your context.
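As a minimal sketch, the checklist above can be captured in a small helper that turns those decisions into an explicit specification. The `Decomposition` structure and its field names here are illustrative assumptions, not part of any prompting library:

```python
from dataclasses import dataclass, fields

# Hypothetical structure: each field is one of the invisible decisions
# made explicit before prompting. The names are illustrative only.
@dataclass
class Decomposition:
    purpose: str
    audience: str
    real_question: str
    detail_level: str
    decisions_supported: str
    output_format: str
    tone: str
    context: str

def build_prompt(task: str, d: Decomposition) -> str:
    """Assemble a decomposed task into an explicit prompt specification."""
    lines = [f"Task: {task}", "", "Requirements:"]
    for f in fields(d):
        label = f.name.replace("_", " ").capitalize()
        lines.append(f"- {label}: {getattr(d, f.name)}")
    return "\n".join(lines)

q3 = Decomposition(
    purpose="Inform budget planning",
    audience="Leadership with limited technical background",
    real_question="Are we on track for annual targets?",
    detail_level="Trends, not line items",
    decisions_supported="Q4 resource allocation",
    output_format="One-page executive summary",
    tone="Direct and actionable",
    context="Previous quarter showed concerning churn",
)
print(build_prompt("Write a preliminary analysis of our Q3 results", q3))
```

The point isn't the code itself but the forcing function: a structure with required fields won't let you skip a decision the way a blank chat box will.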

The Q3 Analysis, Decomposed

Original request: "Can you put together a preliminary analysis of our Q3 results by end of day?"

After decomposition:

  1. Real question: Are we on track for annual revenue targets despite Q3's slower growth?
  2. Audience: Five executives deciding on Q4 marketing spend tomorrow
  3. Scope: Revenue trends and customer acquisition costs, ignore operational details
  4. Format: One-page memo with three sections: What happened, What it means, What we recommend
  5. Tone: Candid and solution-oriented, acknowledge the challenge, emphasize the path forward
  6. Constraints: Defensible with data, no detailed methodology needed
  7. Success criteria: Leadership can make a confident spending decision after reading this

The fifteen-word request became seven explicit decisions in a clear sequence. You haven't written a prompt yet. You've made visible what was previously implicit.

What Changes When You Decompose First

Less generic content. The AI works from your specific requirements instead of averaging across all possible Q3 analyses. Output reflects your organization's priorities, not statistical averages.

No "AI voice." That distinctive flatness comes from the model hedging across multiple contexts. Explicit context through decomposition lets the AI commit to a specific approach.

Fewer revisions. Most revision stems from misaligned expectations. Decomposition aligns them upfront. You're refining something fundamentally sound, not fixing the wrong solution.

Common Decomposition Mistakes

Decomposing too granularly. Identify decisions, not scripts. "Determine audience needs" is useful. "Write an opening sentence acknowledging Q2 challenges" scripts the output and leaves no room for the AI's strengths.

Skipping "obvious" context. Your industry terminology, leadership's preferred frameworks, what "preliminary" means in your organization: none of this is obvious to an AI. Make it explicit.

Decomposing format instead of thinking. "Three paragraphs with bullets" is formatting. "Identify the strategic implication of each metric" is the decision that shapes meaningful output.

Forgetting the why. Each decision should connect to what makes this task succeed. If you can't explain why a decomposition step matters, skip it.

The Foundation for Everything Else

Task decomposition isn't separate from what you've learned. It's the missing step that makes everything more effective.

Roles work better when you've decomposed enough to know which role helps. Examples hit harder when you've identified what aspect of execution needs demonstration. Iteration becomes faster when you're refining a well-decomposed foundation.

Most importantly, decomposition shifts your relationship with AI from "assistant who should figure this out" to "tool I'm directing precisely." The AI does the heavy lifting, but you do the thinking that determines whether that lifting produces something genuinely useful.

What Comes Next

Once you've broken a complex task into discrete decisions, you can turn those decisions into a sequence of connected prompts, each building on the last, rather than accomplishing everything in a single interaction.

In the next chapter, you'll learn how to apply decomposition to get AI responses that are more aligned with your desired output and that, interestingly, feel less "AI generated."
