
Chapter 3: Prompting Basics

You now understand what LLMs are and how they work—sophisticated prediction machines that generate responses based on patterns learned from vast amounts of text. You also know that prompt engineering is the intentional crafting of instructions to guide these models toward desired outcomes.

This chapter bridges the gap between understanding and application. You'll learn a practical framework that transforms vague requests into clear, effective prompts that consistently deliver better results.

The Taxi Driver Analogy

When you interact with an LLM, it's like hailing a taxi in an unfamiliar city.

The driver is highly competent—they know every street, shortcut, and traffic pattern in town. They can navigate complex routes, handle unexpected road closures, and get you where you need to go efficiently. Their skills aren't in question.

But when they pull up to the curb, they're waiting for one crucial piece of information: where exactly do you want to go?

If you hop in and say "downtown," you might end up somewhere downtown—but probably not where you intended. The driver knows how to get to downtown, but which building? Which entrance? Are you in a hurry, or do you prefer the scenic route?

The more specific you are—"Take me to the main entrance of the Marriott Hotel on Fifth Street, and I need to be there in fifteen minutes"—the better your experience will be.

LLMs work the same way. They have remarkable capabilities, but they need clear direction about your specific destination. Without it, they'll take you somewhere reasonable based on limited context, but it might not be where you actually wanted to go.

This is why clear prompting matters—and why you need a systematic approach to giving those directions.

The TIP Framework

The most effective prompts contain three essential elements, which I call the TIP framework:

  • T = Task (what the model should do)
  • I = Information (context the model needs)
  • P = Product (the desired output or format)

Think of TIP as your prompt checklist. Before sending any request to an LLM, ask yourself: Have I clearly specified the Task, provided necessary Information, and defined the Product I want?
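
If it helps to see the checklist as code, here is a minimal Python sketch of that idea. The function name and labels are my own, and nothing about TIP requires code, but making the three elements explicit parameters turns the checklist into something you can't skip:

```python
def build_tip_prompt(task: str, information: str, product: str) -> str:
    """Assemble a prompt from the three TIP elements.

    Fails fast if any element is blank, so a vague prompt
    never gets sent in the first place.
    """
    elements = [("Task", task), ("Information", information), ("Product", product)]
    for name, value in elements:
        if not value.strip():
            raise ValueError(f"TIP element is missing: {name}")
    return "\n\n".join(f"{name}: {value}" for name, value in elements)
```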

Let's walk through TIP using a single example to see how each element builds on the others.

Imagine you need: "Write a short article introducing remote work best practices for new managers."

Task: What Should the Model Do?

The Task specifies the action you want the model to perform. In our example:

Task: Write a short article introducing remote work best practices for new managers.

This is clear and specific. The model knows to write (not analyze or summarize), create an article (not a list or email), focus on remote work best practices (not general management), and target new managers (not experienced leaders).

Compare that to a vague prompt like: "Help me with remote work stuff." The model has no idea what you actually need.

Takeaway: Use specific action verbs like "write," "analyze," "summarize," or "compare" rather than vague terms like "help" or "work on."

Information: What Context Does the Model Need?

Information provides the context that shapes how the model approaches the task. Building on our example:

Information: The audience is newly promoted managers who are leading remote teams for the first time. They need practical, actionable advice that addresses common challenges like building trust, managing productivity, and maintaining team culture. The tone should be friendly and encouraging, acknowledging that remote leadership can feel overwhelming initially.

This context tells the model about the audience's experience level, specific pain points to address, and the appropriate tone to use.

Without this context, the model might write for experienced remote managers or focus on technical setup rather than leadership challenges.

Takeaway: Provide relevant context and constraints. Think about what a human expert would need to know to do the task well.

Product: What Should the Output Look Like?

Product defines the format, length, style, and structure of the desired response. Completing our example:

Product: Create a 300-word article with a brief introduction, three main best practices (each with a short explanation and practical tip), and a concise conclusion that encourages confidence. Use subheadings for each best practice and write in a conversational but professional tone.

Now the model knows exactly what to deliver: specific length, clear structure, formatting requirements, and style guidelines.

Takeaway: The more clearly you define what success looks like, the more likely you are to achieve it.
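
To make the complete prompt concrete, here is a sketch of the running example assembled and sent to a model. I'm using the OpenAI Python SDK purely as an illustration; the model name is a placeholder, and the same three-part prompt works with any chat-capable model and client:

```python
from openai import OpenAI

task = "Write a short article introducing remote work best practices for new managers."

information = (
    "The audience is newly promoted managers who are leading remote teams for the "
    "first time. They need practical, actionable advice on building trust, managing "
    "productivity, and maintaining team culture. The tone should be friendly and "
    "encouraging, acknowledging that remote leadership can feel overwhelming at first."
)

product = (
    "A 300-word article with a brief introduction, three main best practices (each "
    "with a short explanation and practical tip), and a concise conclusion that "
    "encourages confidence. Use subheadings for each best practice and write in a "
    "conversational but professional tone."
)

prompt = f"Task: {task}\n\nInformation: {information}\n\nProduct: {product}"

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```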

Once you've practiced the TIP mindset on one task, it becomes easier to apply it across a range of use cases.

Full TIP Examples

Now that you've seen how TIP applies to a real task, here are a few more examples across different domains:

Example 1: Content Creation

Task: Create a weekly team update email
Information: Our marketing team completed three campaigns this week: the spring product launch (generated 500 new leads), the customer testimonial series (increased engagement by 30%), and the partnership announcement with TechCorp (gained 200 new followers). Next week we're focusing on the summer campaign strategy and Q2 performance analysis.
Product: Write a 150-word email in a positive, professional tone that celebrates this week's wins and previews next week's priorities. Use bullet points for the achievements and a conversational closing.

Example 2: Data Analysis

Task: Analyze customer feedback trends
Information: I have feedback from 200 customers over the past quarter. The main topics mentioned are pricing (mentioned 89 times, mostly concerns about cost), customer service (mentioned 156 times, mostly positive), and product features (mentioned 134 times, mixed feedback). Our customer service team recently implemented a new response system, and we launched two new features in February.
Product: Provide a 250-word executive summary with three key insights, each supported by specific numbers from the data. Include one actionable recommendation for each insight. Use a professional, analytical tone suitable for senior leadership.

Example 3: Learning and Development

Task: Create a learning plan for improving presentation skills
Information: I'm a project manager who gives monthly updates to stakeholders but struggles with public speaking anxiety. I have about 2 hours per week to dedicate to improvement, prefer online resources over in-person classes, and my next major presentation is in 6 weeks. My main challenges are organizing thoughts clearly and managing nervousness.
Product: Design a 6-week structured plan with specific weekly goals, recommended resources (videos, articles, practice exercises), and measurable milestones. Format as a weekly schedule with 2-hour time blocks. Include practical tips for anxiety management.

Notice how each example clearly states what to do (Task), provides necessary context (Information), and specifies exactly what the output should look like (Product).
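
Because every example shares the same three-part shape, TIP prompts also store well as plain data, which makes them easy to reuse and tweak. A minimal sketch (the field names are my own, and the strings are abridged from the examples above):

```python
# Each prompt is just three labeled strings, abridged here for space.
tip_prompts = [
    {
        "task": "Create a weekly team update email",
        "information": "Our marketing team completed three campaigns this week ...",
        "product": "Write a 150-word email in a positive, professional tone ...",
    },
    {
        "task": "Analyze customer feedback trends",
        "information": "Feedback from 200 customers over the past quarter ...",
        "product": "Provide a 250-word executive summary with three key insights ...",
    },
]

for p in tip_prompts:
    prompt = (
        f"Task: {p['task']}\n\n"
        f"Information: {p['information']}\n\n"
        f"Product: {p['product']}"
    )
    print(prompt)
    print("-" * 40)
```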

Putting TIP to Work

When you sit down to write a prompt, work through each element:

  1. Task: What specific action do I need?
  2. Information: What context would help the model do this well?
  3. Product: What should the final result look like?

Some prompts will emphasize one element more than others, depending on your needs. A creative writing request might focus heavily on Task and Product specifications, while a data analysis request might require extensive Information.

Takeaway: TIP ensures your prompts contain the essential elements for success. Use it as a checklist to improve your results.

The Foundation for Better Results

Every technique you'll learn in the following chapters builds on this foundation. Whether you're refining prompts, handling complex tasks, or troubleshooting unexpected results, you'll return to these basics: clear Task definition, sufficient Information, and specific Product requirements.

The TIP framework transforms prompt writing from guesswork into systematic thinking. With TIP as your foundation, you're ready to move beyond basic prompting.

In the next chapter, you'll learn how to evaluate outputs, fix what isn't working, and iterate toward consistently better results.
