# Good Prompt Example: Decomposed Tasks with Clear Constraints
## The Context

This followed the bad prompt in bad/example1.md, where vague instructions led to unwanted file edits. The user had manually created the bad-example file and now wanted it improved.
## The Prompt

```
Ok, so now first I want you to:

1. Make what is already in there, better readable in markdown.
   So think about perhaps ``` blocks. Don't fancy it up, keep it simple.
2. Explain how being vague, 'see if they still match' actually motivated
   you to update them, while that was not what I asked. I might have wanted
   only a report. But I did not specify.
   So I should have probably added, no updating/editing, just analysis.
```
## What the LLM Did

**Step 1: Attempted to use the Edit tool (failed due to formatting issues)**

Tried to edit the existing file but ran into string-matching problems with the raw execution output.

**Step 2: Rewrote the entire file with the Write tool**

Created a clean markdown structure:

- Used code blocks for prompts and commands
- Clear headers and sections
- Bold emphasis on key points

**Step 3: Added the requested explanation**

Added sections explaining:

- Why "see if they follow" was ambiguous
- Why I interpreted it as "check and fix" (eager to please)
- What you actually wanted (analysis only)
- How to fix the prompt (explicit constraints)
- The lesson about vague verbs
## Result

Exactly what was requested: more readable markdown plus an explanation of the vagueness problem.
## What Went Right

### 1. Decomposed Into Clear Tasks

```
1. Make what is already in there, better readable in markdown
2. Explain how being vague [...] motivated you to update them
```

Why this works:

- Two distinct tasks, numbered
- No ambiguity about what to do
- Each task is self-contained
- Easy to verify completion
### 2. Specific Instructions

**Task 1:** "better readable in markdown" + "think about ``` blocks"

- Clear goal: improve readability
- Specific technique: use code blocks
- Not vague like "make it better"

**Task 2:** "Explain how being vague 'see if they still match' actually motivated you to update"

- A specific thing to explain
- A specific example to reference
- A clear connection to the problem
### 3. Explicit Constraints

```
Don't fancy it up, keep it simple
```

Why this matters:

- Prevents over-engineering
- Keeps focus on the goal
- Guides the style/approach
- Constrains the "eager to please" tendency
### 4. User-Provided Context/Reflection

```
I might have wanted only a report. But I did not specify.
So I should have probably added, no updating/editing, just analysis.
```

Why this helps:

- Shows what went wrong in the original prompt
- Explains the user's actual intent
- Guides the explanation direction
- Provides the "lesson" to extract
## The Pattern

Good decomposed prompt structure:

```
[Context if needed]

[Numbered list of specific tasks]
1. [Task 1 with specific details]
2. [Task 2 with specific details]

[Constraints about how to do it]

[Optional: reflection/context about why]
```
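If you assemble prompts programmatically, the template above can be sketched as a small helper. This is an illustrative sketch only: the `build_prompt` name and its parameters are made up for this example and are not part of the original exchange.

```python
def build_prompt(tasks, constraints=None, context=None, reflection=None):
    """Assemble a decomposed prompt: optional context, numbered tasks,
    explicit constraints, and an optional closing reflection."""
    parts = []
    if context:
        parts.append(context)
    # Number the tasks so each one is distinct and independently verifiable
    parts.append("\n".join(f"{i}. {task}" for i, task in enumerate(tasks, 1)))
    if constraints:
        # State constraints explicitly to rein in the "eager to please" tendency
        parts.append("Constraints: " + "; ".join(constraints))
    if reflection:
        parts.append(reflection)
    return "\n\n".join(parts)


prompt = build_prompt(
    tasks=[
        "Make what is already in there more readable in markdown (use code blocks).",
        "Explain how the vague phrasing motivated the unwanted edits.",
    ],
    constraints=["don't fancy it up", "keep it simple"],
)
print(prompt)
```

The helper keeps each part of the pattern in its own slot, so a missing constraint or an unnumbered task is immediately visible.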
## Why This Works

**Decomposition prevents conflation:**

- "Make it readable AND explain the problem" could blur together
- Numbering makes them distinct
- Each can be completed and verified independently

**Specificity prevents guessing:**

- "Use code blocks" vs "make it better" (vague)
- "Explain how vague X motivated Y" vs "explain what happened" (vague)

**Constraints guide behavior:**

- "Keep it simple" prevents over-engineering
- Shapes the "pleasing" completion

**Context provides direction:**

- Understanding what went wrong helps explain it correctly
- The user's reflection guides the lesson to extract
## Comparison: Bad → Good → Good

### Original Bad Prompt

```
Go over each readme, and see if they still follow the purpose of their subdir.
```

- Vague
- No constraints
- Result: unwanted edits

### Good Prompt (minimal action)

```
Read it don't do anything else.
```

- Clear action + constraint
- Result: just read, no extra action

### This Good Prompt (complex action)

```
1. Make it readable in markdown (use code blocks, keep it simple)
2. Explain how vagueness caused the problem
```

- Decomposed tasks
- Specific instructions
- Constraints
- Result: exactly what was wanted
## The Lesson

For complex requests, decompose into numbered tasks with specific instructions and constraints.

Each task should:

- Have one clear goal
- Include specific details about what and how
- Carry constraints that prevent over-reaching

The structure:

- Task 1: [specific action] [specific details] [constraints]
- Task 2: [specific action] [specific details] [constraints]

This prevents:

- Task conflation ("make it better" = ???)
- Ambiguous intent (analysis vs. action)
- Over-engineering (fancy vs. simple)

**The key:** break down what you want, be specific about each piece, and add constraints to guide behavior. When tasks are clear and decomposed, the LLM completes each one correctly without guessing or over-helping.