# Good Prompt Example: Confirmation Before Execution

## The Context

This came after creating `good/example2.md`. The user wanted to document that interaction as another good example.

## The Prompt
```
And then the next one (or at least the combined one that made you
update the bad example1.md), first show me you understand my request
before actually doing it (and wait on my ok)
```
## My Actual Response
Let me make sure I understand what you're asking:
## The Request to Analyze
The prompt that made me update `bad/example1.md`:
````
Ok, so now first I want you to:

1. Make what is already in there, better readable in markdown. So think
   about perhaps ``` blocks. Don't fancy it up, keep it simple.

2. Explain how being vague, 'see if they still match' actually motivated
   you to update them, while that was not what I asked. I might have
   wanted only a report. But I did not specify.

So I should have probably added, no updating/editing, just analysis.
````
## What I Understand You Want Me To Do
Create another example showing this as a **GOOD** prompt because:
1. **It's decomposed**: Two clear, numbered tasks
2. **It's specific**: "Make readable in markdown" + "Explain the vagueness problem"
3. **It includes constraints**: "Don't fancy it up, keep it simple"
4. **It includes reflection**: You explain what went wrong and what should have been specified
This should go in `good/example2.md` showing how:
- Decomposed tasks are clear
- Specific instructions prevent guessing
- Constraints guide the output
- It resulted in me doing exactly what you wanted (reformatting + adding explanation)
Possibly also reference it from the bad example since it's the "fix" that came after.
**Is this what you want me to do?**
## User Confirmation

Yes

## What I Then Did

Created `prompt-vault/good/example2.md` exactly as outlined:
- Showed the decomposed prompt
- Explained what I did
- Analyzed why it worked (decomposition, specificity, constraints)
- Extracted the pattern and lesson
## What Went Right

### 1. Validation Layer Prevents Mistakes

Before confirmation:
- LLM interprets the request
- Shows its understanding
- Waits for approval
After confirmation:
- LLM executes with confidence
- No wasted work on wrong interpretation
- User has already verified the approach
### 2. The Pattern: Understand → Confirm → Execute

```
Step 1: "Let me make sure I understand..."
Step 2: [Detailed outline of interpretation]
Step 3: "Is this what you want me to do?"
Step 4: Wait for user confirmation
Step 5: Execute only after "Yes"
```
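The five steps above can be sketched as a small gate function. This is a minimal sketch, not a real LLM integration: `ask_user` and `execute` are hypothetical callables supplied by the caller.

```python
def confirm_then_execute(outline, ask_user, execute):
    """Run `execute` only after the user approves the stated plan.

    `ask_user` and `execute` are hypothetical stand-ins: in a real
    chat loop they would talk to the user and the LLM respectively.
    """
    # Steps 1-3: articulate the interpretation and ask for approval
    print("Let me make sure I understand what you're asking:")
    print(outline)
    answer = ask_user("Is this what you want me to do? ")

    # Steps 4-5: execute only after an explicit "Yes"
    if answer.strip().lower().startswith("yes"):
        return execute()
    return None  # interpretation rejected; no work wasted
```

The gate returns `None` when the plan is rejected, so no execution effort is spent on a wrong interpretation.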
Why this works:
- Catches misinterpretations before execution
- User can correct course early
- Prevents "eager to please" behavior from heading in the wrong direction
- Cheap to verify, expensive to redo
### 3. Forces Better Planning (The Observer Effect)

Key insight: Asking for a plan improves the plan itself.

As in quantum physics, where observation changes a system's state, asking the LLM to articulate its understanding changes that understanding.
Why this happens:
Without confirmation request:
- LLM goes straight to execution
- Interpretation happens implicitly
- No forced reflection on approach
- Potential misunderstanding only caught after work is done
With confirmation request:
- LLM must articulate its understanding
- Forces explicit reasoning about the task
- Must structure the interpretation clearly
- Self-correction happens during articulation
- User sees the plan before work begins
The observer effect:
- Asking “show me you understand” forces the LLM to organize its interpretation
- Articulation reveals gaps or errors in understanding
- The act of explaining the plan improves the plan
- Even if you don’t change anything, the plan is better for having been stated
Example from this interaction:
Without “show me you understand”:
- I might have created `example2.md` with vague analysis
- Might have misunderstood which prompt to document
- Would realize error only after writing it
With “show me you understand”:
- Forced to articulate: which prompt, what structure, why it’s good
- Revealed my interpretation explicitly
- You could verify before I spent effort
- The act of outlining improved my understanding
### 4. Connects to llm-lore Concepts
“You Are the Director”:
- Director reviews the scene plan before shooting
- Cheaper to correct on paper than on film
- Confirmation is the script review
“Eager to Please”:
- Without confirmation, LLM completes toward what seems helpful
- With confirmation, user verifies “helpful” = “correct”
- Prevents misguided helpfulness
“Fresh Conversation Test”:
- Confirmation forces self-contained explanation
- Must articulate all context and understanding
- Can’t rely on vague shared context
“Create Boundaries Between Tasks”:
- Confirmation is a boundary: planning phase vs execution phase
- Clean separation between understanding and doing
- Each phase has clear completion criteria
## The Lesson

For complex or ambiguous requests, add a confirmation step.

## When to Use This Pattern

Use confirmation when:
- Request is complex or multi-part
- Multiple interpretations are possible
- The work would be expensive to redo
- You want the LLM to think carefully about approach
Structure:

```
"[Your request], but first show me you understand what I'm asking
before actually doing it (and wait on my ok)"
```
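The template can be applied mechanically. A trivial sketch of a helper that wraps any request with the confirmation clause (the function name `with_confirmation` is a hypothetical choice, not part of any library):

```python
def with_confirmation(request: str) -> str:
    """Append the confirmation clause from the template above to a request."""
    return (request + ", but first show me you understand what I'm asking "
            "before actually doing it (and wait on my ok)")
```

For example, `with_confirmation("Rewrite the README")` yields a prompt that asks for the plan before any rewriting happens.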
## Why It Works
Prevents wasted iterations:
- Catch misunderstandings before execution
- Cheap to verify interpretation
- Expensive to redo completed work
Improves the plan itself (Observer Effect):
- Articulation forces clear thinking
- Revealing the plan exposes gaps
- Even without changes, stated plan > implicit plan
- The act of explaining improves understanding
Gives you control:
- You review the approach before work begins
- Can redirect before effort is spent
- Maintains you as the director
## The Quantum Observation Parallel
In quantum physics, observing a particle changes its state.
In LLM interactions, asking for a plan changes the plan:
Unobserved (no confirmation):
- Interpretation exists in superposition of possibilities
- Collapses to one interpretation during execution
- You only see the result, not the reasoning
Observed (with confirmation):
- Interpretation must collapse to explicit state
- Articulation forces coherent reasoning
- Act of observation improves the coherence
- You see and can influence the state before action
The insight: The prompt "show me you understand" doesn't just verify understanding; it actively improves it by forcing it to be explicit.
## The Pattern in Practice

```
User: [Complex/ambiguous request] + "first show me you understand"
LLM:  "Let me make sure I understand..." [detailed outline]
LLM:  "Is this what you want me to do?"
User: "Yes" / "No, actually..." / "Yes, but also..."
LLM:  [Executes with verified understanding]
```

Result: Better interpretation + verified approach + less wasted work
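The full exchange, including the "No, actually..." correction branch, can be sketched as a loop that re-plans until the user approves. All names here are hypothetical stand-ins for a live chat loop: `make_outline(feedback)` plays the LLM, `get_reply(outline)` plays the user.

```python
def pattern_in_practice(make_outline, get_reply, execute, max_rounds=3):
    """Loop: outline -> confirm -> (revise | execute)."""
    feedback = None
    for _ in range(max_rounds):
        outline = make_outline(feedback)  # LLM: "Let me make sure I understand..."
        reply = get_reply(outline)        # User: "Yes" / "No, actually..."
        if reply.strip().lower().startswith("yes"):
            return execute(outline)       # Execute with verified understanding
        feedback = reply                  # Revise the plan and re-confirm
    return None  # never approved; nothing executed
```

Each rejected round feeds the correction back into the next outline, so the plan converges on what the user actually wants before any work begins.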
## Comparison Across All Three Good Examples

### Example 1: Minimal Action

```
"Read it don't do anything else"
```
- Explicit constraint prevents over-action
- Clear boundary on what NOT to do
### Example 2: Complex Decomposed Action

```
1. Task A [details] [constraints]
2. Task B [details] [constraints]
```
- Numbered tasks prevent conflation
- Specificity prevents guessing
### Example 3: Verified Action (This Example)

```
"[Request], but first show me you understand"
```
- Confirmation prevents misinterpretation
- Observation improves the plan itself
**Together they show:** Constraints (Ex1) + Decomposition (Ex2) + Verification (Ex3) = clear, correct execution