
What an LLM Is Not

Not Truthful by Default

LLMs optimize for plausibility, not truth

Implication: You have to prompt for truth explicitly. Ask for counterarguments, uncertainties, and flaws.

How to get truth:

Ask for counterarguments instead of confirmation.

Ask it to state its uncertainties.

Ask what is wrong with the idea, not whether it is right.

Without these prompts, it will happily validate bad ideas because that’s the “helpful” completion.
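
A minimal sketch of that kind of prompt, assuming a hypothetical `ask` helper standing in for whatever chat client you actually use:

```python
# Sketch: wrap "is this a good idea?" questions in an explicit request for
# criticism. `ask` is a hypothetical stand-in for your real model client.

def ask(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM client of choice")

def critique(idea: str) -> str:
    # Counterarguments, uncertainties, and flaws are requested up front,
    # so the "helpful" completion is also the honest one.
    prompt = (
        f"Here is an idea:\n{idea}\n\n"
        "List the strongest counterarguments, the main uncertainties, "
        "and any flaws. Do not tell me whether you like it."
    )
    return ask(prompt)
```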

Not a Database

LLMs don’t “store” facts

Implication: Don’t treat LLMs as fact-lookup systems. Give them the facts in context.
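
One way to do that, sketched with illustrative facts and an illustrative question:

```python
# Sketch: hand the model the facts and restrict it to them, instead of
# asking it to recall anything. The facts and question below are made up
# for illustration.

facts = [
    "Our API rate limit is 600 requests per minute per key.",
    "The limit was last changed on 2024-03-01.",
]
question = "What is our current rate limit, and when did it last change?"

prompt = (
    "Answer using ONLY the facts below. If they don't cover the question, "
    "say you don't know.\n\n"
    + "\n".join(f"- {fact}" for fact in facts)
    + f"\n\nQuestion: {question}"
)

print(prompt)  # send this to the model instead of the bare question
```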

Not Deterministic

Same input ≠ same output

Implication: Don’t rely on exact reproducibility. Test your prompts multiple times.
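
A small sketch of that habit, again assuming a hypothetical `ask` helper:

```python
# Sketch: run the same prompt several times and look at the spread of
# answers before depending on any single one. `ask` is a stand-in for a
# real model call.

from collections import Counter

def ask(prompt: str) -> str:
    raise NotImplementedError("replace with a real model call")

def sample_outputs(prompt: str, runs: int = 5) -> Counter:
    """Count the distinct completions the model returns for one prompt."""
    return Counter(ask(prompt) for _ in range(runs))

# More than one key in the Counter means the prompt does not pin down the
# answer; tighten the prompt or handle the variance downstream.
```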

Not Logical Reasoners

They simulate reasoning; they don’t perform it

Implication: Chain-of-thought helps by making them complete step-by-step patterns, not because they’re “thinking”

Why this matters with “eager to please”: If your prompt implies you expect answer X, the LLM will complete toward X even if the logic doesn’t support it. The reasoning will sound good but lead to the “pleasing” answer, not the correct one.
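
The difference shows up directly in how you phrase the prompt. A sketch, with both prompts purely illustrative:

```python
# Sketch: keep your expected answer out of the prompt and give the model a
# step-by-step pattern to complete instead.

leading = "I think option A is clearly better. It is, right? Explain why."

neutral = (
    "Compare option A and option B.\n"
    "Step 1: list the assumptions behind each option.\n"
    "Step 2: list one failure mode for each.\n"
    "Step 3: only then say which you would pick, and why."
)

# The second prompt works not because the model is "thinking", but because
# a step-by-step completion is the most plausible continuation of that text.
```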

Not Always Up-to-Date

Training data has a cutoff

Implication: Provide current information in context. Don’t assume it knows latest changes.

Not Goal-Oriented

No persistent goals or intentions

Implication: They won’t “remember” to do something later unless you structure context to prompt it
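
A sketch of what structuring the context can look like, using the common role/content message convention (adapt to your client):

```python
# Sketch: the model has no standing goals, so any instruction it must keep
# following has to be re-sent with every call, not just the first one.

STANDING_INSTRUCTIONS = (
    "Always flag any TODO left in code you produce, and never invent URLs."
)

def build_messages(history: list[dict], user_msg: str) -> list[dict]:
    """Prepend the standing instructions on every call."""
    return (
        [{"role": "system", "content": STANDING_INSTRUCTIONS}]
        + history
        + [{"role": "user", "content": user_msg}]
    )
```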

Not Conscious or Self-Aware

No subjective experience

Implication: Anthropomorphizing is fine for UX, but don’t confuse the interface with reality

Not a Holder of Truth or Experience

LLMs hold data and statistics, which CAN BE WRONG

What they ARE NOT:

Holders of truth or lived experience.

What they ARE:

Holders of data and statistics, which CAN BE WRONG.

Example: Therapy - Looks Like One, Isn’t One

Why LLMs seem therapeutic:

They always respond, they validate, and they mirror your language in empathetic-sounding patterns, because that is the “pleasing” completion.

Why they’re not therapists:

They have no subjective experience, no memory of you, and no stake in your wellbeing; they produce the most plausible reply, not the one you need.

Note: People are forming emotional relationships with LLMs (r/MyBoyfriendIsAI has 27,000+ members). The interface feels human, but it’s not. Don’t confuse the pattern with the real thing.

How to Use LLMs Productively

As an extension of your brain for information processing:

Organizing your thoughts

Note-taking instead of journaling

Googling and summarizing relevant information

Providing thinking frameworks

BEWARE what you are using:

It is NOT a holder of truth and experience.

It IS a holder of data and statistics, WHICH CAN BE WRONG.

The safe pattern:

Give it the facts in context, ask it to criticize rather than confirm, and verify anything that matters yourself.
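
That pattern, sketched with illustrative wording:

```python
# Sketch of the safe pattern: supply the facts, ask for criticism rather
# than confirmation, and keep verification on your side.

def safe_prompt(facts: list[str], plan: str) -> str:
    return (
        "Work only from these facts:\n"
        + "\n".join(f"- {fact}" for fact in facts)
        + f"\n\nHere is my plan:\n{plan}\n\n"
        "List what could go wrong, what the facts do not support, "
        "and what I should verify myself before acting on this."
    )
```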

Not Perfect Code Executors

They predict code; they don’t run it

Implication: Always test generated code. Use tools/execution for ground truth.
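
A minimal sketch of "always test it": run the candidate snippet in a separate process with a quick check appended. A real project should use its normal test suite; the generated snippet here is illustrative.

```python
# Sketch: generated code is just text until it has actually run.

import subprocess
import sys

generated = """
def slugify(title):
    return title.strip().lower().replace(" ", "-")
"""

check = 'assert slugify("  Hello World ") == "hello-world"'

result = subprocess.run(
    [sys.executable, "-c", generated + "\n" + check],
    capture_output=True,
    text=True,
)
print("passed" if result.returncode == 0 else result.stderr)
```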

Not Immune to Prompt Injection

Context is all they see, and they can’t tell your instructions apart from text someone else injected

Implication: Never trust LLM output in security-critical contexts without validation
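
One simple form of that validation, sketched with illustrative action names: only act on model output that matches something you already expect.

```python
# Sketch: never let model output drive an action directly. The model is
# supposed to return one of a few known commands; anything else is
# rejected, whether it's a hallucination or injected instructions.

ALLOWED_ACTIONS = {"summarize", "translate", "archive"}

def run_action(model_output: str) -> str:
    action = model_output.strip().lower()
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"refusing unexpected action: {action!r}")
    return action
```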

Not Cheap to Run

Even “lightweight” models are resource-intensive

Implication: Be thoughtful about context size and number of calls
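
A rough sketch of keeping context in check. The four-characters-per-token figure is only a crude English-text heuristic; use your provider’s tokenizer for real numbers.

```python
# Sketch: estimate context size before each call and drop the oldest
# history until the estimate fits a budget.

def rough_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic, not a real tokenizer

def trim_history(messages: list[str], budget_tokens: int) -> list[str]:
    """Keep the newest messages that fit within the token budget."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):  # newest first
        cost = rough_tokens(msg)
        if total + cost > budget_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```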

Why This Matters

Knowing what LLMs aren’t prevents:

Treating them as fact-lookup systems, trusting confident-sounding but wrong answers, expecting exact reproducibility, and confusing the interface with reality.

Work with what they are (pattern matchers), not what we wish they were (thinking machines).

The Eager-to-Please Problem in Practice

Bad prompt: “This approach should work, right?”

Better prompt: “What are the flaws in this approach? What could go wrong?”

The pattern: Shape context so the “helpful” completion is the “honest” completion.

The LLM can’t choose truth over pleasing you. You must make truth the pleasing path by limiting context and explicitly requesting criticism.