llm-lore
```mermaid
graph TD
  A[What is an LLM?] --> B[Pattern matcher]
  A --> C[Autocomplete]
  A --> D[Eager to please]
  E[What isn't an LLM?] --> F[Not truthful by default]
  E --> G[Not a database]
  E --> H[Not conscious]
  B --> I[How to work with it]
  C --> I
  D --> I
  F --> I
  G --> I
  H --> I
  I --> J[Direct them]
  I --> K[Decompose tasks]
  I --> L[Create boundaries]
```
Overarching knowledge ABOUT LLMs as a technology. Not lessons learned from using them, but a fundamental understanding of what they are and aren't.
Contents
Root level - Conceptual explanations
- llm-is.md - What LLMs fundamentally are
- llm-is-not.md - What LLMs are not (and why that matters)
examples/ - Real-world illustrations of concepts
- forgetting.md - LLMs forget context, so context management is paramount (see the sketch below)
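A minimal sketch of what "context management" means in practice: keep the system prompt pinned and drop the oldest turns once the history exceeds a rough token budget. The message format, the `trim_history` helper, and the 4-characters-per-token estimate are assumptions for illustration, not any provider's API.

```python
# Sketch of context management, assuming a simple role/content message format.

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token (assumption)."""
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int = 8000) -> list[dict]:
    """Return the messages that fit the budget, always keeping the system prompt."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    kept: list[dict] = []
    used = sum(estimate_tokens(m["content"]) for m in system)
    # Walk backwards so the most recent turns survive the cut.
    for m in reversed(rest):
        cost = estimate_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))

history = [
    {"role": "system", "content": "You are a careful coding assistant."},
    {"role": "user", "content": "Refactor this module..."},
    {"role": "assistant", "content": "Here is a first pass..."},
]
print(trim_history(history, budget=8000))
```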
What goes here
- Meta-knowledge about what LLMs fundamentally are
- What they are NOT (and why that matters)
- How they actually work at a conceptual level
- Core principles that explain their behavior
- The mental models you need to work with them effectively
Examples
- What is an LLM? (pattern matcher, autocomplete, eager to please)
- What isn’t an LLM? (not truthful by default, not a database, not conscious)
- How to work with these realities (direct them, decompose tasks, create boundaries)
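The last item above ("direct them, decompose tasks, create boundaries") is the kind of concept a short sketch can make concrete. The `complete` function below is hypothetical, standing in for whatever model call you actually use; the prompts and helpers are illustrative assumptions, not a prescribed method.

```python
# Illustrative only: "complete" is a placeholder for your LLM call, not a real API.

def complete(prompt: str) -> str:
    raise NotImplementedError("Plug in your model call here.")

def summarize_module(source: str) -> str:
    # Direct the model: one narrow job, an explicit output shape.
    prompt = (
        "Summarize what this module does in three bullet points.\n"
        "Do not suggest changes. Do not mention functions that are not shown.\n\n"
        f"{source}"
    )
    return complete(prompt)

def review_module(source: str) -> str:
    # Decompose the task: summarize first, then review against that summary,
    # with a boundary on what the model may comment on.
    summary = summarize_module(source)
    prompt = (
        "Given this summary of the module:\n"
        f"{summary}\n\n"
        "List concrete bugs or risks in the code below. "
        "Only flag issues you can point to by line; do not speculate.\n\n"
        f"{source}"
    )
    return complete(prompt)
```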
Format
Clear explanations of overarching concepts. This isn’t “I tried X and it failed” - it’s “Here’s what LLMs fundamentally are and how that shapes everything.”