# token-wisdom
```mermaid
graph TD
    A[Prompt costs $0.02] --> B[Spend 2 hours optimizing it]
    B --> C[Save 500 tokens]
    C --> D[Now costs $0.015]
    D --> E{Was this worth it?}
    E -->|No| F[Learn when optimization matters]
    E -->|Actually yes| G[Prompt runs 10,000 times]
    F --> H[Optimize the right things]
    G --> H
    H --> I[Save actual money]
    I --> J[Use savings to optimize more]
    J --> H
```
Optimization tips, tricks, and techniques. Also: knowing when optimization actually matters.
## What goes here
- Token-saving patterns that don’t sacrifice quality
- Techniques for staying under context limits
- When to use which model/size
- Batching and caching strategies
- Measurements and benchmarks
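As a taste of the caching strategies above, here is a minimal sketch of prompt-response caching: identical prompts hit the model (and spend tokens) only once. The `call_model` parameter is a hypothetical stand-in for whatever client you actually use.

```python
import hashlib

# Cache keyed by a hash of the prompt text.
_cache: dict[str, str] = {}

def cached_call(prompt: str, call_model) -> str:
    """Return a cached response for identical prompts; call the model at most once.

    `call_model` is a placeholder for a real API client -- an assumption,
    not a real library function.
    """
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)
    return _cache[key]
```

Exact-match caching like this only pays off when prompts repeat verbatim; templated prompts with volatile fields (timestamps, user IDs) defeat it.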
## The wisdom part
Not all optimization is worth it. Sometimes spending $0.50 in tokens is cheaper than spending an hour optimizing. This directory is about knowing the difference.
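The tradeoff can be made concrete with a break-even calculation: how many calls before the token savings repay the engineering time? The numbers below mirror the diagram's scenario; the price and hourly rate are illustrative assumptions.

```python
def break_even_calls(tokens_saved: int,
                     price_per_1k_tokens: float,
                     hours_spent: float,
                     hourly_rate: float) -> float:
    """Number of calls before token savings pay back the time spent optimizing."""
    savings_per_call = tokens_saved / 1000 * price_per_1k_tokens
    return (hours_spent * hourly_rate) / savings_per_call

# The diagram's scenario: 500 tokens saved at an assumed $0.01 per 1k tokens
# ($0.005 per call), after 2 hours of work at a made-up $50/hour.
calls = break_even_calls(500, 0.01, 2, 50)
print(f"Pays off after {calls:,.0f} calls")  # 20,000 calls
```

If the prompt runs fewer times than that, the optimization lost money; that is the whole point of the "wisdom" part.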