auto-slacker

graph LR
    A[Do manual work] --> B[Spend time automating it]
    B --> C[Save time]
    C --> D[Use saved time to automate more]
    D --> C
    C --> E[Do less work]
    E --> F[Have time to automate other things]
    F --> B

What is this?

A collection of LLM wisdom, prompts, and techniques that actually work. Also a template for setting up and managing your own LLM workflows.

This repo is maintained entirely by LLMs through prompts and scripts. No direct human editing. Every change is an example of LLM-driven development, which means the repo itself demonstrates the patterns it documents.

The meta part: you’re looking at both the product (LLM knowledge base) and the process (how to build one with LLMs).

The Prime Directive

Everything in this repository is created, modified, and maintained through LLM prompts or scripts.

To change anything here, prompt an LLM. The repo is both product and process.
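
In practice, "prompt an LLM to change it" can be a single small script. Here is a minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the repo doesn't mandate a provider or model, and the target file and `llm_edit` helper below are hypothetical examples, not part of the repo.

```python
"""Minimal sketch of an LLM-driven edit.

Assumptions: the OpenAI Python SDK (`pip install openai`), an OPENAI_API_KEY
in the environment, and a model name you have access to. Any chat-completion
API would work the same way.
"""
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def llm_edit(path: str, instruction: str, model: str = "gpt-4o-mini") -> None:
    """Send a file plus an instruction to the model and write back its rewrite."""
    target = Path(path)
    original = target.read_text()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": "Rewrite the file as instructed. Return only the new file contents.",
            },
            {
                "role": "user",
                "content": f"Instruction: {instruction}\n\nFile:\n{original}",
            },
        ],
    )
    target.write_text(response.choices[0].message.content)


if __name__ == "__main__":
    # Hypothetical usage: tweak a prompt in the vault without editing it by hand.
    llm_edit("prompt-vault/code-review.md", "Tighten the wording and add a failure-mode checklist.")
```

The point isn't the specific API; it's that every change flows through a prompt, so the change itself becomes another documented example of the workflow.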

Structure

auto-slacker/
├── prompt-vault/      # Battle-tested prompts that actually work
├── llm-lore/          # Wisdom, lessons learned, and war stories
├── context-garden/    # Reusable context snippets and templates
├── token-wisdom/      # Optimization tips, tricks, and techniques
├── rubber-duck-brain/ # Problem-solving patterns and debugging approaches
├── script-kiddies/    # Meta-scripts and recursive automation patterns
└── distillery/        # Refined workflows and polished techniques

Each directory represents a different facet of LLM mastery, from raw prompts to distilled workflows.
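
Because the repo also doubles as a template, here is a minimal sketch for recreating the same layout in your own project. The directory names and descriptions mirror the tree above; the per-directory README stubs are an added convenience, not something the original structure requires.

```python
"""Scaffold the auto-slacker directory layout in a new repo (stdlib only)."""
from pathlib import Path

LAYOUT = {
    "prompt-vault": "Battle-tested prompts that actually work",
    "llm-lore": "Wisdom, lessons learned, and war stories",
    "context-garden": "Reusable context snippets and templates",
    "token-wisdom": "Optimization tips, tricks, and techniques",
    "rubber-duck-brain": "Problem-solving patterns and debugging approaches",
    "script-kiddies": "Meta-scripts and recursive automation patterns",
    "distillery": "Refined workflows and polished techniques",
}


def scaffold(root: str = ".") -> None:
    """Create each directory and drop a one-line README describing its purpose."""
    for name, purpose in LAYOUT.items():
        directory = Path(root) / name
        directory.mkdir(parents=True, exist_ok=True)
        (directory / "README.md").write_text(f"# {name}\n\n{purpose}\n")


if __name__ == "__main__":
    scaffold()
```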

Table of Contents

Core Directories

Key Documents in distillery/

Foundational patterns for organizing your LLM-driven development environment:

Key Documents in llm-lore/

Lessons learned the hard way:

Philosophy

Why “auto-slacker”?

The subdirectories (prompt-vault, llm-lore, context-garden, etc.) are the components of an automated system that does the thinking so you can slack off. The name captures the irony: spend effort now to do less later, recursively, forever.


This README was written by an LLM, naturally.