Behind the Scenes · Jan 10, 2026 · 8 min read

How We Built a Library of 100+ Expert Frameworks

Building ClawFu started with a simple observation: every time I loaded Claude with a specific framework, the outputs got dramatically better.

Position a product using April Dunford's method? Way better than generic positioning advice. Write copy using Eugene Schwartz's awareness levels? Actually converts. Run a negotiation using Chris Voss's techniques? Tactical and specific.

So we started building. One framework at a time. Here's what we learned.

Phase 1: Just Summarize the Book (Didn't Work)

Our first attempt was obvious: summarize the key concepts from great books and feed them to the AI.

The results were... fine. Better than nothing. But not actually useful.

The AI would parrot back concepts without applying them. "According to Cialdini, reciprocity is one of six principles of influence..." Great, but what do I actually do?

Lesson: Concepts aren't enough. You need methodology.

Phase 2: Add Step-by-Step Instructions (Better)

Version two included explicit instructions: "First do this, then do that, then synthesize..."

This helped. The AI started producing structured outputs that actually followed a process.

But something was still missing. The outputs were mechanical. They followed the steps but missed the judgment calls that make frameworks actually work.

Lesson: Process without examples produces mechanical output.

Phase 3: Include Real Examples (Much Better)

The breakthrough came when we added extensive examples. Not hypothetical examples — real ones.

"Here's how a SaaS company used this framework." "Here's a positioning statement for a B2B tool." "Here's what a bad cold email looks like and why it fails."

Suddenly the AI understood not just what to do, but what good looked like. The outputs had judgment.

Lesson: Examples teach quality. Instructions teach structure. You need both.

Phase 4: Add Anti-Patterns (The Secret Sauce)

The final evolution was adding what NOT to do.

"Don't start with features." "Avoid these weak words." "Never use this structure because..."

Anti-patterns turned out to be as important as patterns. They helped the AI avoid the common mistakes that make output feel generic.

Lesson: Knowing what to avoid is as important as knowing what to do.

The Final Skill Structure

After hundreds of iterations, we settled on this format:

  1. Methodology Overview — The mental model behind the framework
  2. When to Use — Triggers that indicate this skill applies
  3. Step-by-Step Process — Explicit instructions with decision points
  4. Examples — 3+ real examples with full outputs
  5. Templates — Fill-in-the-blank formats for common use cases
  6. Anti-Patterns — What to avoid and why

This structure consistently produces expert-level output across different AI models and use cases.

What We Learned About Scale

Building 100+ skills taught us a few things:

Quality beats quantity. One great positioning skill beats five mediocre strategy skills. We regularly cut skills that weren't producing excellent output.

Testing is everything. Every skill gets tested across multiple scenarios before release. If it doesn't consistently produce better output than generic prompting, we keep iterating.

User feedback matters. Some of our best improvements came from users who tried skills in contexts we hadn't considered. "This doesn't work well for hardware products" → iterate → now it does.

The Ongoing Work

ClawFu isn't done. We're constantly:

  • Adding new skills (user requests welcome)
  • Improving existing ones based on feedback
  • Testing against new AI models as they're released
  • Exploring new categories (video, audio, automation)

The goal is simple: make expert methodologies accessible to anyone using AI.

Curious about the library? Browse all skills → or see the latest updates →