Domain Expertise as Executable Configuration: Why Knowledge That Runs Beats Docs That Rot

How I accidentally discovered that teaching AI assistants your expertise keeps knowledge fresh better than wikis ever could


About Me: I'm a business and product executive with zero coding experience. I've spent my career building products by working with engineering teams at Amazon, Wondery, Fox, Rovi, and TV Guide, but never wrote production code myself. Until recently.

Frustrated with the pace of traditional development and inspired by the AI coding revolution, I decided to build my own projects using AI assistants (primarily Claude Code, Codex, and Cursor). This blog post is part of that journey—documenting what I've learned building real production systems as a complete beginner.


TL;DR

Configuration that executes in the context of work stays fresh. Documentation that sits unused rots. I accidentally discovered this building Claude Code "skills"—structured knowledge that gets actively applied while coding instead of passively referenced.

Key Learnings:

- Knowledge that gets applied during work stays accurate; knowledge that sits unused rots.
- AI assistant "skills" turn documentation into configuration that executes: the assistant applies your patterns on every task.
- Your assistant can maintain the skills too; you review and approve changes instead of hand-editing docs.


The Problem I Didn't Know I Had

When you're a solo founder building production systems with AI assistants, you face a weird version of the "knowledge transfer" problem: you're constantly context-switching between domains.

One hour I'm debugging AWS deployments. Next hour I'm designing UX for creative tools. Then I'm analyzing calendar availability patterns.

Traditional approach? Write docs. Keep a wiki. Reference it when you need to remember.

Reality? I never read my own docs. They go stale immediately. And when I come back to a domain after 2 weeks, I've forgotten the edge cases.

Then I discovered something counterintuitive: The best way to capture domain expertise isn't documentation. It's configuration that executes.


What I Built (And Why It Worked)

Over the past month, I've built three "skills" for Claude Code—not code, but structured knowledge configurations that Claude uses when helping me work. Think of them as "here's what an expert in X would know" captured in markdown.

Example 1: Deployment Verification (My Pain Point)

The problem: I'd run cdk deploy, see "✅ Stack updated", and think I was done. Then I'd discover the Lambda was in a Failed state. Or the ECS task was stuck in PENDING. Or the image had pushed to ECR but was 0 bytes.

Traditional solution: Write a deployment checklist wiki page.

What I built instead: A Claude Code skill that knows deployment verification.

It's not code—it's knowledge about what to check:

# Deployment Verifier Skill

CRITICAL RULE: NEVER claim deployment success without verification.

For Lambda deployments, check:
- Function state is Active (not Pending/Failed)
- LastUpdateStatus is Successful
- Code was recently modified (timestamp matches deployment)
- No errors in CloudWatch logs
- Event sources are Enabled

For ECR image pushes, check:
- Image with specified tag exists
- Image was pushed recently (within expected timeframe)
- Image size is reasonable (not 0 bytes)
- Image manifest is valid

...

Result: Now when I deploy, Claude actively stops me from saying "deployed successfully" until we've verified actual resource states. It's like having a paranoid DevOps engineer pair-programming with me.

Key insight: This "documentation" gets used every single deployment. It can't go stale because Claude Code is applying it in real-time. When something breaks or AWS changes, Claude Code updates the skill based on what we learn. I just review the changes—I'm not manually maintaining docs.
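Under the hood, the checks in that skill map to ordinary AWS CLI calls. Here's a minimal sketch of what the verification looks like (the function and repository names are placeholders, not my actual stack):

# Lambda: state should be Active, last update Successful, timestamp recent
aws lambda get-function-configuration \
  --function-name my-function \
  --query '[State, LastUpdateStatus, LastModified]'

# ECR: the tagged image exists, was pushed recently, and isn't 0 bytes
aws ecr describe-images \
  --repository-name my-repo \
  --image-ids imageTag=latest \
  --query 'imageDetails[0].[imagePushedAt, imageSizeInBytes]'

# CloudWatch: no errors in the last 10 minutes (start time is epoch milliseconds)
aws logs filter-log-events \
  --log-group-name /aws/lambda/my-function \
  --filter-pattern ERROR \
  --start-time "$(($(date +%s) - 600))000"

The skill doesn't contain these commands. It contains the knowledge of what to check, and Claude Code generates the appropriate calls each time.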

Example 2: Creative Tools UX (Knowledge I Keep Relearning)

The problem: I'm building creative software (audio tools, media production). I have opinions about what makes tools usable vs frustrating, but I keep reinventing the wheel.

Traditional solution: "UX Best Practices" document with screenshots.

What I built instead: UX expertise as a Claude Code skill.

Again, not code—knowledge about patterns:

# Creative Tools UX Expert Skill

When designing features for creative tools, check for:

Non-Destructive Workflows:
✅ GOOD: Adjustment layers, version history, undo at any point
❌ BAD: Overwriting originals, no undo, destructive operations
Example: Photoshop Smart Objects (non-destructive), old batch processors (destructive)

Real-Time Preview:
✅ GOOD: Live preview, scrubbing, before/after toggles
❌ BAD: Long processing with no preview, blocking UI
Example: ElevenLabs voice preview (instant), old TTS tools (batch-only)

Parameter Control vs Black-Box:
✅ GOOD: Manual override, customizable presets, visible parameters
❌ BAD: "Magic" buttons with no tuning, all-or-nothing
Example: iZotope RX "Learn" mode (suggests + allows tuning), some AI tools (no control)

...

Result: When I design new features, Claude Code actively checks them against these patterns while I'm designing. "Hey, this feature has no preview—users will hate that. Here's how ElevenLabs solved it..."

Key insight: The knowledge gets used during every design review. It stays current because Claude Code refines it based on what I learn. I review and approve, but I'm not manually editing markdown files—Claude handles that.

Example 3: Tool Usage Expertise (Using New Utility Tools)

The context: A former colleague is building Port42, a brand-new command-line tool ecosystem with a cool vision: "consciousness computing" where intent becomes executable through conversation. It's still in early development, but the concept is fascinating—your tools evolve with your thinking patterns.

The problem: I started using Port42's md-to-docx tool frequently (it converts Markdown to Word documents). More recently I've been trying cal-avails for calendar availability analysis. Both have dozens of options, edge cases, and usage patterns I keep forgetting.

Traditional solution: README files with examples.

What I'm building: Skills that know how to use these tools expertly.

# Markdown to DOCX Converter Skill

When converting Markdown to Word documents:

Common usage patterns:
- Single file: md-to-docx document.md
- Batch conversion: md-to-docx *.md
- Preserve formatting: headings, bold, italic, lists, code blocks, tables

Best practices:
- Use --output/-o to organize output files
- Check for images and links before conversion
- Preview complex tables (may need manual adjustment)

Common gotchas:
- Custom markdown extensions may not convert
- Code blocks need proper language specification
- Image paths must be relative or absolute

# Calendar Availability Analyzer Skill (Experimental)

When analyzing calendar availability:

For finding meeting slots:
- Use --min-duration to filter by needed time
- Check both --work-hours and extended hours for flexibility
- Look for fragmentation patterns (many short slots = bad schedule)

Common mistakes:
- Forgetting timezone specification
- Not accounting for weekends when using date ranges
- Missing calendar source authentication

...

Result: When I need to convert a blog post to Word format, Claude knows md-to-docx flags and gotchas. I use this constantly. The calendar analyzer is more experimental, but the same pattern applies—capture tool expertise once, apply it every time.
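To make that concrete, here's roughly what using the two tools looks like. The flags are the ones named in the skills above; the file names and values are made up for illustration (and cal-avails is still experimental, so treat this as a sketch):

# Convert a post, using -o to keep output organized
md-to-docx blog-post.md -o exports/blog-post.docx

# Find slots long enough for a real meeting, within work hours
cal-avails --min-duration 30 --work-hours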

Example 4: Tools Breeding Tools

The Problem: My availability was fragmented across Google Calendar (personal) and Prelude Studios (work). Manual cross-checking was error-prone and time-consuming.

First Attempt (Nov 2): Built three composable Port42 tools: cal-avails, prelude-scraper, and prelude-format.

I thought I had a working orchestrator. I didn't.

Discovery (Nov 8): Asked Claude Code "show me my availability for Nov 10-14" → it called cal-avails → returned "No available slots found"

Why? cal-avails had prelude_url in its config but zero integration code. It was incomplete.

The Solution: Built the meta-tool using Port42's reference system:

port42 swim @ai-engineer "create unified-avails orchestrator" \
  --ref p42:/commands/cal-avails \
  --ref p42:/commands/prelude-scraper

Then I applied TDD to finalize the features (date filtering, slot merging, PT/ET timezone display).
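I won't reproduce the actual tests, but a black-box check against the CLI can be as simple as asserting on its output (the grep patterns come from the output format shown below):

out="$(unified-avails --start 2025-11-18 --end 2025-11-18)"
echo "$out" | grep -q 'PT / ' || { echo "FAIL: no merged PT/ET slot"; exit 1; }
echo "$out" | grep -q 'Calendar + Prelude' || { echo "FAIL: sources not merged"; exit 1; }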

The Result:

Before (manual workflow): open Google Calendar, open Prelude Studios, cross-check free slots by hand, convert between PT and ET, and hope nothing slipped through.

After (one command):

unified-avails --start 2025-11-18 --end 2025-11-18

Output:

Tue 11/18:
  - 8:30 AM-4:00 PM PT / 11:30 AM-7:00 PM ET [Calendar + Prelude]

True synthesis. 5 seconds.

Why This Is Executable Configuration:

The knowledge of "how to synthesize work and personal calendars" isn't in a wiki. It's encoded in the orchestrator itself: unified-avails carries the date filtering, slot merging, and timezone logic, and it was bred from the foundation tools that know each calendar source.

This knowledge stays fresh because it runs every time I check my availability. When a source changes or an integration is incomplete (as the Nov 8 discovery showed), the tool fails visibly and gets fixed instead of silently rotting in a wiki.

Tools breeding tools: The orchestrator (unified-avails) was bred from foundation tools (cal-avails, prelude-scraper, prelude-format) through Port42's reference system. The meta-pattern itself is executable configuration.


Why This Works Better Than Documentation

1. Used Daily = Stays Current

Traditional docs rot because they're write-once, read-maybe.

These skills get used every time I work in that domain. If something's wrong, it surfaces immediately: Claude Code notices the break, updates the skill, and I review the change. The knowledge evolves with my actual workflow.

2. Active Application vs Passive Reference

Docs require me to remember to check them. Even AI assistants don't reliably reference docs.

I tried documenting everything in GitHub—microservice standards, API conventions, architectural patterns. Just like humans don't read docs, Claude Code didn't consistently reference them either.

So I moved critical knowledge into skills (like my microservices-standards). Now Claude Code applies the knowledge proactively every time it writes code. It's like having domain experts pair-programming with you who actually remember your conventions.

3. No "Code" to Maintain

This isn't code that can break or that needs refactoring. It's structured knowledge in markdown.

Claude Code handles updates when patterns change. I review and approve. No tests, no dependencies, no deployment pipeline.

4. Scales With Complexity

The more complex the domain, the more valuable this becomes.

Deployment verification has 20+ checks across different AWS services. Creative tool UX has dozens of patterns. Calendar analysis has edge cases I'd never remember.

All captured in ~10 pages of markdown per domain.


The Pattern: Executable Configuration

Here's what I learned: There's a category between "code" and "documentation" that's underutilized:

Configuration that executes in the context of work.

This works for:

- Verification procedures (deployment checks, compliance checks)
- Design patterns and review criteria (like the creative-tools UX skill)
- Tool usage expertise (flags, gotchas, common workflows)
- Team conventions (API standards, architecture patterns)

It doesn't work for:

- Knowledge that's rarely exercised: if nothing triggers it, it rots like any other doc
- One-off facts and decisions, which belong in plain notes


How This Could Apply to Your Domain

After building these, I realized the pattern is broadly applicable:

Healthcare: Compliance Verification Patterns

Instead of a 200-page compliance manual, capture verification patterns: what must be checked before anyone claims a record or process is compliant, applied automatically at the point of work.

Finance: Calculation Verification Rules

Instead of a tax manual, capture verification logic that checks calculations before they're reported as final.

Legal: Process Knowledge

Instead of "how we file in X jurisdiction" docs:

Engineering Teams: Architecture Patterns

Instead of an architecture wiki, capture your standards as skills (like my microservices-standards) so the assistant applies them every time it writes code.


The Business Case

I'm a solo founder, so my "team scaling" is just me context-switching less painfully.

But for actual teams, the math is compelling:

Traditional documentation approach: write the docs, hope people read them, watch them go stale, repeat.

Executable configuration approach: capture the patterns once, let the AI assistant apply them on every task, and fold updates back in as part of normal work.

Break-even: my back-of-envelope estimate is that for teams of 5+ people doing complex domain work, this pays for itself within 3-6 months through faster onboarding alone.

Add in error reduction, consistency gains, and knowledge retention, and first-year ROI could plausibly reach 3-5x.


How to Start

If you're using AI assistants (Claude Code, Copilot, Cursor), start small:

1. Pick One Pain Point

What domain knowledge do you keep re-looking-up?

2. Capture Core Patterns (With Your AI Assistant)

Work with your AI assistant to capture 1-2 pages of markdown:

- The critical rules ("NEVER claim deployment success without verification")
- Good/bad patterns with concrete examples
- Common gotchas and edge cases

Real talk: I don't write these by hand. Claude Code helps me capture patterns based on our conversations. I review and refine.

3. Configure Your AI Assistant

For Claude Code, drop the file into ~/.claude/skills/your-skill-name/SKILL.md (details under "Try It Yourself" below). Other assistants have their own mechanisms for persistent instructions.

4. Use It Daily (And Let It Evolve)

The magic happens when you actually use it. Don't just write it and forget.

Every time you work in that domain, let the AI assistant apply the patterns. When something breaks or you learn something new, your AI assistant updates the skill. You review and approve the changes.

You're not maintaining docs manually—you're curating knowledge with AI assistance.


What I'm Building Next

I'm thinking about skills for more of the domains I work in, each one capturing expertise I've built up but keep forgetting when I context-switch.


The Meta-Point

This blog post is about capturing domain expertise in executable forms.

But the real insight is simpler: Knowledge that gets used stays accurate. Knowledge that sits unused rots.

The form doesn't matter as much as the frequency of application.

Skills, checks, verifications that run in the flow of work: all of these keep knowledge fresh because they can't be ignored.

Traditional documentation can be ignored. That's why it rots.


Try It Yourself

I open-sourced my skills at: github.com/sparrowfm/claude-skills

Start with creative-tools-ux-expert if you're building creative software, or use it as a template for your own domain.

The setup is dead simple: Create a markdown file in ~/.claude/skills/your-skill-name/SKILL.md with your domain knowledge. Claude Code loads it automatically.
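Concretely, creating a skill is a directory and one file. A minimal sketch (the name and description frontmatter tell Claude Code what the skill is; the body is whatever knowledge you're capturing):

mkdir -p ~/.claude/skills/deployment-verifier
cat > ~/.claude/skills/deployment-verifier/SKILL.md << 'EOF'
---
name: deployment-verifier
description: Verify AWS deployments before claiming success
---

CRITICAL RULE: NEVER claim deployment success without verification.

For Lambda deployments, check:
- Function state is Active (not Pending/Failed)
- LastUpdateStatus is Successful
EOF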

Not using Claude Code? The pattern still applies: capture the same patterns wherever your assistant reads persistent instructions (Cursor rules, GitHub Copilot custom instructions, or a reusable system prompt).

The key is: make knowledge executable, not just readable.


What domain expertise are you constantly re-looking-up? I'd love to hear what knowledge you'd capture if you could make it executable. Connect on LinkedIn or open an issue on GitHub.