Writing Your First Custom Claude Code Skill

You've been using Claude Code for a while now. You've run the built-in commands, leveraged existing skills, maybe even explored the .claude/skills directory. But here's the thing—the real power unlocks when you write your own skills.
A custom skill isn't some complex magic. It's a structured set of instructions that tells Claude how to handle a specific type of task consistently and effectively. Think of it as encoding your team's expertise, your project's conventions, or a repeatable workflow into something reusable.
In this article, we're building a real skill from scratch: a "code review" skill that Claude can call whenever you need thoughtful, structured feedback on code changes. You'll see what goes into planning it, what the SKILL.md file looks like, how to test it, and how to iterate when something doesn't quite work.
Let's build it.
Table of Contents
- Understanding What a Skill Actually Is
- Planning Your Skill: What Should It Do?
- The SKILL.md Structure: Frontmatter and Sections
- Testing Your Skill in a Session
- Iterating: When Feedback Tells You Something's Wrong
- Before/After: Quality Comparison
- Common Skill Patterns You'll Recognize
- Advanced: Making Skills Work Together
- Building Skills in Teams
- Troubleshooting: When a Skill Doesn't Work
- Putting It All Together
- Next Steps
- Why Custom Skills Matter (And When to Use Them)
- The Multiplier Effect
Understanding What a Skill Actually Is
Before we start writing, let's be clear about what we're building.
A Claude Code skill is a collection of instructions and procedures, documented in a SKILL.md file, that tells Claude how to approach a category of task. It's not code you run. It's not a Python function or a shell script. It's structured guidance.
When you invoke a skill—either directly via /dispatch [skill-name] or implicitly when Claude detects it's relevant—the entire SKILL.md file gets added to Claude's context. Claude reads those instructions and follows them.
That's it. Simple. Powerful.
The best skills share a few traits:
- Specific scope: They solve one category of problem, not everything.
- Clear phases: They break work into sequential steps.
- Decision points: They specify when to ask questions vs. when to proceed autonomously.
- Tools and integrations: They call out other skills, built-in tools, or external resources.
- Evidence requirements: They specify what output counts as "done."
Our code review skill will have all of these. The brilliance of skills is that they're not high-tech. They're just documentation. But that documentation, when well-written, transforms how Claude approaches problems. Instead of Claude trying to figure out what you want through trial and error, the skill makes your expectations explicit upfront.
Consider the alternative: you tell Claude "review this code" without a skill. Claude does a quick pass, maybe misses something important, maybe gives vague feedback. You iterate: "Can you be more specific?" "Check for security issues too." Five exchanges later, you finally get the structured feedback you wanted. Now imagine you'd given Claude the skill upfront. Same feedback, delivered completely in the first exchange because the skill made your expectations clear from the start.
That's the multiplier effect of skills. They compress multi-turn conversations into single turns. They make expertise repeatable. They make your standards scalable.
Planning Your Skill: What Should It Do?
This is the most important step. You don't write the SKILL.md first. You think first, then write.
Let's say you want a skill that takes a code change and gives you structured, actionable feedback. Here's what you need to decide:
1. What problem does this solve?
- Currently: You review code manually, sometimes missing edge cases, inconsistent about what you check
- With the skill: Claude reviews code the same way every time, catches known patterns, suggests improvements
2. Who uses it?
- Solo developers doing self-reviews
- Small teams where one person leads code review
- Anyone who wants a consistent, structured review process
3. What's the input?
- A code change (file path, diff, or pasted code)
- Optionally: context about what this code does, what project it's in
- Optionally: specific focus areas (performance, security, readability)
4. What's the output?
- A structured review with sections: correctness, efficiency, maintainability, consistency, security
- Specific, actionable feedback (not vague praise)
- Recommendations, not demands
- A confidence level on each point
5. What are the decision points?
- Does Claude need to ask what the code does? (Yes—more context = better review)
- Does it need to ask about project standards? (Yes—reviews should align with your codebase)
- Does it suggest fixes, or just point out issues? (Both—suggest fixes where obvious, flag for human decision where subtle)
Once you've got answers to these, you're ready to write. This planning phase is where you prevent wasted effort later. A skill with unclear scope will confuse Claude and frustrate users. Vagueness in your planning becomes confusion in your skill document. Clarity in planning becomes clarity in execution.
Think about your team's pain points. What tasks are repeated constantly? What expertise gets forgotten when key people go on vacation? What standards don't get enforced consistently? Those are your skill candidates. Code review probably appears on that list because it's done frequently, it's hard to do consistently, and it's where expertise really matters.
The SKILL.md Structure: Frontmatter and Sections
Every skill starts with YAML frontmatter. Here's what we're putting at the top of our code review skill:
---
name: code-review
description: Provide structured, actionable code review feedback across correctness, efficiency, maintainability, consistency, and security—designed for solo developers and small teams
---

That's it for frontmatter. Keep it short. The name should be hyphenated, lowercase, one or two words. The description should fit in one sentence and explain what the skill does and who it's for.
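To catch frontmatter mistakes early, a small checker can enforce these conventions before you ever test the skill. This is a hypothetical helper script written for illustration, not part of Claude Code itself:

```python
# check_skill.py: a hypothetical sanity check for SKILL.md frontmatter.
import re

def parse_frontmatter(text):
    """Extract key: value pairs between the leading --- fences."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        raise ValueError("SKILL.md must start with a --- frontmatter block")
    fields = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

def validate(fields):
    """Flag violations of the naming and description conventions above."""
    problems = []
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", fields.get("name", "")):
        problems.append("name should be lowercase and hyphenated")
    if not fields.get("description"):
        problems.append("description is required")
    return problems
```

Running `validate(parse_frontmatter(open("SKILL.md").read()))` on a well-formed skill returns an empty list.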
After that, your skill document is structured Markdown. No special format—just clear sections that guide Claude through the process.
Here's the full SKILL.md for our code review skill:
---
name: code-review
description: Provide structured, actionable code review feedback across correctness, efficiency, maintainability, consistency, and security—designed for solo developers and small teams
---
# Code Review: Structured Feedback Framework
## Overview
Provide thoughtful, structured code review feedback that catches common issues, suggests improvements, and explains trade-offs. This skill is for self-review, team review, or when you want consistent feedback criteria.
**Announce at start:** "I'm using the code-review skill to provide structured feedback on your changes."
## Quick Reference
| Phase | Key Activities | Output |
| -------------------- | ----------------------------------------------------------------------- | ----------------- |
| **1. Understanding** | Ask what the code does, what project it's in, any specific concerns | Context document |
| **2. Analysis** | Review code against five dimensions | Draft findings |
| **3. Feedback** | Present findings by dimension with specific, actionable recommendations | Structured review |
| **4. Summary** | List high-priority changes and why they matter | Action checklist |
## The Process
Copy this checklist to track progress:

Code Review Progress:
- Phase 1: Understanding (context gathered and confirmed)
- Phase 2: Analysis (code reviewed across five dimensions)
- Phase 3: Feedback (findings presented with recommendations)
- Phase 4: Summary (action items prioritized)
### Phase 1: Understanding (5 minutes)
Ask three questions, one at a time:
1. **"What does this code do?"** — Get the author's explanation of intent
2. **"What project is this for?"** — Understand domain and existing patterns
3. **"Any specific concerns?"** — Surface areas to focus on
Record the answers clearly. This context shapes everything else.
**Example:**
Author: "It's a payment processing webhook handler for Stripe."
Project: "SaaS billing platform, 3-year-old Ruby on Rails app"
Concerns: "Make sure it handles failed webhooks gracefully"
### Phase 2: Analysis (10-15 minutes)
Review the code against five dimensions. For each, note specific findings:
**1. Correctness**: Does it do what it claims?
- Does it handle the happy path?
- What about error cases? (network failures, timeout, malformed data)
- Are assumptions documented?
**2. Efficiency**: Is it reasonably fast and resource-efficient?
- Unnecessary loops or redundant calls?
- Database queries: N+1 problems? Unnecessary fetches?
- String operations that could be simpler?
**3. Maintainability**: Will someone understand this in 6 months?
- Variable names clear?
- Logic is obvious, not clever?
- Comments on non-obvious decisions?
**4. Consistency**: Does it match your codebase's style?
- Naming conventions (snake_case, camelCase)?
- Error handling pattern (throw, return, log)?
- Test coverage expectations?
**5. Security**: Any vulnerabilities or risky patterns?
- Input validation present?
- SQL injection risks (if relevant)?
- Secrets hardcoded?
- Dependency vulnerabilities?
Write findings as you go. Be specific: point to line numbers, quote the code, explain why it matters.
### Phase 3: Feedback (5 minutes)
Present findings by dimension. For each:
1. **State the finding clearly** — Not vague, specific to their code
2. **Explain the impact** — Why it matters (performance, maintenance, security)
3. **Suggest a fix** — If obvious, show the better way; if subtle, ask clarifying questions
Format each piece of feedback like this:
Dimension: Correctness
- Finding: Line 12 assumes webhook signature is always present, but doesn't verify it before parsing
- Impact: Could crash with malformed webhook; Stripe always includes this, but defensive programming catches silent bugs
- Suggestion: Add a guard clause at line 11:
raise "Missing webhook signature" if signature.nil?
**Example Full Review Section:**
**Feedback**

**Correctness ✓ (Solid)**
- Happy path is correct: webhook received → verify signature → process event → store result
- Error handling: good use of begin/rescue, logs failures
- One gap: signature verification doesn't validate webhook source. Consider adding an IP allowlist or webhook secret rotation tracking

**Efficiency ✓ (Good)**
- No obvious N+1 queries
- One minor optimization: line 34 calls `User.find(user_id)` twice; refactor to a single query

**Maintainability ⚠ (Improvable)**
- Variable names are clear
- Logic at lines 18-31 is nested three levels deep; extract to a separate function for readability
- Add a comment explaining the event type filtering on line 25—why ignore refund.created but process charge.succeeded?

**Consistency ✓ (Matches codebase)**
- Error handling matches the payment_processor.rb pattern
- One style note: your codebase uses logger.info(), not puts()—update line 40

**Security ✓ (Safe)**
- Signature verification is correct
- Webhook replay protection via event_id deduplication: good
- Consider adding rate limiting if this endpoint receives high volume
### Phase 4: Summary (2 minutes)
List high-priority items. Be honest about severity:
Action Items (Priority Order)
- Must fix: Signature verification doesn't validate source (correctness risk)
- Should fix: Extract lines 18-31 to separate function (maintainability)
- Nice to have: Add rate limiting for high-volume production scenarios (scalability)
- Consider: Comment on event filtering logic (knowledge sharing)
## When to Stop
A code review is done when:
- Author understands all feedback
- Suggestions are actionable (not vague)
- Priorities are clear (what must change vs. what's optional)
- Author can decide: accept, refactor, or override
Don't overcomplicate. Your goal is helpful feedback, not perfection.
## Key Principles
| Principle | Application |
|-----------|-------------|
| **Ask first** | Understand intent before critiquing |
| **Be specific** | Point to line numbers, quote code |
| **Explain why** | Every feedback point includes impact |
| **Suggest solutions** | Don't just flag problems |
| **Prioritize** | Mark what must change vs. what's optional |
| **Respect expertise** | Author might have good reasons; ask before assuming |
That's the full SKILL.md. Notice the structure:
- Overview: One sentence on what it does, who it's for, announcement phrase
- Quick Reference: Table showing phases and outputs
- The Process: Step-by-step with examples and output for each phase
- When to Stop: Clear definition of done
- Key Principles: Table of values and how to apply them
This structure repeats across good skills. It's predictable and easy to follow. Once you understand this pattern, you can adapt it to any workflow. The pattern itself is a skill template that you'll reuse. Different domains (security audits, architecture reviews, performance analysis) follow the same structure—Overview, Quick Reference, Process phases, When to Stop, Key Principles. Only the content changes.
Testing Your Skill in a Session
You've written the SKILL.md. Now you need to test it. Here's how:
1. Save it to the right location
.claude/skills/code-review/SKILL.md
2. Create a test file
Put some sample code in a file you can review. Here's a good test case—intentionally imperfect code:
```python
# payment_handler.py
import json

def process_webhook(request):
    data = json.loads(request.body)
    user = User.objects.get(id=data['user_id'])
    for event in data['events']:
        if event['type'] == 'charge.succeeded':
            user.balance += event['amount']
            user.save()
        if event['type'] == 'charge.failed':
            print("Payment failed for", user.email)
    return "OK"
```

This code has issues we want the skill to catch:
- No error handling
- Prints instead of logging
- Assumes event structure without validation
- No signature verification
- Updates database in a loop (inefficient and risky)
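For contrast, here is a sketch of what a fixed version might look like. It is a hedged illustration using only the standard library: the HMAC scheme and the dict-based user store are stand-ins for Stripe's and Django's real APIs, not drop-in replacements.

```python
# payment_handler_fixed.py: one way to address the flagged issues.
# The HMAC scheme and dict-based user store are illustrative stand-ins.
import hashlib
import hmac
import json
import logging

logger = logging.getLogger(__name__)
WEBHOOK_SECRET = b"test-secret"  # in practice, load from the environment

def verify_signature(body, signature):
    """Reject webhooks whose HMAC doesn't match (the missing check)."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def process_webhook(body, signature, users):
    if not verify_signature(body, signature):
        return "FORBIDDEN"  # guard clause instead of trusting the payload
    try:
        data = json.loads(body)  # malformed bodies no longer crash the handler
    except json.JSONDecodeError:
        logger.warning("Malformed webhook body")
        return "BAD REQUEST"
    user = users[data["user_id"]]
    delta = 0
    for event in data.get("events", []):
        if event.get("type") == "charge.succeeded":
            delta += event["amount"]  # accumulate instead of saving per event
        elif event.get("type") == "charge.failed":
            logger.info("Payment failed for %s", user["email"])  # log, don't print
    user["balance"] += delta  # single write instead of a save inside the loop
    return "OK"
```

A good run of the skill should surface exactly the gaps between the two versions.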
3. Invoke the skill
In Claude Code:
/dispatch code-review /path/to/payment_handler.py
Claude will read your SKILL.md, announce that it's using the code-review skill, then follow the four phases.
4. Evaluate the feedback
Does it:
- Ask clarifying questions? (Phase 1)
- Identify the issues you expected? (Phase 2)
- Give actionable suggestions? (Phase 3)
- Prioritize clearly? (Phase 4)
If yes on all counts, your skill works. If not, iterate. Notice which parts worked well and which felt forced. That feedback guides your next revision. The goal isn't perfection on the first try. The goal is capturing your intent clearly enough that Claude follows it reliably.
Iterating: When Feedback Tells You Something's Wrong
Let's say you tested the skill and Claude's feedback was vague: "Consider improving error handling." That's not actionable. You need to revise the SKILL.md.
Common problems and fixes:
**Problem:** Feedback is too vague (e.g., "Consider optimizing performance")
**Fix:** In Phase 2: Analysis, add specific checks. Instead of "Is it fast?", ask "Are there database queries in loops? Redundant calls? String operations that could use built-in methods?" Make your checklist concrete and measurable.

**Problem:** Claude asks too many clarifying questions in Phase 1
**Fix:** Reduce from three questions to two, or make them yes/no. You want context, not a 30-minute interview. Refine the Phase 1 questions to surface exactly what you need and nothing more.

**Problem:** The five dimensions don't fit your domain (e.g., you only care about security and correctness for crypto code)
**Fix:** Adjust. Rename dimensions or swap them. For security-critical code, add "Threat modeling" and "Input validation". Your skill should match your needs, not a generic template.

**Problem:** Feedback comes back in the wrong order (not structured by dimension)
**Fix:** Add this explicit instruction in Phase 3: "Present findings in this order: Correctness, Efficiency, Maintainability, Consistency, Security." Explicit ordering forces Claude to follow your intended structure.

**Problem:** Claude stops partway through (e.g., skips the Summary phase)
**Fix:** Each phase should have an explicit transition statement. "I've completed Phase 2: Analysis. Now moving to Phase 3: providing structured feedback." These transitions make the phases unmissable.
Here's the key: every time you test and find a gap, update the SKILL.md. Add examples, remove ambiguity, clarify decision points. After 3-5 iterations, you'll have a skill that's reliable and predictable. The iterations aren't wasted effort—they're refinement. You're encoding clarity into instructions.
Think of it like writing documentation. First draft is rough. You see what's unclear. You revise. Each revision makes it tighter. Same process here. Your skill gets better with each iteration because you're learning what ambiguities trip up Claude and removing them. Every time you tighten the instructions, you're reducing decision points for Claude. Fewer decision points means more predictable behavior.
Before/After: Quality Comparison
Here's what we're comparing: a code review without the skill vs. with the skill.
Without the skill (Claude does a quick review):
This code looks okay. It handles the webhook and processes events.
One thing—you should add error handling. Maybe log the failures too.
Also consider what happens if the webhook comes twice; I think you have
deduplication but I'm not sure. Overall solid work, ship it.
Vague. Incomplete. Not actionable. The author doesn't know what to prioritize or whether to actually ship.
With the skill (Claude follows the SKILL.md structure):
I'm using the code-review skill to provide structured feedback on your changes.
## Phase 1: Understanding
What does this code do?
> Process Stripe webhooks, verify signature, update user balances
What project is this for?
> Payment processing service, 2 years old, Python/Django
Any specific concerns?
> Make sure we don't double-process a webhook
## Phase 2: Analysis
[Detailed findings on each dimension]
## Phase 3: Feedback
[Structured feedback with line numbers, impact, suggestions]
## Phase 4: Summary
High-priority items:
1. Add webhook signature validation...
2. Extract event loop logic...
[etc]
Specific. Thorough. Actionable. The author knows exactly what to do and why. This is the power of a well-designed skill. It's not magic. It's clarity at scale.
Common Skill Patterns You'll Recognize
Once you've written one skill, you'll notice patterns in how they're structured. Recognizing these patterns makes writing your next skill faster and helps you understand which pattern fits your problem.
The Questionnaire Pattern: Gather context before acting
- Used by: code-review, design-consultation, requirements-gathering
- Structure: Phase 1 asks targeted questions to fill knowledge gaps
- Key: Each question surfaces one specific piece of context
The Analysis Pattern: Evaluate something against fixed dimensions
- Used by: code-review, security-audit, performance-analysis
- Structure: Walk through predefined categories (correctness, efficiency, etc.)
- Key: Dimensions are consistent; findings accumulate
The Transformation Pattern: Convert input from one form to another
- Used by: documentation-generation, test-writing, refactoring
- Structure: Understand input → transform → verify output quality
- Key: Clear before/after examples show what "done" looks like
The Validation Pattern: Check something against criteria
- Used by: style-guide-validation, continuity-checking, security-scanning
- Structure: Load criteria → check against each → report violations
- Key: Objective pass/fail on each criterion
The Guided Decision Pattern: Help someone choose between options
- Used by: architecture-selection, tool-comparison, priority-setting
- Structure: Present options → pros/cons for each → recommend one → get feedback
- Key: You still recommend, but the human decides
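As an illustration, the Validation Pattern can be sketched in a few lines of Python. The criteria below are made-up examples for illustration, not drawn from any real skill:

```python
# validation_pattern.py: load criteria, check each one, report violations.
# The criteria are made-up examples for illustration only.
CRITERIA = {
    "has_frontmatter": lambda text: text.startswith("---"),
    "announces_itself": lambda text: "Announce at start" in text,
    "defines_done": lambda text: "When to Stop" in text,
}

def validate_skill(text):
    """Return the names of criteria the skill document violates."""
    return [name for name, check in CRITERIA.items() if not check(text)]
```

The key property of the pattern is visible in the shape of the code: criteria are data, and each one yields an objective pass/fail.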
Our code-review skill uses the Analysis Pattern combined with the Questionnaire Pattern. Your next skill might use a different combination. That's fine. The patterns are flexible templates, not rigid formulas. Mix and match based on your actual workflow. The important thing is recognizing that skills follow patterns. Once you've written a few, you'll see them naturally and be able to design new skills quickly.
Advanced: Making Skills Work Together
One skill is good. Multiple skills working together? That's where Claude Code becomes truly powerful.
Let's say you have three skills:
- `code-review` (gives feedback on code)
- `test-writing` (generates test cases)
- `documentation-generator` (writes docs)
You could create a fourth skill, code-complete, that orchestrates all three:
---
name: code-complete
description: Review code, write tests, and generate documentation in one pass
---
# Code Complete: Full-Cycle Code Quality
## Phase 1: Code Review
I'm using the code-review skill to provide feedback.
[Invoke code-review skill]
## Phase 2: Test Generation
Based on the review feedback and the original code, I'm using the test-writing skill to generate comprehensive tests.
[Invoke test-writing skill]
## Phase 3: Documentation
Using the documentation-generator skill to create docs for this code.
[Invoke documentation-generator skill]
## Phase 4: Summary
Consolidate feedback from all three phases.

You invoke one skill (code-complete), and it internally calls three others. Each passes results to the next. The output is a fully reviewed, tested, documented piece of code. This is orchestration—one skill calling others to amplify its power. This is where custom skills become force multipliers. You're not just encoding one workflow—you're composing a pipeline of specialized workflows.
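The orchestration idea can be modeled as plain function composition, where each phase's output feeds the next. The functions below are illustrative stand-ins for skill invocations, not real Claude Code APIs:

```python
# A toy model of the code-complete pipeline: each "skill" is a function,
# and the orchestrator threads results between them. All names here are
# illustrative stand-ins, not real Claude Code APIs.
def code_review(code):
    """Phase 1: return findings about the code."""
    return {"findings": [] if "try" in code else ["add error handling"]}

def write_tests(findings):
    """Phase 2: review findings inform which tests to generate."""
    return [f"test_{i}" for i, _ in enumerate(findings, 1)]

def generate_docs(code):
    """Phase 3: docs are generated from the original code."""
    return f"docs for: {code.splitlines()[0]}"

def code_complete(code):
    """Phase 4: consolidate output from all three phases."""
    findings = code_review(code)["findings"]
    return {
        "findings": findings,
        "tests": write_tests(findings),
        "docs": generate_docs(code),
    }
```

The design point is the data flow: review findings feed test generation, exactly as the orchestrating skill's phases do.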
Building Skills in Teams
If you're working in a team, skills become even more valuable because they encode shared knowledge.
Scenario: Your team has been doing code reviews for three years. You have tribal knowledge about what to check, what standards to enforce, common pitfalls in your codebase. That knowledge lives in people's heads. When a new person joins, they either read scattered wikis or they learn by watching senior engineers.
Write that into a skill. Now it's:
- Documented: The skill document is the source of truth for your review process
- Consistent: Everyone uses the same framework
- Teachable: New team members learn by using the skill
- Improvable: Update the SKILL.md when you discover new patterns or standards
- Version controlled: Track changes to your review process in git
Here's what a team skill workflow looks like:
- Team lead writes an initial skill based on existing practices
- Team uses it for a week; feedback emerges
- Team lead updates the skill based on feedback
- Team validates the new version with a test case
- Commit to git: SKILL.md becomes part of your codebase
This creates a feedback loop. Your skills improve over time. They encode your team's growing expertise. When you're on-call and need to review code fast, you follow the skill. When someone new joins, they follow the skill. The skill becomes institutional memory. It's more reliable than relying on the senior engineer's memory, and it scales to new team members automatically.
Troubleshooting: When a Skill Doesn't Work
You've tested your skill and something's off. Here's how to diagnose and fix it.
Problem: Claude asks too many follow-up questions
- Root cause: Your phases are unclear, or Phase 1 didn't gather enough context
- Fix: In Phase 1, ask more specific questions. Instead of "Any concerns?", ask "Are you worried about performance, correctness, or maintainability?" Narrow the scope.
Problem: Feedback is generic (could apply to any code)
- Root cause: Phase 2 isn't giving Claude enough structure for analysis
- Fix: Add checklists or specific checks for each dimension. Example: "Check for N+1 queries, hardcoded values, missing error handling". Be concrete.
Problem: Recommendations are vague
- Root cause: Phase 3 isn't forcing specificity
- Fix: Require line numbers, code quotes, and concrete suggestions. Add this explicitly: "For each finding, quote the relevant code line and suggest a specific fix."
Problem: The skill isn't being invoked (you forget to use it)
- Root cause: The skill solves a real problem, but it's not convenient to remember or access
- Fix: Create a shortcut or alias. In your `.claude/commands/` directory, add a command that automatically invokes the skill. Or put a note in your README reminding when to use it. Make it friction-free to invoke.
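For instance, a command file along these lines (the filename and wording are hypothetical) turns the skill into a one-line invocation:

```markdown
<!-- .claude/commands/review.md: hypothetical shortcut -->
Use the code-review skill to review the code at: $ARGUMENTS
```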
Problem: Claude abandons the skill's structure partway through
- Root cause: Your phases aren't clear enough, or the transitions are missing
- Fix: Add explicit transition statements. "I've completed Phase 2. Now moving to Phase 3: presenting feedback." Make the phases unmissable.
The key insight: every problem with a skill is a clarity problem in the SKILL.md itself. Skills fail because instructions are ambiguous, not because Claude is doing something wrong. Fix the clarity, and the skill works. Every bit of confusion in your instructions becomes a decision point for Claude, and decision points are where it diverges from your intentions. When you debug a skill, you're really debugging your documentation.
Putting It All Together
Here's your process for writing any custom skill:
1. Plan (10 minutes): What problem does it solve? Who uses it? What's input/output? What are decision points?
2. Write SKILL.md (30 minutes): Frontmatter, overview, quick reference, phases with examples, decision points, key principles.
3. Test (15 minutes): Create sample input, invoke the skill, evaluate output against your expectations.
4. Iterate (15-30 minutes): Identify gaps, update SKILL.md, test again. Repeat until feedback is specific and actionable.
5. Document (5 minutes): Add notes about when to use this skill and any limitations.
6. Share (5 minutes): Commit to git, tell your team, let them use it.
You're not writing code. You're encoding expertise into structured instructions. The simpler and clearer your instructions, the better Claude will follow them. Think of the SKILL.md as a conversation with Claude where you're being exceptionally clear about what you want. Skills are your way of saying "here's how we handle this" without needing to explain it every single time.
Over time, your collection of custom skills becomes a multiplier. Each one solves a category of problem reliably and consistently. Your team gets faster. Your standards get enforced. Your knowledge gets preserved in executable form (well, instruction form). Skills are institutional memory that scales with your team.
Next Steps
You've built a code-review skill. What's next?
Immediate actions:
- Create another skill for something you repeat often: documentation generation, test writing, debugging, refactoring patterns in your codebase
- Combine skills: A document-generation skill might call the code-review skill internally to review code samples
- Share with your team: Put your skill in version control. Let teammates use it. Update it based on their feedback
- Monitor and refine: Every time someone uses your skill, note what worked and what didn't. Update the SKILL.md
Skill Ideas for Different Roles:
If you're a full-stack developer, consider:
- api-design-review (validate REST/GraphQL schemas)
- database-migration-review (check schema changes)
- infrastructure-as-code-review (Terraform, CloudFormation)
If you're a security-focused engineer, consider:
- threat-modeling-framework (structured threat analysis)
- vulnerability-assessment (common CVE patterns)
- secrets-audit (find hardcoded credentials)
If you're a data engineer, consider:
- data-pipeline-review (check lineage, error handling)
- sql-optimization-framework (query performance)
- schema-evolution-planning (handle breaking changes)
If you're a team lead, consider:
- code-interview-framework (structured interview questions)
- architectural-decision-logging (document trade-offs)
- onboarding-roadmap-builder (personalized learning paths)
Why Custom Skills Matter (And When to Use Them)
You might be thinking: "Can't Claude just do this without a skill?" Yes. But there's a difference between "Claude can do this" and "Claude does this consistently, every time, the way we want."
A skill is your way of saying: "Here's how we handle this category of problem. Follow these steps. Use these criteria. Deliver results in this format."
Without a skill, you get:
- Inconsistent approaches (different review each time)
- Incomplete results (might miss something)
- Unclear expectations (the author wonders what you're actually checking)
With a skill, you get:
- Predictable output (same structure every time)
- Complete coverage (all dimensions checked)
- Clear standards (documented in the skill itself)
Use a skill when:
- You repeat this task at least weekly
- Consistency matters (your team should get the same treatment)
- You've developed a repeatable process (you know the steps)
- You want to train Claude on your specific context (your standards, not generic best practices)
Don't create a skill when:
- It's a one-off task (use regular Claude Code instead)
- The process changes every time (too unstable to encode)
- It's so simple that instructions add overhead (like "write hello world")
Most teams find 5-10 custom skills that solve their core workflows. More than that and you're maintaining too many. Fewer than that and you're leaving productivity on the table.
The Multiplier Effect
Here's what happens when your team builds and uses custom skills consistently:
Month 1: You write your first skill (code-review). It saves you 30 minutes a week.
Month 2: Your team notices and starts using it. You write a second skill (test-writing). Now you're saving 1 hour a week as a team.
Month 3: You've written five skills. Your whole workflow is encoded. A new teammate joins. Instead of a 2-week onboarding, they can start using your documented skills on day one.
Month 6: Your skills have been refined based on real usage. They capture three years of team knowledge. They're in version control. They're reviewed in PRs. They're part of your definition of "how we work."
One year later: You're not thinking about whether to use a skill—it's automatic. You invoke the right skill for the right task. Your consistency has improved. Your team's standards are enforced by instruction. New knowledge gets added to skills, not scattered across Slack or documentation.
The skills you write today become force multipliers tomorrow. They encode your knowledge. They make Claude more useful for your specific context, not just generic programming tasks. Skills are how teams scale their expertise without hiring more people. This is the power play. This is how 10 people work like 15.
Start small. Write one skill. Test it. Make it better. Then write the next one. Build momentum. After six months, you'll look back at your collection of skills and realize they've transformed how your team works.
That's how you build a custom Claude Code toolkit that actually fits the way you work.