February 19, 2026
Claude Development

The claude-code-action: Configuration and Usage

Want to automate code review, issue triage, and pull request analysis using Claude without managing servers or spinning up custom integrations? The claude-code-action is Anthropic's bridge between GitHub and Claude Code's capabilities—and if you're not using it yet, you're manually doing work that can be delegated to AI.

In this guide, we'll walk through everything: how to install and reference the action in your workflows, what every configuration option does, how to handle inputs and outputs, how to customize triggers, and strategies for version pinning. By the end, you'll have a playbook for automating key parts of your CI/CD pipeline and integrating Claude's intelligence into your development process.

Table of Contents
  1. What Is claude-code-action?
  2. What Can You Do With It?
  3. Installation and Setup
  4. Step 1: Understand Your Authentication
  5. Step 2: Add Your API Key to GitHub Secrets
  6. Step 3: Reference the Action in Your Workflow
  7. Complete Configuration Reference
  8. Core Inputs
  9. Event and Context Inputs
  10. Output and Result Handling
  11. Advanced Configuration
  12. Triggering Events: Customizing When the Action Runs
  13. Pull Request Triggers
  14. Push Triggers
  15. Issue Triggers
  16. Scheduled Triggers
  17. Manual Triggers
  18. Input and Output Parameters in Detail
  19. Understanding Inputs
  20. Understanding Outputs
  21. Practical: Using Outputs for Conditional Logic
  22. Version Pinning and Update Strategies
  23. Pinning Strategies
  24. Recommended Strategy
  25. Handling Breaking Changes
  26. Real-World Workflow Examples
  27. Example 1: Security-Focused PR Review
  28. Example 2: Multi-Stage Review Pipeline
  29. Example 3: Scheduled Deep Codebase Analysis
  30. Common Pitfalls and How to Avoid Them
  31. Pitfall 1: Overly Large Contexts
  32. Pitfall 2: Unclear Prompts
  33. Pitfall 3: Not Handling Rate Limits
  34. Pitfall 4: Forgetting Output Configuration
  35. Pitfall 5: Using the Wrong Model
  36. Cost Optimization Tips
  37. 1. Use Haiku for High-Volume Tasks
  38. 2. Cache Results
  39. 3. Reduce Token Usage
  40. 4. Run Less Frequently
  41. 5. Conditional Execution
  42. Troubleshooting
  43. Action Fails with "Invalid API Key"
  44. Action Times Out
  45. No Comments or Results Posted
  46. "Workflow does not have permission to write to repository"
  47. Large PRs Are Truncated
  48. Summary and Best Practices

What Is claude-code-action?

The claude-code-action is Anthropic's official GitHub Action that integrates Claude's code understanding directly into your GitHub workflows. It runs as a job step, receives GitHub event context (PRs, issues, pushes, etc.), sends that context to Claude Code, and processes the results—all within your workflow environment.

Think of it as a powerful bridge: on one side, your GitHub events and repository state; on the other, Claude's reasoning and code analysis. The action handles authentication, context marshaling, and error handling so you don't have to. It's the missing link between GitHub's native CI/CD capabilities and Claude's semantic understanding of code.

What Can You Do With It?

Real-world use cases include:

  • Automated PR Review: Analyze code changes for bugs, security issues, style violations, and architectural concerns. Every PR gets intelligent feedback without waiting for a human reviewer.
  • Issue Triage: Classify, label, and prioritize issues based on content and context. Automatically tag bugs vs. feature requests vs. questions.
  • Code Quality Gates: Block merges when PR analysis flags critical issues. Combine with branch protection rules for automated enforcement.
  • Documentation Generation: Auto-generate changelogs, API docs, or release notes from code changes. Keep documentation in sync with code automatically.
  • Commit Message Validation: Enforce standards and suggest improvements. Catch poorly formatted or vague commit messages before they hit the history.
  • Security Scanning: Detect patterns that look like hardcoded secrets, vulnerable dependencies, or compliance violations. Go beyond regex-based secret scanning.
  • Performance Analysis: Identify potential performance issues in code changes before they reach production. Flag N+1 queries, inefficient algorithms, or memory leaks.
  • Test Generation: Auto-generate test cases for changed functions, catching edge cases that manual test writing often misses.

The action doesn't replace your linters and unit tests—it complements them with semantic understanding that traditional tools miss. Where a linter catches formatting issues, Claude catches logic errors. Where a static analyzer flags syntax, Claude flags architecture.

Installation and Setup

Step 1: Understand Your Authentication

Before you write a single workflow, you need an Anthropic API key. The claude-code-action requires:

  1. An Anthropic API key (from your Anthropic console at console.anthropic.com)
  2. GitHub repository secrets to store the key securely
  3. Workflow permissions to read repository context and write comments/checks

Without these, the action can't authenticate to Anthropic's API or comment back on your PRs. It's a three-way handshake: GitHub ↔ Action ↔ Anthropic API.

Step 2: Add Your API Key to GitHub Secrets

Navigate to your repository settings → Secrets and variables → Actions, then click "New repository secret."

Name: ANTHROPIC_API_KEY
Value: sk-ant-... (your actual key from console.anthropic.com)

If you're managing an organization with multiple repositories, consider using organization secrets instead of repository secrets. They're centralized and reduce duplication—you set it once and all repos access it. This is especially important for managing API quotas and ensuring consistency.

Security note: Never hardcode API keys in your workflow files. Always use GitHub secrets. The action will fail safely if it detects a raw key, but it's better not to risk exposing it in the first place. If you accidentally commit a key, GitHub's secret scanning will notify you, but prevention is better than remediation.

Step 3: Reference the Action in Your Workflow

Create a .github/workflows/ directory in your repository (if it doesn't exist), then add a new YAML file. Here's the simplest possible workflow:

yaml
name: Claude Code Review
on:
  pull_request:
    types: [opened, synchronize, reopened]
 
jobs:
  claude-review:
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}

What's happening here?

  • uses: anthropics/claude-code-action@v1 pins the action to version 1 (major version pinning, more on that later)
  • with: provides input configuration
  • api_key: ${{ secrets.ANTHROPIC_API_KEY }} securely passes your secret into the action

This alone does little yet—you need to tell the action what to do. That's where the configuration options come in. The bare minimum is only scaffolding, but it's a good starting point for understanding the structure.

Complete Configuration Reference

The claude-code-action accepts a comprehensive set of inputs. Let's cover each one with defaults, use cases, and gotchas.

Core Inputs

api_key (Required)

Type: String
Default: None
Scope: Passed to Anthropic API; never logged or exposed

Your Anthropic API key. Always inject this via secrets.

yaml
with:
  api_key: ${{ secrets.ANTHROPIC_API_KEY }}

The action validates the key format before making API calls. Invalid keys will fail early with a clear error. Never use inline values—use GitHub's secret system. If you're setting up GitHub Actions for the first time, spend 2 minutes on this. It's non-negotiable.

model (Optional)

Type: String
Default: claude-3-5-sonnet-20241022
Valid values: claude-3-opus-20250219, claude-3-5-sonnet-20241022, claude-3-haiku-20250122

Which Claude model to use for analysis. Different models have different strengths and costs.

yaml
with:
  model: "claude-3-5-sonnet-20241022"

Model selection guide:

  • Opus: For complex architectural review, security analysis, or when you need deep reasoning about code intent. Slower (might take 30+ seconds) and more expensive (roughly 3x Sonnet cost). Use when accuracy matters more than speed.
  • Sonnet (default): The sweet spot—fast enough for most PR reviews (usually 5-15 seconds), smart enough to catch real issues and understand context. Balanced cost and capability.
  • Haiku: For high-volume, lightweight tasks like commit message validation, simple triage, or metadata analysis. Fastest (usually 2-5 seconds), cheapest, but less capable at understanding complex architectural changes.

Pro tip: Start with Sonnet. If your workflows are timing out or costing too much, drop to Haiku. If you're missing bugs or need deeper architectural feedback, upgrade to Opus. Monitor your spend and adjust—don't let costs surprise you.
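One pattern worth sketching: selecting the model dynamically based on PR state. The snippet below is a hypothetical sketch using the standard GitHub Actions ternary expression; it assumes a pull_request trigger so that github.event.pull_request exists.

```yaml
# Sketch: cheap model for draft PRs, default model otherwise
- uses: anthropics/claude-code-action@v1
  with:
    api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    model: ${{ github.event.pull_request.draft && 'claude-3-haiku-20250122' || 'claude-3-5-sonnet-20241022' }}
```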

prompt (Optional)

Type: String
Default: Built-in sensible review prompt
Max length: 8,000 characters

Custom instruction for what the action should analyze or do. This is where you inject your team's priorities.

yaml
with:
  prompt: |
    Review this PR for:
    1. Security vulnerabilities (OWASP Top 10)
    2. Performance regressions
    3. Database query inefficiencies
 
    Ignore style issues and focus only on logic errors.

If you don't provide a prompt, the action uses its default behavior: review code changes, flag bugs, suggest improvements. This works for most cases.

When to customize:

  • You want to enforce company-specific coding standards (e.g., "All database operations must use connection pooling")
  • You need focused analysis (security-only, performance-only, test coverage—whatever matters to your team)
  • You're automating something other than code review (like documentation generation or complexity analysis)
  • You have specific architectural constraints to validate (e.g., "All state management must use Redux in this repo")

Pitfall: A bad prompt is worse than no prompt. Keep it clear, specific, and concise. Rambling instructions confuse the model and produce low-signal output. Aim for 100-300 words. If you're writing a novel in your prompt, you're probably over-specifying.

analysis_type (Optional)

Type: Enum
Default: review
Valid values: review, security, performance, documentation, test_generation, custom

Pre-configured analysis modes that set the prompt and output format automatically. These are shortcut templates for common scenarios.

yaml
with:
  analysis_type: "security"

What each mode does:

  • review: General code review (default). Flags bugs, suggests improvements, checks code style.
  • security: OWASP-focused security scanning. Looks for injection vulnerabilities, hardcoded secrets, unsafe deserialization, authentication bypasses.
  • performance: Performance analysis. Identifies N+1 queries, inefficient algorithms, memory leaks, async bottlenecks.
  • documentation: Documentation gaps. Suggests missing docstrings, outdated comments, unclear code that should be refactored instead.
  • test_generation: Generates test cases for changed functions, often catching edge cases that manual testing misses.
  • custom: Ignores analysis_type and uses your custom prompt instead. Use this for specialized analysis.

Pro tip: Combine analysis_type with a custom prompt for hybrid behavior. For example, set analysis_type: security and then add prompt: "Also check for hardcoded database connection strings specific to our infrastructure." The action merges the base prompt with your additions, so you get security focus plus your custom requirements.
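In workflow YAML, that hybrid looks roughly like this:

```yaml
- uses: anthropics/claude-code-action@v1
  with:
    api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    analysis_type: "security"
    prompt: |
      Also check for hardcoded database connection strings
      specific to our infrastructure.
```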

language (Optional)

Type: String
Default: Auto-detect from file extensions
Valid values: python, javascript, typescript, java, go, rust, csharp, cpp, ruby, php, kotlin, swift, auto

Hint to the model about what language to expect. Usually auto-detection works fine, but explicit is better than implicit for edge cases.

yaml
with:
  language: "typescript"

When to override auto-detect:

  • Your repo mixes languages and the action gets confused about which one to focus on
  • You're reviewing generated code (Terraform, CloudFormation, etc.) that file extensions don't describe well
  • You have polyglot services and want language-specific analysis per service

max_tokens (Optional)

Type: Integer
Default: 2048
Range: 500–4,000

Maximum output length. Longer outputs are more detailed but slower and cost more. This controls how much Claude can say back to you.

yaml
with:
  max_tokens: 3000

Guidance:

  • 500–1,000: Quick, lightweight summaries. Good for high-volume triage where you just need signals.
  • 2,000 (default): Balanced. Catches real issues without bloat. Recommended for most cases.
  • 3,000–4,000: Deep dives. Use when you need exhaustive analysis, like security reviews or architectural changes.

Pitfall: Don't set this too high. The model will ramble and produce less focused output. 2,000 is almost always enough for actionable feedback. Beyond that, you're usually paying for verbosity, not insight.

Event and Context Inputs

event_name (Optional)

Type: String
Default: Auto-detected from ${{ github.event_name }}
Common values: pull_request, push, issues, workflow_dispatch, schedule

Which GitHub event triggered this action. Usually you don't need to set this—GitHub passes it automatically in the context. But if you're using workflow_dispatch (manual trigger), you can explicitly specify which event context to simulate.

yaml
with:
  event_name: "pull_request"

When to override: Rare. Only if you're manually triggering the workflow and want the action to behave as if it were a PR event instead of a workflow dispatch. Most of the time, let GitHub handle this automatically.

include_diff (Optional)

Type: Boolean
Default: true

Include the full unified diff of code changes. Disable this if you're only analyzing commit messages or PR titles (not code).

yaml
with:
  include_diff: true

Why toggle this?

  • true (default): Full code context. Necessary for code review. This is almost always what you want.
  • false: Faster analysis, lower cost. Use when reviewing metadata only (e.g., PR titles, commit messages, issue descriptions without code context).

include_comments (Optional)

Type: Boolean
Default: true

Include existing comments from the PR or issue in the context. Helps the model avoid duplicate feedback. If you've already commented that there's a security issue, the action won't re-report it.

yaml
with:
  include_comments: true

Set to false if you want fresh analysis without being influenced by human comments. This is useful if you suspect the comments are steering the analysis in the wrong direction or if you want to see what the action would say with fresh eyes.

context_size_limit (Optional)

Type: Integer
Default: 10,000 (lines of code)

Maximum code context to send to Claude. Prevents huge PRs from overwhelming the model or consuming excessive tokens.

yaml
with:
  context_size_limit: 5000

Guidance:

  • Large PRs (>5,000 lines): Lower this to 3,000–4,000 to stay focused. The model does better analysis on focused code.
  • Small PRs: Keep default at 10,000. You want full context for thorough review.
  • Monorepo with many changes: Consider running the action per file or subsystem, or use allowed_file_patterns to focus on changed source code (skip tests, docs).

Pitfall: If a PR exceeds the limit, the action truncates silently. You won't know—review will be incomplete. Monitor logs and adjust if needed. For massive PRs (50K+ lines), you might need to split the analysis across multiple workflow jobs, each analyzing a subset.
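One way to sketch that split—assuming hypothetical subsystem paths, adjust to your own layout—is a matrix job where each leg analyzes only its slice of the diff:

```yaml
jobs:
  claude-review:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # hypothetical subsystem globs
        subsystem: ["src/api/**", "src/web/**", "src/workers/**"]
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          allowed_file_patterns: ${{ matrix.subsystem }}
          context_size_limit: 4000
```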

Output and Result Handling

output_format (Optional)

Type: Enum
Default: comments
Valid values: comments, check_run, summary, raw_json, artifact

Where and how to output the analysis results. This controls how developers see the feedback.

yaml
with:
  output_format: "check_run"

Detailed breakdown:

  • comments: Posts inline comments on the PR (one per issue found). Clean, visible, integrated into GitHub's review UI. Developers see feedback where the code is.
  • check_run: Creates a GitHub Check Run (appears in the "Checks" tab, blocks merges if configured). Structured, can be required by branch protection.
  • summary: Posts a single summary comment with all findings. Cleaner for high-volume reviews; less visual noise than inline comments.
  • raw_json: Writes raw action output to a file (for post-processing). Useful if you want to pipe results to other tools.
  • artifact: Uploads results as a GitHub Actions artifact (for archival or external processing). Useful for compliance, audit trails, or historical analysis.

When to use each:

  • comments (default): Best UX. Devs see issues where they are. Recommended for most teams.
  • check_run: For gating (block merges if issues found). Requires branch protection rules to be configured.
  • summary: Cleaner for high-volume reviews; less noise than inline comments. Good for large teams or repos with frequent PRs.
  • raw_json: When you need to post-process results or integrate with external systems (Slack, Jira, etc.).
  • artifact: When you want to archive all reviews for compliance or audit trails (useful for regulated industries).

You can combine outputs. For example:

yaml
with:
  output_format: "comments,check_run"

This posts comments AND creates a check run. Developers get inline feedback, and the check run gates the merge.

post_comments (Optional)

Type: Boolean
Default: true

Actually post comments/results to the PR or issue. Set to false during testing or dry-run mode.

yaml
with:
  post_comments: false # Do the analysis but don't post results

Great for testing your workflow without spamming your team with comments. Run with this false while you're tuning your prompt and configuration.

fail_on_findings (Optional)

Type: Boolean
Default: false

Exit with a non-zero status code if issues are found. Use this to gate merges: the action will "fail" the job, which blocks the PR unless you configure branch protection to allow overrides.

yaml
with:
  fail_on_findings: true

Important nuance: This doesn't prevent merges by itself. You need to:

  1. Set fail_on_findings: true in the action
  2. Configure branch protection rules to require the workflow check to pass
  3. Configure the workflow to block on failure (default behavior)

Then, if the action finds issues, the check fails, and the merge is blocked. This is powerful—your team can't merge code that Claude flags as problematic (depending on your branch protection rules).
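Put together, a minimal gating job might look like the sketch below. The job name (claude-gate here, an arbitrary choice) is what you select as a required status check in branch protection:

```yaml
jobs:
  claude-gate:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      checks: write
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          fail_on_findings: true
```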

fail_severity (Optional)

Type: Enum
Default: error
Valid values: note, warning, error

Only fail if findings are at or above this severity level. Use to ignore low-severity style issues while still blocking on bugs.

yaml
with:
  fail_on_findings: true
  fail_severity: "error" # Only fail on errors, ignore warnings

This gives you fine-grained control over what blocks merges. You might want warnings to be visible but not blocking, errors to be blocking, and notes to be just FYI.

summary_on_workflow_run (Optional)

Type: Boolean
Default: true

Post a summary of all claude-code-action runs to the workflow run summary page (visible on GitHub Actions tab).

yaml
with:
  summary_on_workflow_run: true

Useful for auditing and understanding why workflows passed or failed. Developers can see the summary without opening the PR.

Advanced Configuration

cache_results (Optional)

Type: Boolean
Default: true

Cache analysis results so re-running the workflow doesn't re-analyze identical code. Saves cost and time.

yaml
with:
  cache_results: true

The action hashes the input (code, prompt, model, etc.) and stores results in the GitHub Actions cache. If the hash matches, it returns the cached result. This is smart about deduplication and can save significant API costs if you re-run workflows frequently.

When to disable:

  • You want fresh analysis every time
  • Your analysis is non-deterministic (e.g., requires current time or external state)
  • You're testing prompt changes and want to see new results even if code is identical

timeout_seconds (Optional)

Type: Integer
Default: 300 (5 minutes)

Maximum time to wait for API response. The action will cancel and fail if Claude takes longer.

yaml
with:
  timeout_seconds: 600 # 10 minutes

Guidance:

  • Haiku: 60–120 seconds usually enough. It's fast.
  • Sonnet: 120–300 seconds (default). Balanced.
  • Opus: 300+ seconds (can be slow on complex code). Use longer timeouts.

If workflows time out frequently, increase this and/or use a faster model. Timeouts are frustrating because the action fails without results, and you have to re-run.

retry_on_failure (Optional)

Type: Boolean
Default: true

Automatically retry once if the API call fails (rate limit, temporary outage, etc.).

yaml
with:
  retry_on_failure: true

The action waits briefly (about 2 seconds) before its single retry, then fails if the call still errors. This handles transient failures gracefully.

github_token (Optional)

Type: String
Default: ${{ github.token }}

Token for authenticating GitHub API calls (posting comments, creating check runs, etc.). Usually you don't need to override this—GitHub provides it automatically.

yaml
with:
  github_token: ${{ github.token }}

Override only if you're using a custom token (PAT) for special permissions or cross-repo access. Most teams never need this.
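If you do need cross-repo access, the override is a one-liner—CROSS_REPO_PAT below is a hypothetical secret name holding a personal access token:

```yaml
- uses: anthropics/claude-code-action@v1
  with:
    api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    github_token: ${{ secrets.CROSS_REPO_PAT }}  # hypothetical PAT secret
```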

allowed_file_patterns (Optional)

Type: String (glob pattern)
Default: * (all files)

Only analyze files matching this pattern. Useful to skip auto-generated code, third-party libraries, or vendor files.

yaml
with:
  allowed_file_patterns: "src/**/*.ts,src/**/*.tsx"

This only analyzes TypeScript files in src/. Vendor code, config files, and anything outside src/ are skipped (note that .test.ts files under src/ still match *.ts—use a negation pattern to exclude them). This is useful if you have a monorepo or mixed codebase and want to focus analysis on production code.

Multiple patterns use comma separation. Negation uses !:

yaml
with:
  allowed_file_patterns: "src/**/*,!src/**/*.test.ts"

This analyzes everything in src/ except test files. Useful if you want to review production code but skip test changes.

ignored_file_patterns (Optional)

Type: String (glob pattern)
Default: vendor/**, node_modules/**, dist/**

Explicitly exclude files matching this pattern (opposite of allowed_file_patterns).

yaml
with:
  ignored_file_patterns: "node_modules/**,dist/**,*.min.js"

This is useful if you want to analyze everything except certain patterns. For example, ignore minified files, build artifacts, or vendor code.

Triggering Events: Customizing When the Action Runs

The action doesn't run in a vacuum—it responds to GitHub events. Let's explore how to customize when it triggers.

Pull Request Triggers

yaml
name: Claude Code Review
on:
  pull_request:
    types: [opened, synchronize, reopened]

When it runs:

  • opened: When a PR is first created
  • synchronize: When someone pushes new commits to the PR
  • reopened: When a closed PR is reopened

This is the most common configuration. Every code change triggers review. Developers get feedback quickly.

Variation: Limit to Specific Target Branches

yaml
on:
  pull_request:
    types: [opened, synchronize, reopened]
    branches:
      - main
      - develop

Only review PRs targeting main or develop. Ignore feature branches. This is useful if you have many short-lived branches and want to focus analysis on release-critical code.
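If you want to skip draft PRs entirely (the types filter above still fires for them), a standard job-level if condition on the event payload does it:

```yaml
jobs:
  claude-review:
    if: ${{ !github.event.pull_request.draft }}
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```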

Variation: Analyze Only Certain Paths

yaml
on:
  pull_request:
    paths:
      - "src/**"
      - "lib/**"
      - "!**/*.test.ts"

Only trigger if changes touch src/ or lib/ (and not test files). Skip analysis if only tests, docs, or configuration changed. This saves costs by not analyzing unrelated changes.

Push Triggers

yaml
on:
  push:
    branches:
      - main
      - develop

Review all commits pushed to main or develop (useful for catching issues after merge).

Issue Triggers

yaml
on:
  issues:
    types: [opened, edited]

Analyze issue descriptions for clarity, completeness, or security concerns. Common use case: auto-label bugs, feature requests, etc.
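A minimal triage sketch using the inputs documented earlier—Haiku for cost, include_diff off since issues carry no code diff (the prompt wording is an assumption, tune it to your labels):

```yaml
jobs:
  triage:
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          model: "claude-3-haiku-20250122"
          include_diff: false
          prompt: "Classify this issue as a bug, feature request, or question, and suggest labels."
```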

Scheduled Triggers

yaml
on:
  schedule:
    - cron: "0 2 * * *" # Daily at 2 AM UTC

Periodic analysis of your entire codebase (not just new changes). Useful for detecting technical debt, deprecated patterns, or security regressions. Good for health checks on older code.

Manual Triggers

yaml
on:
  workflow_dispatch:
    inputs:
      target:
        description: "File or directory to analyze"
        required: true
        default: "src/"

Let humans manually trigger the analysis with custom parameters. Great for one-off deep dives or special analyses.
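The dispatch input can then be threaded into the action—here via allowed_file_patterns, assuming target is a directory path as in the example above:

```yaml
jobs:
  manual_analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          allowed_file_patterns: "${{ inputs.target }}**"
          prompt: "Do a focused deep review of ${{ inputs.target }}."
```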

Input and Output Parameters in Detail

Understanding Inputs

Inputs are how you configure the action's behavior. They come from the workflow YAML with: block.

yaml
- uses: anthropics/claude-code-action@v1
  with:
    api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    model: "claude-3-5-sonnet-20241022"
    analysis_type: "security"
    fail_on_findings: true

All inputs are strings in YAML (booleans like true are converted to strings). The action parses them into the correct type.

Understanding Outputs

Outputs are data returned by the action that you can use in subsequent steps. You assign an id to the step, then reference its outputs.

yaml
- uses: anthropics/claude-code-action@v1
  id: claude_review
  with:
    api_key: ${{ secrets.ANTHROPIC_API_KEY }}
 
- name: Check Results
  run: |
    echo "Review Status: ${{ steps.claude_review.outputs.status }}"
    echo "Issues Found: ${{ steps.claude_review.outputs.issue_count }}"
    echo "Severity: ${{ steps.claude_review.outputs.max_severity }}"

Available Outputs:

Output            Type     Description
status            String   success, warning, or failure
issue_count       Integer  Total issues found
error_count       Integer  Number of errors (highest severity)
warning_count     Integer  Number of warnings
note_count        Integer  Number of notes (lowest severity)
max_severity      String   Highest severity level found
summary           String   Plain-text summary of findings
json_results      String   JSON-formatted detailed results
analysis_time_ms  Integer  How long the API call took (milliseconds)
tokens_used       Integer  Approximate tokens consumed
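If you post-process json_results, note that its exact schema depends on the action version—the findings, severity, and message fields below are assumptions, so inspect the real output before relying on them. Passing the output through an environment variable avoids shell-injection issues with untrusted content:

```yaml
- name: Extract high-severity findings
  env:
    RESULTS: ${{ steps.claude_review.outputs.json_results }}
  run: |
    printf '%s' "$RESULTS" > results.json
    # field names below are assumed, not guaranteed by the action
    jq -r '.findings[]? | select(.severity == "error") | .message' results.json
```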

Practical: Using Outputs for Conditional Logic

yaml
- uses: anthropics/claude-code-action@v1
  id: review
  with:
    api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    fail_on_findings: false # Don't fail automatically
 
- name: Notify if Critical Issues Found
  if: steps.review.outputs.max_severity == 'error'
  run: |
    echo "Critical issues detected!"
    # Send Slack notification, create issue, etc.
 
- name: Auto-Approve if No Issues
  if: steps.review.outputs.status == 'success'
  uses: actions/github-script@v7
  with:
    script: |
      github.rest.pulls.createReview({
        owner: context.repo.owner,
        repo: context.repo.repo,
        pull_number: context.issue.number,
        event: 'APPROVE'
      })

This workflow conditionally approves PRs with zero findings—a powerful automation pattern. Developers get fast feedback on clean code.

Version Pinning and Update Strategies

Using @v1 in anthropics/claude-code-action@v1 pins to the major version. This is safe—breaking changes are excluded—while bug fixes and minor features still flow in automatically.

Pinning Strategies

Major version (default, safest for stability):

yaml
uses: anthropics/claude-code-action@v1

Gets all updates in v1.x.x (bug fixes, minor features). Breaking changes only in v2.

Minor version (balanced):

yaml
uses: anthropics/claude-code-action@v1.2

Gets patch updates (v1.2.x) but not v1.3+. More stable than @v1 but still gets important fixes.

Specific version (strictest):

yaml
uses: anthropics/claude-code-action@v1.2.3

Exactly this version forever. No automatic updates (might miss security fixes).
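Tags can be re-pointed by maintainers, so for a truly immutable reference you can also pin to a full commit SHA (the SHA below is a placeholder, not a real release):

```yaml
uses: anthropics/claude-code-action@0de1f2a3b4c5d6e7f8090a1b2c3d4e5f60718293  # placeholder SHA
```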

Latest (most aggressive, use with caution):

yaml
uses: anthropics/claude-code-action@latest

Always the newest version, even across major upgrades. Risky but guarantees the newest features. Note that @latest is not a special keyword—GitHub resolves it as an ordinary ref, so it only works if the maintainers publish and move a latest tag.

For production workflows, use major version pinning (@v1):

yaml
uses: anthropics/claude-code-action@v1
with:
  api_key: ${{ secrets.ANTHROPIC_API_KEY }}

This gives you:

  • Automatic critical bug fixes and patches
  • No surprise breaking changes (those only happen in v2+)
  • Predictable behavior across team and time

Handling Breaking Changes

When claude-code-action@v2 launches, you have options:

  1. Pin to v1 indefinitely (risky if v1 goes unsupported)
  2. Create a test PR using @v2, validate it works, then upgrade production
  3. Run both versions in parallel temporarily to validate compatibility
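Option 3 can be sketched as two steps in one job—v2's step is allowed to fail while you compare results:

```yaml
steps:
  - uses: anthropics/claude-code-action@v1
    with:
      api_key: ${{ secrets.ANTHROPIC_API_KEY }}
  - uses: anthropics/claude-code-action@v2
    continue-on-error: true  # v2 failures don't block while you validate
    with:
      api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```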

GitHub provides a dependency update tool (Dependabot) that can automate this:

yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: github-actions
    directory: "/"
    schedule:
      interval: weekly
    reviewers:
      - "your-github-username"

Dependabot opens PRs when new action versions are available. You review, test, and merge.

Real-World Workflow Examples

Let's build three complete, production-ready workflows.

Example 1: Security-Focused PR Review

yaml
name: Security Review
on:
  pull_request:
    types: [opened, synchronize]
    paths:
      - "src/**"
      - "lib/**"
 
jobs:
  security_scan:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      checks: write
    steps:
      - uses: anthropics/claude-code-action@v1
        id: security
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          model: "claude-3-5-sonnet-20241022"
          analysis_type: "security"
          fail_on_findings: true
          fail_severity: "error" # Only fail on errors, not warnings
          output_format: "check_run"
          github_token: ${{ github.token }}
 
      - name: Report Results
        if: always()
        run: |
          echo "Security Analysis Complete"
          echo "Status: ${{ steps.security.outputs.status }}"
          echo "Errors: ${{ steps.security.outputs.error_count }}"
          echo "Warnings: ${{ steps.security.outputs.warning_count }}"
 
      - name: Notify on Findings
        if: steps.security.outputs.status != 'success'
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: '⚠️ Claude Code security review found issues. Review the check run above.'
            })

This workflow:

  • Runs on PR changes to source code
  • Uses Sonnet for balanced speed/accuracy
  • Focuses on security (OWASP, hardcoded secrets, etc.)
  • Fails the check if errors are found (blocks merge)
  • Posts a GitHub check run (visible in the Checks tab)
  • Leaves a comment warning the developer
  • Runs only on production code changes (ignores other paths)

Example 2: Multi-Stage Review Pipeline

yaml
name: Comprehensive Code Review
on:
  pull_request:
    types: [opened, synchronize, reopened]
 
jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      checks: write
    strategy:
      matrix:
        analysis_type: [review, security, performance]
    steps:
      - uses: anthropics/claude-code-action@v1
        id: claude
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          model: "claude-3-5-sonnet-20241022"
          analysis_type: ${{ matrix.analysis_type }}
          output_format: "summary,raw_json" # raw_json produces the file archived below
          post_comments: true
          github_token: ${{ github.token }}
 
      - name: Archive Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: claude-review-${{ matrix.analysis_type }}
          path: claude-results.json
          retention-days: 90

This workflow:

  • Runs three analyses in parallel (review, security, performance)
  • Posts summary comments (less noise than inline comments)
  • Archives all results as artifacts (for compliance/audit)
  • Completes in ~3-5 minutes total
  • Uses matrix strategy to run multiple analyses efficiently

Example 3: Scheduled Deep Codebase Analysis

yaml
name: Weekly Codebase Analysis
on:
  schedule:
    - cron: "0 2 * * 0" # Sunday at 2 AM UTC
 
jobs:
  deep_analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
 
      - uses: anthropics/claude-code-action@v1
        id: analysis
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          model: "claude-3-opus-20250219" # Use Opus for deep analysis
          prompt: |
            Analyze this entire codebase for:
            1. Technical debt and refactoring opportunities
            2. Deprecated patterns or outdated dependencies
            3. Consistency issues across modules
            4. Documentation gaps
            5. Test coverage problems
 
            Prioritize actionable, high-impact findings.
          max_tokens: 3000
          context_size_limit: 50000 # Analyze more context
          output_format: "raw_json"
          post_comments: false # Don't spam repo with comments
 
      - name: Create Issue for Findings
        if: steps.analysis.outputs.status != 'success'
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: '🔍 Weekly Codebase Analysis Results',
              body: `${{ steps.analysis.outputs.summary }}\n\nRun: https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}`,
              labels: ['technical-debt']
            })
 
      - name: Slack Notification
        uses: slackapi/slack-github-action@v1
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}
          SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
        with:
          payload: |
            {
              "text": "Weekly codebase analysis complete",
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "*Codebase Analysis Results*\nStatus: ${{ steps.analysis.outputs.status }}\nIssues: ${{ steps.analysis.outputs.issue_count }}\nTime: ${{ steps.analysis.outputs.analysis_time_ms }}ms"
                  }
                }
              ]
            }

This workflow:

  • Runs weekly on Sunday at 2 AM UTC (off-peak hours)
  • Uses Opus for thorough, in-depth analysis
  • Analyzes a much larger slice of the codebase (context_size_limit: 50000)
  • Creates a GitHub issue summarizing findings (searchable, tracked)
  • Posts to Slack so your team sees results immediately
  • Doesn't spam PRs (post_comments: false)
  • Builds a week-over-week history through the created issues
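
If you also want to run the same analysis on demand (say, before a release), add a `workflow_dispatch` trigger alongside the schedule — standard GitHub Actions syntax:

yaml
on:
  schedule:
    - cron: "0 2 * * 0" # Sunday at 2 AM UTC
  workflow_dispatch: # Also allow manual runs from the Actions tab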

Common Pitfalls and How to Avoid Them

Pitfall 1: Overly Large Contexts

Problem: You send 50KB of code to Claude and get generic feedback.

Solution: Use context_size_limit to focus analysis, or break reviews into smaller chunks per file.

yaml
with:
  context_size_limit: 5000
  allowed_file_patterns: "src/**/*.ts"

Smaller contexts lead to more focused, actionable analysis.
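
One way to break a large review into chunks is a matrix over source directories, so each job sends Claude a focused slice. A sketch assuming the same inputs as above (the directory globs are placeholders for your own layout):

yaml
jobs:
  review:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        scope: ["src/api/**", "src/ui/**", "src/db/**"]
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          context_size_limit: 5000
          allowed_file_patterns: ${{ matrix.scope }}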

Pitfall 2: Unclear Prompts

Problem: Custom prompt is vague ("review this code"), and the action returns unhelpful feedback.

Solution: Be specific about what you care about and why.

yaml
with:
  prompt: |
    Review for these concerns only:
    1. Database queries (N+1, missing indexes)
    2. API rate limit handling
    3. Error handling for network failures
 
    Ignore style, naming, and comments.

Specific prompts yield specific, useful feedback.

Pitfall 3: Not Handling Rate Limits

Problem: Running the action on every commit leads to rate limit errors.

Solution: Batch analysis, use caching, or add retry_on_failure: true.

yaml
with:
  retry_on_failure: true
  cache_results: true

Or run on less-frequent events:

yaml
on:
  pull_request:
    types: [opened, synchronize] # PR events only, not every push to every branch
  schedule:
    - cron: "0 2 * * *" # Once daily
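
You can also cancel superseded runs with GitHub's built-in concurrency groups, so a burst of pushes to the same PR only spends API calls on the latest commit:

yaml
concurrency:
  group: claude-review-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true # Abort the older run when a new commit arrives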

Pitfall 4: Forgetting Output Configuration

Problem: You run the action but don't set output_format or post_comments, so results are lost.

Solution: Explicitly configure outputs.

yaml
with:
  output_format: "comments"
  post_comments: true
  github_token: ${{ github.token }}

Or capture results for post-processing:

yaml
- uses: anthropics/claude-code-action@v1
  id: review
  # ... other config ...
 
- name: Process Results
  run: echo '${{ steps.review.outputs.json_results }}' | jq .

Pitfall 5: Using the Wrong Model

Problem: Using Haiku for complex architectural review and missing real issues.

Solution: Match model to complexity.

  Task                                        Model
  Commit message validation, quick triage     Haiku
  Standard PR review, most common case        Sonnet
  Architectural review, security deep-dive    Opus

Choose based on what you're analyzing and how much you care about depth.
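
Model choice can even be made per-PR with a GitHub Actions expression — for example, escalating to a stronger model when a PR carries a security label (the label name and model IDs here are illustrative):

yaml
with:
  api_key: ${{ secrets.ANTHROPIC_API_KEY }}
  model: ${{ contains(github.event.pull_request.labels.*.name, 'security') && 'claude-3-opus-20250219' || 'claude-3-5-sonnet-20241022' }}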

Cost Optimization Tips

The claude-code-action incurs API costs. Here's how to optimize without sacrificing quality.

1. Use Haiku for High-Volume Tasks

If you're reviewing 50 PRs a day, Haiku is 3x cheaper than Sonnet:

yaml
with:
  model: "claude-3-haiku-20250122"
  max_tokens: 1024 # Shorter output for quick triage

2. Cache Results

Enable caching to avoid re-analyzing identical code:

yaml
with:
  cache_results: true

3. Reduce Token Usage

Shorter prompts, smaller context, lower max_tokens = lower cost:

yaml
with:
  max_tokens: 1024 # Instead of default 2048
  context_size_limit: 3000 # Truncate large changes
  allowed_file_patterns: "src/**" # Skip vendor code

4. Run Less Frequently

Instead of every push, run on pull_request only:

yaml
on:
  pull_request:
    types: [opened, synchronize]

5. Conditional Execution

Only run on certain branches or file changes:

yaml
on:
  pull_request:
    branches: [main, develop]
    paths:
      - "src/**"
      - "!**/*.test.ts"

This skips analysis if only test files changed.
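
Putting the tips together, a cost-conscious configuration might look like this — a sketch combining the options above, not a one-size-fits-all recommendation:

yaml
on:
  pull_request:
    types: [opened, synchronize]
    branches: [main]
    paths:
      - "src/**"
      - "!**/*.test.ts"
 
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          model: "claude-3-haiku-20250122" # Cheapest option for high volume
          max_tokens: 1024
          context_size_limit: 3000
          cache_results: true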

Troubleshooting

Action Fails with "Invalid API Key"

Cause: The API key wasn't passed or is malformed.

Fix: Verify the secret is set correctly in GitHub Settings → Secrets.

yaml
with:
  api_key: ${{ secrets.ANTHROPIC_API_KEY }} # Exact name matters
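
A guard step can fail fast with a clear message when the secret is missing — in GitHub Actions, an unset secret resolves to an empty string:

yaml
- name: Check API key is configured
  run: |
    if [ -z "${{ secrets.ANTHROPIC_API_KEY }}" ]; then
      echo "::error::ANTHROPIC_API_KEY secret is not set"
      exit 1
    fi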

Action Times Out

Cause: The API response is slow, the diff is very large, or the configured timeout is too short.

Fix: Increase timeout or use a faster model.

yaml
with:
  timeout_seconds: 600
  model: "claude-3-haiku-20250122"
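
You can also bound the step with GitHub's built-in timeout-minutes, so a hung API call fails the step cleanly instead of consuming the job's six-hour default:

yaml
- uses: anthropics/claude-code-action@v1
  timeout-minutes: 10 # Kill the step if it runs longer than 10 minutes
  with:
    api_key: ${{ secrets.ANTHROPIC_API_KEY }}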

No Comments or Results Posted

Cause: post_comments: false or output_format is wrong, or permissions are missing.

Fix: Verify settings and GitHub token permissions.

yaml
jobs:
  review:
    permissions:
      pull-requests: write
      checks: write
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          post_comments: true
          output_format: "comments"
          github_token: ${{ github.token }}

"Workflow does not have permission to write to repository"

Cause: Workflow permissions are too restrictive.

Fix: Add explicit permissions at the job level.

yaml
jobs:
  review:
    permissions:
      pull-requests: write
      issues: write
      checks: write

Large PRs Are Truncated

Cause: context_size_limit is too low for the PR size.

Fix: Increase the limit or apply more restrictive file filters.

yaml
with:
  context_size_limit: 10000
  allowed_file_patterns: "src/**" # Focus on source, skip tests/docs

Summary and Best Practices

You now have a complete playbook for the claude-code-action. Here's the checklist:

  1. Install: Add the action to .github/workflows/
  2. Authenticate: Store API key in GitHub Secrets
  3. Configure: Set api_key, model, analysis_type, and outputs
  4. Trigger: Define when it runs (PR, push, schedule, manual)
  5. Handle Results: Use outputs for conditional logic, integrate with tools
  6. Monitor: Archive results, post to Slack, create issues
  7. Optimize: Cache, use appropriate model, reduce token usage
  8. Version: Pin to major version (e.g., @v1) and use Dependabot
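
The checklist above condenses into a minimal starter workflow — a sketch using the inputs described in this guide:

yaml
name: Claude PR Review
on:
  pull_request:
    types: [opened, synchronize]
 
jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          model: "claude-3-5-sonnet-20241022"
          analysis_type: review
          output_format: "comments"
          post_comments: true
          github_token: ${{ github.token }}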

Golden Rules:

  • Start simple (security review or standard PR review)
  • Customize your prompt before deploying widely
  • Monitor costs—Haiku is your friend for high volume
  • Always use GitHub Secrets for API keys
  • Test workflow changes in a test branch before production
  • Use Dependabot to stay up-to-date
  • Review and iterate on your prompts based on results

The claude-code-action is a force multiplier for engineering teams. Use it to automate tedious reviews, catch bugs before they ship, and let your team focus on what humans do best: creative problem-solving.


-iNet
