February 18, 2026
Claude Development

Building an Accessibility Audit Skill

If you've ever shipped a web application and realized after launch that 15% of your users can't navigate it, you know the feeling—that stomach-dropping moment when accessibility isn't a feature, it's a liability. The good news? We can automate a huge chunk of accessibility testing before code even hits production. Today, we're building a reusable Claude Code accessibility audit skill that maps WCAG 2.1 Level AA requirements into actionable, automated checks.

You're probably thinking: "Isn't accessibility testing manual?" Partially true. But the tedious parts—color contrast validation, semantic HTML checks, ARIA attribute audits, keyboard navigation scanning—are perfect for automation. This isn't about replacing human testers. It's about automating the grunt work so your team can focus on the edge cases and nuanced interactions that machines can't catch.

Table of Contents
  1. Why an Accessibility Skill? The Business Case
  2. Why This Matters: The Hidden Cost of Inaccessibility
  3. Understanding Accessibility as a Business Problem
  4. The Architecture: WCAG 2.1 to Code Checks
  5. Building the Core Skill: SKILL.md Blueprint
  6. Understanding WCAG Compliance Levels
  7. The Cascade of Accessibility: From Perception to Robustness
  8. Validator 1: Color Contrast Auditor
  9. The Science Behind Accessible Color Contrast
  10. Validator 2: Semantic HTML Structure Checker
  11. Validator 3: ARIA Attribute Rules Engine
  12. Validator 4: Keyboard Navigation Auditor
  13. Orchestrating the Full Audit: The Skill Runner
  14. Why This Matters: The Hidden Productivity Impact
  15. Common Pitfalls When Building Accessibility Skills
  16. Integrating Accessibility Testing Into Your Development Workflow
  17. Real-World Scenario: Accessibility in Your CI/CD
  18. Under the Hood: How Automated Contrast Checking Actually Works
  19. Advanced: Extending the Skill with Custom Rules
  20. Troubleshooting: When Audits Report False Positives
  21. Different Accessibility Requirements for Different Contexts
  22. Team Adoption: Getting Your Team to Care About Accessibility
  23. Real User Impact: Accessibility Through Their Eyes
  24. Beyond Automation: Testing With Real Users
  25. Scaling Accessibility Testing as Your Product Grows
  26. Creating an Accessibility Testing Strategy
  27. Governance and Compliance: Making Accessibility an Organizational Commitment
  28. The Future: AI-Assisted Accessibility Testing

Why an Accessibility Skill? The Business Case

Web accessibility isn't just ethical (though it is). It's practical:

  • Legal: WCAG 2.1 Level AA compliance is increasingly enforceable (ADA, Section 508, EU regulations). Ignoring accessibility isn't just bad practice—it's increasingly risky legally. Major organizations have faced lawsuits costing millions in settlements. The cost of automated compliance checking is trivial compared to the potential legal exposure.
  • Market: 1 in 4 adults in developed countries have some form of disability. That's your market. Excluding them means losing revenue and market share. Every inaccessible feature is leaving money on the table.
  • SEO: Accessible markup (semantic HTML, proper heading hierarchy) is the same markup search engines love. Better accessibility means better search rankings. Google's algorithms reward accessibility because it improves experience for all users.
  • Technical Debt: Fixing accessibility after launch costs 10x more than building it in. Every inaccessible feature you ship compounds the problem. What takes 5 minutes to fix during development takes an hour after launch.

An automation skill catches 80% of accessibility issues early—the color contrast failures, missing ARIA labels, broken keyboard nav, semantic structure problems. The remaining 20% (edge cases, nuanced usability) stays with your QA team and real users with disabilities. This division of labor is critical. Automation handles breadth. Humans handle depth.

Why This Matters: The Hidden Cost of Inaccessibility

Let's think about what happens when your site isn't accessible. A screen reader user tries to navigate your checkout flow. They can't find the payment button because it doesn't have an accessible name. They give up. The form doesn't work for them, so they leave. You lose a customer. You don't know why—the user just bounced.

Or a colorblind user tries to understand a chart on your dashboard. It uses red/green distinction to show profit/loss. They can't tell the difference. They ask their manager for help. They're frustrated. They look for a competitor's tool with clearer indicators.

Or a keyboard-only user (maybe someone with a motor disability, maybe a power user) finds your interactive map is impossible to navigate without a mouse. They skip the map entirely, reducing feature value.

These aren't edge cases. Together, these users are roughly a quarter of your potential market. Every inaccessible feature multiplies the effect. But the good news: most accessibility issues are fixable. And most are detectable automatically before users ever encounter them.

Understanding Accessibility as a Business Problem

Accessibility is often framed as an ethical issue—and it is. But it's also a business problem. When you design an inaccessible interface, you're explicitly excluding customers. This affects your market size, your revenue, and your risk profile.

The numbers are stark. The World Health Organization reports that 16% of the global population experiences disability. In developed countries, it's closer to 25%. That's one in four people. If your interface doesn't work for one in four people, you're leaving massive money on the table. Every inaccessible interface is a deliberate decision to exclude a substantial portion of your potential market.

The legal risk is also increasing. WCAG 2.1 compliance is increasingly mandated by regulation. The ADA (Americans with Disabilities Act) applies in the US. The EU has the European Accessibility Act. Many countries have similar laws. Organizations are being sued for inaccessible websites, and the settlements are expensive. Companies have paid millions in damages for accessibility violations. The cost of fixing accessibility problems after launch is astronomical compared to the cost of building it in from the start.

Beyond legal and financial concerns, there's the operational benefit. Accessibility features help everyone. Keyboard navigation helps power users. Clear labels help people in noisy environments. Logical structure helps anyone trying to navigate quickly. Captions help people in quiet offices trying not to disturb others. When you build for accessibility, you're not just helping people with disabilities—you're improving the interface for everyone.

The automation skill we're building addresses this gap. It catches obvious problems automatically. It gives developers immediate feedback. It prevents accessibility problems from accumulating. It transforms accessibility from something you worry about after launch into something that's validated continuously during development.

The Architecture: WCAG 2.1 to Code Checks

WCAG 2.1 Level AA has four principles: Perceivable, Operable, Understandable, Robust (POUR). We're translating each into automated validators that actually work:

  • Perceivable — Key requirements: images have alt text, contrast is sufficient, content is not solely color-dependent. Automation targets: color contrast, alt text, semantic structure.
  • Operable — Key requirements: keyboard navigation works, no content is trapped, focus is visible. Automation targets: keyboard event listeners, focus order, interactive elements.
  • Understandable — Key requirements: labels are clear, instructions are explicit, content is scannable. Automation targets: form labels, heading hierarchy, list structure.
  • Robust — Key requirements: markup is valid, ARIA is used correctly, no conflicting roles. Automation targets: HTML semantics, ARIA attribute rules, role conflicts.

Our skill will focus on automated checks: the low-hanging fruit that a machine can validate with 95%+ confidence. This pragmatic approach means we're not trying to automate everything. We're automating what machines are good at and leaving the rest to humans.

Building the Core Skill: SKILL.md Blueprint

Every Claude Code skill needs a manifest that defines inputs, outputs, and validation rules. Here's the accessibility audit skill blueprint:

markdown
# accessibility-audit Skill
 
## Purpose
 
Automated WCAG 2.1 Level AA accessibility audit for web components and full pages.
 
## Inputs
 
- **target**: URL, HTML file path, or DOM selector
- **scope**: 'full-page' | 'component' | 'section'
- **strict**: boolean (fail the audit on any error; default = report findings without failing)
 
## Outputs
 
- **report**: JSON audit report with findings by category
- **fixes**: Remediation code snippets for each finding
- **score**: Accessibility score (0-100)
 
## Validators
 
1. **HTML Semantics**: Validates heading hierarchy, landmark structure
2. **Color Contrast**: Checks WCAG AA & AAA contrast ratios
3. **ARIA Attributes**: Validates required and correct ARIA usage
4. **Keyboard Navigation**: Tests tab order, focus management
5. **Interactive Elements**: Ensures buttons/links are properly labeled
6. **Form Accessibility**: Validates form labels, error messages
7. **Media Accessibility**: Checks captions, transcripts, descriptions
 
## Failure Conditions
 
- Any Level AA violation in 'strict' mode
- More than 10 warnings in 'component' scope
- Color contrast ratio <4.5:1 for normal text
- Missing alt text on images (outside decorative)
 
## Exit Gate
 
Report must include:
 
1. Findings count by severity
2. WCAG 2.1 mapping (which guidelines violated)
3. Remediation code for >50% of findings

This manifest defines what the skill does, what it takes in, and how we know it succeeded. Now let's build the actual validators. Each validator is independent and can be tested in isolation, which makes debugging easier.
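Because every validator returns findings with the same shape, testing one in isolation mostly means asserting on that shape. A tiny schema check (illustrative; `isValidFinding` is not part of the manifest, and the field names simply mirror the validators below) is enough to unit-test each validator without the others:

```javascript
// Illustrative helper: validates the common finding shape that all four
// validators in this skill emit (issue code, severity, remediation text).
function isValidFinding(f) {
  return (
    typeof f === "object" &&
    f !== null &&
    typeof f.issue === "string" &&
    ["error", "warning"].includes(f.severity) &&
    typeof f.remediation === "string"
  );
}

// Example: a finding as the contrast validator would emit it
const sample = {
  issue: "INSUFFICIENT_CONTRAST",
  severity: "error",
  remediation: "Increase contrast to 4.5:1",
};
console.log(isValidFinding(sample)); // true
```

A check like this in each validator's test suite catches contract drift early, before the runner tries to aggregate incompatible findings.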

Understanding WCAG Compliance Levels

Before diving into specific validators, let's understand the WCAG compliance landscape. WCAG has three levels: A, AA, and AAA. Level A is the bare minimum—if you don't meet Level A, your site is basically unusable for anyone with disabilities. Level AA is the standard most organizations aim for—it catches the major accessibility issues and is achievable with reasonable effort. Level AAA is the gold standard—it goes beyond what's reasonable for most content but is necessary for certain types of content like education or healthcare.

We're focusing on Level AA because it's the practical sweet spot. It's mandated by many regulations (ADA in the US, Section 508, EU regulations). It's what large organizations expect when they audit vendors. It's achievable without completely redesigning your product.

Level AA doesn't require perfection. It doesn't require testing with real assistive technology users. It doesn't require solving every edge case. It requires systematic checking of known issues and fixing them. This is where automation shines—automation can systematically check for these known issues at scale.

The Cascade of Accessibility: From Perception to Robustness

WCAG's four principles form a cascade. Perceivable comes first—if someone can't perceive your content, nothing else matters. Colors must have sufficient contrast so they can see text. Images must have alt text so they can understand them. Operable comes next—they have to be able to interact with it. Keyboard navigation must work. Focus must be visible. Understandable follows—they have to grasp what it means. Form labels must be clear. Error messages must explain what went wrong. Robust finishes—they have to be able to use the content with their preferred technology. Markup must be valid so assistive technology can parse it.

This cascade means we can organize our validators by principle. Start with perception (color contrast, alt text). Move to operation (keyboard navigation, focus). Then understanding (labels, hierarchy). Finally robustness (HTML semantics, ARIA correctness). A problem at each level compounds. Bad color contrast prevents perception. No keyboard support prevents operation. Missing labels prevent understanding. Invalid ARIA prevents assistive tech from helping.
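WCAG's numbering encodes this cascade directly: the leading digit of each success criterion identifies its principle (1 = Perceivable, 2 = Operable, 3 = Understandable, 4 = Robust). A one-line helper (illustrative names) can therefore group findings by principle straight from their criterion strings:

```javascript
// Map a WCAG 2.1 success criterion like "1.4.3" to its POUR principle.
// The first digit of the criterion number identifies the principle.
const POUR = { 1: "Perceivable", 2: "Operable", 3: "Understandable", 4: "Robust" };

function pourPrinciple(criterion) {
  return POUR[criterion.split(".")[0]] ?? "Unknown";
}

console.log(pourPrinciple("1.4.3")); // "Perceivable" (Contrast - Minimum)
console.log(pourPrinciple("2.4.7")); // "Operable" (Focus Visible)
```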

Validator 1: Color Contrast Auditor

Let's start with the most common accessibility failure: color contrast. WCAG 2.1 Level AA requires a contrast ratio of at least 4.5:1 for normal text and 3:1 for large text (at least 18pt/24px, or 14pt bold).

Why does contrast matter? Screen reader users don't care about color. But users with low vision, colorblindness, or astigmatism depend on contrast to read. Light gray text on a white background might look fine to someone with normal vision but be nearly unreadable for someone with low vision or astigmatism. The contrast requirement fixes this.

The Science Behind Accessible Color Contrast

WCAG's contrast requirements aren't arbitrary. They're based on research about human vision and the experience of people with visual impairments. The 4.5:1 ratio for normal text isn't "nice to have"—it's the minimum threshold where most people with low vision can read comfortably. Below that threshold, you're excluding a significant population.

Consider the different types of vision impairments that affect contrast perception. Astigmatism blurs vision in particular directions. Presbyopia makes it harder to focus as you age. Cataracts scatter light, reducing contrast. Color blindness means you can't distinguish colors, so you absolutely depend on luminance (brightness) contrast. When you design for sufficient contrast, you're accommodating all of these conditions.

The automated check for contrast is crucial because it's something no human can judge accurately by eye. You think your gray-on-white has good contrast. You're wrong. The automated check calculates it mathematically with the WCAG formula, applies the same threshold every time, and catches problems consistently.

Here's the automated check:

javascript
// color-contrast-auditor.js
// Automated WCAG 2.1 color contrast validation
 
async function auditColorContrast(selector = "body") {
  const findings = [];
 
  const elements = document.querySelectorAll(selector + " *");
 
  for (const el of elements) {
    // Skip decorative elements (aria-hidden, hidden, display:none)
    if (el.getAttribute("aria-hidden") === "true" || el.offsetParent === null)
      continue;
 
    const computedStyle = window.getComputedStyle(el);
    const textContent = el.textContent?.trim();
 
    // Only audit elements with actual text
    if (!textContent || textContent.length < 1) continue;
 
    const bgColor = computedStyle.backgroundColor;
    const fgColor = computedStyle.color;

    // Skip (semi-)transparent backgrounds: the effective background comes
    // from an ancestor, so a ratio computed here would be misleading
    const alphaMatch = bgColor.match(/^rgba\([^)]*,\s*([\d.]+)\)$/);
    if (alphaMatch && parseFloat(alphaMatch[1]) < 1) continue;

    // Parse RGB values and calculate contrast ratio
    const contrast = calculateContrastRatio(fgColor, bgColor);

    // WCAG "large text" is at least 18pt (24px), or 14pt (~18.7px) bold
    const fontSize = parseFontSize(computedStyle.fontSize);
    const isBold = parseInt(computedStyle.fontWeight, 10) >= 700;
    const isLargeText = fontSize >= 24 || (isBold && fontSize >= 18.66);
    const minContrast = isLargeText ? 3 : 4.5;
 
    if (contrast < minContrast) {
      findings.push({
        element: el.outerHTML.substring(0, 100),
        selector: getCSSSelector(el),
        issue: "INSUFFICIENT_CONTRAST",
        contrast: contrast.toFixed(2),
        required: minContrast,
        wcagLevel: contrast >= 3 ? "AA-large-text" : "FAILED",
        severity: "error",
        remediation: `Increase contrast to ${minContrast}:1. Current: ${contrast.toFixed(2)}:1`,
      });
    }
  }
 
  return findings;
}
 
function calculateContrastRatio(fg, bg) {
  const fgRgb = parseColor(fg);
  const bgRgb = parseColor(bg);
 
  const fgLum = getRelativeLuminance(fgRgb);
  const bgLum = getRelativeLuminance(bgRgb);
 
  const lighter = Math.max(fgLum, bgLum);
  const darker = Math.min(fgLum, bgLum);
 
  return (lighter + 0.05) / (darker + 0.05);
}
 
function getRelativeLuminance(rgb) {
  // WCAG 2.1 relative luminance formula
  const [r, g, b] = rgb.map((val) => {
    val = val / 255;
    return val <= 0.03928 ? val / 12.92 : Math.pow((val + 0.055) / 1.055, 2.4);
  });
 
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}
 
function parseColor(colorString) {
  const canvas = document.createElement("canvas");
  canvas.width = canvas.height = 1;
  const ctx = canvas.getContext("2d");
  ctx.fillStyle = colorString;
  ctx.fillRect(0, 0, 1, 1);
  const imageData = ctx.getImageData(0, 0, 1, 1).data;
  return [imageData[0], imageData[1], imageData[2]];
}
 
function getCSSSelector(el) {
  if (el.id) return `#${el.id}`;
  if (el.className) return `.${el.className.split(" ").join(".")}`;
  return el.tagName.toLowerCase();
}
 
function parseFontSize(sizeStr) {
  // Computed font-size is always reported in px (possibly fractional)
  return parseFloat(sizeStr);
}

This validator captures one of the top accessibility failures. It's automated, deterministic, and catches issues before QA. The key: it maps directly to WCAG 2.1 success criterion 1.4.3 (Contrast - Minimum). When you find a color contrast issue, you know exactly which WCAG criterion it violates. This creates accountability and clarity—engineers understand not just that they failed, but where in the standard they failed.
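Because `parseColor` depends on a canvas, the math itself is awkward to unit-test outside a browser. A pure-function variant of the same WCAG formula (same luminance coefficients as above; the function names here are illustrative) runs anywhere and makes the thresholds easy to verify:

```javascript
// Pure WCAG 2.1 contrast math over [r, g, b] arrays (0-255), no DOM required
function luminance([r, g, b]) {
  const [lr, lg, lb] = [r, g, b].map((v) => {
    v /= 255;
    return v <= 0.03928 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * lr + 0.7152 * lg + 0.0722 * lb;
}

function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // ≈ 21, the maximum ratio
```

Mid-gray (`#777`) on white lands just under 4.5:1, which is exactly the kind of "looks fine to me" failure the validator exists to catch.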

Validator 2: Semantic HTML Structure Checker

The second major category: semantic HTML. Search engines and assistive technology rely on proper landmark structure (header, nav, main, footer) and a heading hierarchy with no skipped levels (not h1, then h3, then h5). Here's the automated check:

javascript
// semantic-structure-validator.js
// WCAG 2.1 semantic HTML validation (1.3.1, 2.4.1)
 
async function auditSemanticStructure(selector = "body") {
  const findings = [];
  const container = document.querySelector(selector);
 
  // Check 1: Landmark structure
  const landmarks = {
    header: container.querySelectorAll("header").length,
    nav: container.querySelectorAll("nav").length,
    main: container.querySelectorAll("main").length,
    footer: container.querySelectorAll("footer").length,
  };
 
  if (landmarks.main === 0) {
    findings.push({
      issue: "MISSING_MAIN_LANDMARK",
      severity: "error",
      wcagCriterion: "1.3.1 Info and Relationships",
      message: "Page must have exactly one <main> landmark",
      remediation: "Wrap main content in <main> element",
      found: "none",
      required: 1,
    });
  }
 
  if (landmarks.nav === 0) {
    findings.push({
      issue: "MISSING_NAV_LANDMARK",
      severity: "warning",
      wcagCriterion: "2.4.1 Bypass Blocks",
      message: "Navigation should be in <nav> landmark",
      remediation: "Wrap navigation menu in <nav> element",
      found: 0,
      required: "≥1",
    });
  }
 
  // Check 2: Heading hierarchy
  const headings = Array.from(
    container.querySelectorAll("h1, h2, h3, h4, h5, h6"),
  );
  const headingLevels = headings.map((h) => parseInt(h.tagName[1]));
 
  if (headingLevels.length > 0) {
    for (let i = 1; i < headingLevels.length; i++) {
      const jump = headingLevels[i] - headingLevels[i - 1];

      // Flag downward skips like H2 -> H4; moving back up a level is fine
      if (jump > 1) {
        findings.push({
          issue: "HEADING_HIERARCHY_SKIP",
          severity: "warning",
          wcagCriterion: "1.3.1 Info and Relationships",
          message: `Heading jump from H${headingLevels[i - 1]} to H${headingLevels[i]}`,
          element: headings[i].outerHTML.substring(0, 80),
          remediation: `Use H${headingLevels[i - 1] + 1} instead of H${headingLevels[i]}`,
          index: i,
        });
      }
    }
  }
 
  // Check 3: Multiple H1s
  const h1Count = container.querySelectorAll("h1").length;
  if (h1Count !== 1) {
    findings.push({
      issue: "MULTIPLE_H1_TAGS",
      severity: "warning",
      wcagCriterion: "1.3.1 Info and Relationships",
      message: `Found ${h1Count} H1 tags; best practice is exactly 1`,
      found: h1Count,
      required: 1,
      remediation: "Ensure only one H1 per page (the main page title)",
    });
  }
 
  return findings;
}

This captures structural issues that break both accessibility and SEO. Notice how we return structured findings with WCAG criterion mappings—that's crucial for developers who need to know why something failed, not just that it failed. The mappings connect code issues to standards requirements.
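The heading-hierarchy check is pure logic over the sequence of levels, so it can be factored out and unit-tested without a DOM. A sketch of that refactor (illustrative; here any downward jump of more than one level is flagged):

```javascript
// Given heading levels in document order (e.g. [1, 3, 2] for H1, H3, H2),
// return each downward jump of more than one level.
function findHeadingSkips(levels) {
  const skips = [];
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] - levels[i - 1] > 1) {
      skips.push({ from: levels[i - 1], to: levels[i], index: i });
    }
  }
  return skips;
}

console.log(findHeadingSkips([1, 2, 3, 2])); // [] - moving back up is fine
console.log(findHeadingSkips([1, 3]));       // [{ from: 1, to: 3, index: 1 }]
```

The DOM validator then becomes a thin wrapper: query the headings, map them to levels, and hand the array to this function.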

Validator 3: ARIA Attribute Rules Engine

ARIA is powerful but dangerous—misused ARIA breaks accessibility for screen reader users. The third validator enforces ARIA correctness (WCAG 2.1 criteria 1.3.1 and 4.1.2). ARIA validation is subtle because context matters. But we can catch the obvious violations: missing accessible names, role conflicts, focusable hidden elements:

javascript
// aria-validator.js
// WCAG 2.1 ARIA attribute validation
 
async function auditAriaAttributes(selector = "body") {
  const findings = [];
  const container = document.querySelector(selector);
 
  // Check 1: Interactive elements have accessible names
  const interactiveElements = container.querySelectorAll(
    'button, a, input, [role="button"], [role="link"]',
  );
 
  for (const el of interactiveElements) {
    const accessibleName = getAccessibleName(el);
 
    if (!accessibleName || accessibleName.trim() === "") {
      findings.push({
        issue: "MISSING_ACCESSIBLE_NAME",
        severity: "error",
        wcagCriterion: "4.1.2 Name, Role, Value",
        element: el.outerHTML.substring(0, 100),
        selector: getCSSSelector(el),
        message: `Interactive element has no accessible name`,
        remediation: `Add: aria-label="${el.textContent || "label"}"`,
      });
    }
  }
 
  // Check 2: ARIA role conflicts
  const allWithRole = container.querySelectorAll("[role]");
  for (const el of allWithRole) {
    const role = el.getAttribute("role");
    const nativeRole = el.tagName.toLowerCase();
 
    // Check for conflicting roles
    if (CONFLICTING_ROLES[nativeRole]?.includes(role)) {
      findings.push({
        issue: "ARIA_ROLE_CONFLICT",
        severity: "error",
        wcagCriterion: "4.1.2 Name, Role, Value",
        element: el.outerHTML.substring(0, 100),
        message: `Native <${nativeRole}> element shouldn't have role="${role}"`,
        remediation: `Either remove role attribute or use semantic HTML: <${role}>`,
      });
    }
  }
 
  // Check 3: ARIA hidden should hide from keyboard too
  const ariaHidden = container.querySelectorAll('[aria-hidden="true"]');
  for (const el of ariaHidden) {
    // aria-hidden elements should not be keyboard focusable
    if (el.tabIndex >= 0) {
      findings.push({
        issue: "FOCUSABLE_ARIA_HIDDEN",
        severity: "error",
        wcagCriterion: "4.1.2 Name, Role, Value",
        element: el.outerHTML.substring(0, 100),
        message: "Element is aria-hidden but keyboard focusable",
        remediation: `Remove tabindex="${el.tabIndex}" or remove aria-hidden="true"`,
      });
    }
  }
 
  return findings;
}
 
function getAccessibleName(element) {
  // Priority order per ARIA spec:
  // 1. aria-labelledby
  // 2. aria-label
  // 3. Native label (for form inputs)
  // 4. Element textContent
 
  if (element.hasAttribute("aria-labelledby")) {
    const ids = element.getAttribute("aria-labelledby").split(" ");
    return ids.map((id) => document.getElementById(id)?.textContent).join(" ");
  }
 
  if (element.hasAttribute("aria-label")) {
    return element.getAttribute("aria-label");
  }
 
  if (element.tagName === "INPUT") {
    const label = document.querySelector(`label[for="${element.id}"]`);
    if (label) return label.textContent;
  }

  // Image (or icon-only link/button): alt text supplies the accessible name
  const img = element.tagName === "IMG" ? element : element.querySelector("img[alt]");
  const alt = img?.getAttribute("alt");
  if (alt) return alt;

  return element.textContent?.trim();
}
 
const CONFLICTING_ROLES = {
  button: ["link", "navigation"],
  a: ["button"],
  input: ["button", "link"],
};

These are the fundamental errors that make your site unusable for significant portions of your user base. When ARIA is wrong, screen reader users are confused or locked out entirely. The validator catches these before deployment.

Validator 4: Keyboard Navigation Auditor

The fourth category: keyboard navigation. A significant share of users rely on the keyboard alone—people with motor disabilities, screen reader users, and power users among them. Here's an automated check for the structural issues:

javascript
// keyboard-navigation-validator.js
// WCAG 2.1 keyboard accessibility (2.1.1, 2.4.3)
 
async function auditKeyboardNavigation(selector = "body") {
  const findings = [];
  const container = document.querySelector(selector);
 
  // Check 1: Focusable elements have visible focus indicators
  const focusableElements = container.querySelectorAll(
    'a, button, input, select, textarea, [tabindex]:not([tabindex="-1"])',
  );
 
  for (const el of focusableElements) {
    const focusStyles = getComputedStyleForPseudo(el, ":focus");

    // Computed outline is a shorthand string like "rgb(0, 0, 0) none 0px";
    // computed box-shadow is the literal string "none" when absent
    if (
      !focusStyles ||
      (focusStyles.outline.includes("none") && focusStyles.boxShadow === "none")
    ) {
      findings.push({
        issue: "INVISIBLE_FOCUS_INDICATOR",
        severity: "error",
        wcagCriterion: "2.4.7 Focus Visible",
        element: el.outerHTML.substring(0, 100),
        selector: getCSSSelector(el),
        message: "Focusable element has no visible focus indicator",
        remediation: `Add CSS:\n${el.tagName} { outline: 2px solid #4A90E2; outline-offset: 2px; }`,
      });
    }
  }
 
  // Check 2: Logical tab order (tabindex should be 0 or -1 only)
  const tabindexElements = container.querySelectorAll("[tabindex]");

  for (const el of tabindexElements) {
    const tabindex = parseInt(el.getAttribute("tabindex"), 10);

    if (tabindex > 0) {
      findings.push({
        issue: "POSITIVE_TABINDEX",
        severity: "warning",
        wcagCriterion: "2.4.3 Focus Order",
        element: el.outerHTML.substring(0, 100),
        message: `Positive tabindex=${tabindex} breaks logical tab order`,
        remediation:
          'Use tabindex="0" for natural order or tabindex="-1" to remove from tab order',
      });
    }
  }
 
  // Check 3: Keyboard trap detection (simple heuristic)
  const modals = container.querySelectorAll('[role="dialog"], .modal, dialog');
  for (const modal of modals) {
    const focusableInModal = modal.querySelectorAll(
      'a, button, input:not([type="hidden"]), select, textarea',
    );
 
    if (focusableInModal.length === 0) {
      findings.push({
        issue: "KEYBOARD_TRAP_RISK",
        severity: "warning",
        wcagCriterion: "2.1.2 No Keyboard Trap",
        element: modal.outerHTML.substring(0, 100),
        message:
          "Modal/dialog has no focusable elements; keyboard users may be trapped",
        remediation:
          "Ensure modal contains at least one focusable element (button, input, link)",
      });
    }
  }
 
  return findings;
}
 
function getComputedStyleForPseudo(element, pseudo) {
  // Limitation: JavaScript can't directly read :focus styles.
  // Workaround: focus the element, snapshot values, then restore state.
  const originalTabindex = element.getAttribute("tabindex");
  element.setAttribute("tabindex", "0");
  element.focus();

  // Snapshot BEFORE blurring: getComputedStyle returns a live object
  // whose values change as soon as focus is lost
  const styles = window.getComputedStyle(element);
  const snapshot = {
    outline: styles.outline,
    boxShadow: styles.boxShadow,
  };

  element.blur();
  if (originalTabindex === null) element.removeAttribute("tabindex");
  else element.setAttribute("tabindex", originalTabindex);

  return snapshot;
}

Notice the limitation comment: JavaScript can't directly read the styles applied by the :focus pseudo-class, so the helper has to focus each element and snapshot its computed style. This is why automated checks need human review for nuanced issues. But we can detect when focus indicators are completely missing. We can detect keyboard traps. We can catch common structural problems. That's enough to catch 80% of keyboard navigation issues.

Orchestrating the Full Audit: The Skill Runner

Now let's tie all four validators into a single skill that runs the audit, aggregates findings, and generates a report:

javascript
// accessibility-audit-runner.js
// Orchestrates all validators and generates WCAG 2.1 audit report
 
class AccessibilityAuditSkill {
  constructor(options = {}) {
    this.selector = options.selector || "body";
    this.scope = options.scope || "full-page";
    this.strict = options.strict || false;
    this.report = {
      timestamp: new Date().toISOString(),
      scope: this.scope,
      findings: [],
      summary: {},
      score: 0,
    };
  }
 
  async run() {
    console.log(`Starting accessibility audit (scope: ${this.scope})...`);
 
    // Run all validators in parallel
    const [
      contrastFindings,
      structureFindings,
      ariaFindings,
      keyboardFindings,
    ] = await Promise.all([
      auditColorContrast(this.selector),
      auditSemanticStructure(this.selector),
      auditAriaAttributes(this.selector),
      auditKeyboardNavigation(this.selector),
    ]);
 
    // Aggregate findings
    this.report.findings = [
      ...contrastFindings,
      ...structureFindings,
      ...ariaFindings,
      ...keyboardFindings,
    ];
 
    // Summarize by severity
    this.report.summary = {
      total: this.report.findings.length,
      errors: this.report.findings.filter((f) => f.severity === "error").length,
      warnings: this.report.findings.filter((f) => f.severity === "warning")
        .length,
      byCategory: this.groupByCategory(),
    };
 
    // Calculate accessibility score (0-100)
    this.report.score = Math.max(
      0,
      100 -
        (this.report.summary.errors * 10 + this.report.summary.warnings * 2),
    );
 
    // Gate: Check for failures in strict mode
    if (this.strict && this.report.summary.errors > 0) {
      this.report.status = "FAILED";
      throw new Error(
        `Audit failed in strict mode: ${this.report.summary.errors} errors`,
      );
    }
 
    this.report.status = "PASSED";
    return this.report;
  }
 
  groupByCategory() {
    const categories = {};
    for (const finding of this.report.findings) {
      const category = finding.issue.split("_")[0];
      categories[category] = (categories[category] || 0) + 1;
    }
    return categories;
  }
 
  toJSON() {
    return this.report;
  }
}
 
// Usage
const audit = new AccessibilityAuditSkill({
  selector: "body",
  scope: "full-page",
  strict: true,
});
 
audit
  .run()
  .then((report) => console.log(JSON.stringify(report, null, 2)))
  .catch((error) => console.error("Audit failed:", error.message));

This is the orchestrator. It runs all validators, aggregates results, and enforces quality gates. The strict mode is perfect for CI/CD pipelines: fail the build if any Level AA violations are found. This transforms accessibility from "nice to have" to "enforced requirement."
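The scoring and gating policy is pure arithmetic, which makes the CI behavior easy to pin down with tests. A sketch that isolates it (same formula as `run()` above; `computeScore` and `passesGate` are illustrative names, not part of the runner):

```javascript
// Same scoring formula as the runner: each error costs 10 points,
// each warning 2, clamped at 0.
function computeScore(errors, warnings) {
  return Math.max(0, 100 - (errors * 10 + warnings * 2));
}

// Strict-mode gate: any error fails the build; non-strict always passes
function passesGate(summary, strict) {
  return !strict || summary.errors === 0;
}

console.log(computeScore(0, 0));  // 100
console.log(computeScore(5, 10)); // 30
console.log(passesGate({ errors: 1, warnings: 0 }, true)); // false
```

Extracting the policy like this also means the weights (10 and 2) live in one place when your team inevitably wants to tune them.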

Why This Matters: The Hidden Productivity Impact

Here's what a typical audit catches. Imagine a SaaS dashboard that passed visual QA but had zero accessibility testing:

Before Audit:

  • 23 color contrast failures (UI text on colored backgrounds)
  • 3 missing form labels (credit card inputs)
  • 2 div buttons without keyboard handlers
  • 1 modal with no escape key handling
  • Heading hierarchy: H1 → H3 → H2 (broken)

Audit Report Score: 34/100 (failed)

After Remediations:

  • Associate the 3 missing form labels with their inputs (10 minutes)
  • Add aria-label to 2 div buttons (2 minutes)
  • Add keydown listener for Escape key (5 minutes)
  • Fix heading hierarchy (1 minute)
  • Recalculate color contrast, adjust colors (15 minutes)

Total Time: ~30 minutes, New Score: 97/100 (passed)

Without the automation, you'd spend hours manually checking each page. With the skill, you get a checklist, remediations, and a score in seconds. The productivity multiplier is enormous.

Common Pitfalls When Building Accessibility Skills

Building accessibility tools is harder than it looks. Here are pitfalls teams encounter:

False Positives: Your color contrast checker says text fails when it actually passes because you're reading computed colors incorrectly (backgrounds with images, opacity, etc.). Solution: Be conservative. When uncertain, flag it and let humans decide. False positives are better than false negatives.
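
A sketch of what "be conservative" can look like in practice. The `triageContrastCheck` helper and its verdict values are hypothetical, not part of the validators above; `style` stands in for the result of `getComputedStyle(element)`:

```javascript
// Conservative triage sketch: only report pass/fail when the computed colors
// can be trusted; otherwise punt to a human reviewer. `style` is a plain
// object standing in for getComputedStyle(element); the verdict values are
// illustrative, not part of any standard.
function triageContrastCheck(style, contrastRatio, threshold = 4.5) {
  const hasBackgroundImage =
    style.backgroundImage && style.backgroundImage !== "none";
  const isTransparent =
    style.backgroundColor === "transparent" ||
    style.backgroundColor === "rgba(0, 0, 0, 0)";
  const isTranslucent = parseFloat(style.opacity ?? "1") < 1;

  // Any of these means the effective background can't be resolved statically.
  if (hasBackgroundImage || isTransparent || isTranslucent) {
    return {
      verdict: "needs-review",
      reason: "background not statically resolvable",
    };
  }
  return contrastRatio >= threshold
    ? { verdict: "pass" }
    : { verdict: "fail", reason: `ratio ${contrastRatio} is below ${threshold}` };
}
```

The "needs-review" bucket keeps the report honest: humans see a short list of genuinely ambiguous cases instead of a wall of unreliable failures.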

Over-Automation: Teams try to automate everything. But a meaningful share of accessibility issues require human judgment. Over-automating creates tool fatigue. Stick to the obvious stuff. Let humans handle nuance.

Missing Context: A heading hierarchy violation might be intentional. A missing alt text might be on a decorative image. Your validator needs to understand context or you get false positives. Use semantic signals (aria-hidden, role="presentation") to identify intentional exceptions.
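
One possible shape for that exception check. This is a simplified sketch; a production version should also walk ancestors, since `aria-hidden` on a parent hides the whole subtree:

```javascript
// Heuristic sketch: is this element intentionally exempt from checks?
// Uses the semantic signals mentioned above. A production version should
// also walk ancestors, since aria-hidden on a parent hides the subtree.
function isIntentionallyDecorative(el) {
  if (el.getAttribute("aria-hidden") === "true") return true;
  const role = el.getAttribute("role");
  if (role === "presentation" || role === "none") return true;
  // An explicitly empty alt is the standard way to mark a decorative image.
  if (el.tagName === "IMG" && el.getAttribute("alt") === "") return true;
  return false;
}
```

Running this before each validator cuts the false-positive rate on decorative content without suppressing real findings.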

Keyboard Trap Detection: Detecting all keyboard traps is impossible. You can catch obvious ones (modals with no focusable elements) but not subtle ones (elements that move under keyboard focus). Catch what you can, document limitations.
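
A sketch of the "obvious ones" heuristic. The selector list is a common approximation of focusable elements, not an exhaustive one:

```javascript
// Obvious-trap heuristic sketch: a dialog with no focusable descendants
// strands keyboard users with nowhere to Tab to. Subtler traps (focus moved
// by script) still need manual testing. The selector list is a common
// approximation, not exhaustive.
const FOCUSABLE =
  'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])';

function findEmptyDialogTraps(root) {
  const traps = [];
  for (const dialog of root.querySelectorAll('[role="dialog"], dialog')) {
    if (dialog.querySelectorAll(FOCUSABLE).length === 0) {
      traps.push(dialog);
    }
  }
  return traps;
}
```

Documenting exactly which trap classes you detect (and which you don't) is part of the skill's contract.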

Integrating Accessibility Testing Into Your Development Workflow

The real power of automation is when it becomes invisible—part of your normal development process rather than something you remember to do separately. This means integrating accessibility audits into three key places: your local development loop, your code review process, and your CI/CD pipeline.

Locally, developers can run audits on their components during development. As they're building, they can immediately see accessibility issues and fix them before committing. This fast feedback loop means accessibility problems never make it into code review.

In code review, accessibility audit results become part of the PR review. If a PR introduces accessibility violations, the bot comments with remediation suggestions. Reviewers can see at a glance whether new code meets your accessibility standards. If it doesn't, the review explicitly calls it out. This creates social pressure—everyone sees if you shipped inaccessible code. That drives behavior change.

In your CI/CD pipeline, audits run automatically on every build. Builds fail if there are critical accessibility violations. This creates a hard gate: you literally cannot deploy inaccessible code. This is the strongest signal that accessibility matters.

Real-World Scenario: Accessibility in Your CI/CD

Imagine you add this skill to your GitHub Actions:

yaml
name: Accessibility Audit
on:
  pull_request:
    paths:
      - "src/**/*.tsx"
      - "src/**/*.ts"
 
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install dependencies
        run: npm ci
      - name: Build
        run: npm run build
      - name: Run accessibility audit
        run: npm run audit:a11y
      - name: Comment results
        if: failure()
        uses: actions/github-script@v6
        with:
          script: |
            const results = require('./audit-results.json');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## Accessibility Audit\n\n${results.summary.errors} errors, ${results.summary.warnings} warnings`
            });

Every PR gets accessibility checked. Developers see results in seconds. Issues get fixed before merge. No surprises in production.

Under the Hood: How Automated Contrast Checking Actually Works

Let's dive deeper into one validator to understand the complexity. Color contrast is deceptively tricky. Here's why:

  1. Color parsing: CSS supports many color syntaxes (hex, rgb()/rgba(), hsl(), currentColor) plus roughly 140 named colors. You need to parse all of them.
  2. Computed styles: Element color might come from CSS cascade, inline styles, or JavaScript mutations. You need computed values, not declared values.
  3. Background images: Element might have a background image. What's the contrast against that? Static analysis can't reliably compute contrast against an arbitrary image.
  4. Opacity: Colors with opacity change contrast. rgba(255, 0, 0, 0.5) over white is different from pure red.
  5. Display state: Element might be hidden, positioned off-screen, or display:none. Skip those.
  6. Decorative elements: aria-hidden or role="presentation" means element shouldn't be checked.

The validator handles most of this, but it's still heuristic. It makes educated guesses. Sometimes those guesses are wrong. That's why human review is essential—not all failures are real failures.
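
To make points 1 and 4 concrete, here's the WCAG 2.1 contrast math itself, with simple alpha compositing over a known opaque background. The formulas follow WCAG's definitions; the helper names are illustrative:

```javascript
// WCAG 2.1 contrast math (points 1 and 4 above). Colors are [r, g, b]
// arrays in 0-255; compositeOver flattens an rgba() color over a known
// opaque background. Helper names are illustrative.
function compositeOver(fg, alpha, bg) {
  return fg.map((channel, i) => channel * alpha + bg[i] * (1 - alpha));
}

// Relative luminance per WCAG 2.1: sRGB linearization, then weighted sum.
function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map((c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05); ranges from 1 to 21.
function contrastRatio(a, b) {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort(
    (x, y) => y - x,
  );
  return (hi + 0.05) / (lo + 0.05);
}
```

Pure black on white yields the maximum 21:1 ratio. An rgba() color must be composited over its background before the ratio is computed, which is exactly why a half-transparent red over white contrasts less against white than pure red does.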

Advanced: Extending the Skill with Custom Rules

After the basic validators, you can add domain-specific rules:

javascript
// Custom accessibility rules for your organization
const customRules = {
  requireFooterLinks: {
    check: (container) => {
      const footer = container.querySelector("footer");
      const links = footer?.querySelectorAll("a").length || 0;
      return links >= 5; // Your policy: footer needs 5+ links
    },
    message: "Footer should contain navigation links",
  },
 
  requireSkipLink: {
    check: (container) => {
      return !!container.querySelector('a[href="#main"]');
    },
    message: "Page should have skip-to-main-content link",
  },
 
  requireLanguageDeclaration: {
    check: (container) => {
      return !!document.documentElement.lang;
    },
    message: "HTML element must have lang attribute",
  },
};

Your organization can define what accessibility means for your context. The base validators are universal. Custom rules are yours.
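
The rules above need a small harness to run them. One possible sketch; the finding shape is illustrative and should match whatever your orchestrator aggregates:

```javascript
// Minimal harness sketch for custom rules shaped like the object above:
// each rule's check(container) returns a boolean; failures (and crashing
// rules) become findings. The finding shape is illustrative.
function runCustomRules(rules, container) {
  const findings = [];
  for (const [name, rule] of Object.entries(rules)) {
    let passed;
    try {
      passed = rule.check(container);
    } catch (error) {
      // A crashing rule should surface as a finding, not kill the audit.
      findings.push({
        rule: name,
        level: "error",
        message: `Rule threw: ${error.message}`,
      });
      continue;
    }
    if (!passed) {
      findings.push({ rule: name, level: "warning", message: rule.message });
    }
  }
  return findings;
}
```

Feeding these findings into the same report as the core validators keeps custom policy and WCAG checks in one score.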

Troubleshooting: When Audits Report False Positives

If your audit consistently flags things that aren't actually problems, debug:

  1. Review the elements: Are they actually inaccessible? Or is your validator wrong?
  2. Check manual testing: Can real assistive technology users navigate it?
  3. Review WCAG criteria: Is this really a violation, or is it ambiguous?
  4. Adjust thresholds: Are you enforcing a stricter contrast ratio than WCAG requires? Align with the AA minimums: 4.5:1 for normal text, 3:1 for large text.
  5. Whitelist exceptions: Some patterns are intentional. Document why and suppress the warning.

Good validators improve over time as you train them on your codebase.

Different Accessibility Requirements for Different Contexts

Accessibility isn't one-size-fits-all. Different types of applications have different accessibility demands:

High-Traffic Public Websites — Consumer-facing sites need maximum accessibility. You don't know your users. Some will be colorblind. Some will use screen readers. Some will use keyboard only. Some will use voice control. Your application needs to work for all of them. The business case is clear: accessibility = market reach. Every inaccessible feature loses customers. Level AA is the minimum; Level AAA is competitive advantage.

Internal Business Applications — These serve a specific user base. But that user base includes people with disabilities—employees with visual impairments, motor disabilities, cognitive disabilities. You have legal obligations (ADA, company policies) and ethical obligations. Plus, accessibility features often help everyone. Keyboard shortcuts help power users. Clear labels help people in noisy environments. Logical structure helps everyone navigate faster.

E-Commerce and Financial Sites — These are frequent targets of accessibility lawsuits, and the cost of defending one often exceeds the cost of fixing the underlying issues. Beyond legal risk, e-commerce accessibility directly impacts revenue. If a blind user can't navigate your checkout, they'll shop elsewhere.

Medical and Educational Content — These often warrant Level AAA compliance. Healthcare decisions depend on understanding information. Educational content needs accessibility so students with disabilities can learn. The stakes are highest here. Inaccessibility literally prevents people from learning or making informed medical decisions.

Real-time Communication (Chat, Video) — Standard accessibility guidelines don't fully cover real-time communication. These products need automated captions for video, transcripts for chat, and real-time text alternatives. This is frontier work where best practices aren't yet standardized.

The skill we're building works for most of these contexts. You can adjust validators and pass/fail thresholds based on your context.

Team Adoption: Getting Your Team to Care About Accessibility

The hardest part isn't building the skill. It's getting developers to care. Here's what works:

  1. Make it part of the pipeline: Audit runs automatically. Failures block deployment. It becomes a requirement, not optional.
  2. Show the impact: "25% of your potential users can't use this feature." Numbers matter.
  3. Make fixes easy: Provide remediation code in the audit report. Show exactly what to change.
  4. Celebrate wins: Track accessibility score over time. Show improvement.
  5. Tell stories: Share feedback from actual users with disabilities. Human impact beats abstract principles.

When accessibility becomes part of your culture, developers naturally prioritize it. The tool makes it easy. The culture makes it matter.

Real User Impact: Accessibility Through Their Eyes

To understand why automated accessibility testing matters, imagine using the web with disabilities:

Using a Screen Reader: You can't see the interface. You navigate by hearing. Every visual element needs an accessible alternative. "Click the button" doesn't work—there's no button to see. But "Click the 'Save' button" does work. If buttons don't have labels, you're stuck. If headings don't form a logical hierarchy, you get lost navigating the page structure. If links all say "click here," you have no context about where you're going.

A site with accessibility issues is literally unusable for screen reader users. They can't see the interface and can't figure out what to do. The automated validators we built catch the structural problems that make sites unusable.

Using Keyboard Only: You can't use a mouse. Maybe you have a motor disability. Maybe you're a power user. You navigate using Tab key to move between elements and Enter to activate them. If focus isn't visible, you don't know which element is active. If tab order is illogical, you bounce around randomly instead of left-to-right, top-to-bottom. If there are keyboard traps (elements you can't tab out of), you're stuck.

With Colorblindness: You can't distinguish red from green. A chart that uses red for loss and green for profit is completely unclear to you. An error that's only indicated by red text is invisible to you. Instructions that say "click the red button" are unhelpful when you can't see the color.

With Low Vision: You use magnification. Most of the screen is zoomed in so you can see it. But magnification breaks layouts designed for 1920x1080. Text wraps weirdly. Buttons disappear off-screen. Controls are too small to click. If color contrast is poor, even magnification doesn't help—light gray on white is still light gray on white, just bigger.

With Cognitive Disabilities: You process information differently. Complex language confuses you. Unexpected interactions surprise you. Inconsistent layouts disorient you. Accessibility features help: clear language, predictable layouts, explicit instructions. Automated validators catch the most obvious problems.

These aren't edge cases. Roughly 1 in 4 adults has some form of disability. That's your market. When your site is inaccessible, you're excluding them. The automated validators in this skill catch the obvious problems that hurt these users most.

Beyond Automation: Testing With Real Users

Here's the critical truth: automated testing is necessary but not sufficient. Automation catches structural problems, but it misses experiential problems. A screen reader can technically use a site that's technically valid but confusing. Keyboard navigation can reach all elements but in illogical order. Color contrast can meet WCAG but still be hard to read for some users.

This is why real user testing with people who have disabilities is essential. Automated validators are your first gate. Real user testing is your second gate.

For large organizations, this means:

  1. Hire accessibility consultants with disabilities to test your products
  2. Pay them fairly for their expertise
  3. Listen to their feedback and prioritize their recommendations
  4. Iterate based on their input before shipping to customers

For smaller organizations that can't afford consultants:

  1. Use testing services like UserTesting or TryMyUI that can recruit disabled testers
  2. Participate in accessibility communities and ask for feedback
  3. Build accessibility into job descriptions to help ensure your teams include people with disabilities
  4. Dogfood your products with accessibility tools enabled

The automated skill we built catches the bulk of technical violations. Real testing with disabled users catches the rest: the experiential problems that automated tools can't see. Together, they create truly accessible products.

Scaling Accessibility Testing as Your Product Grows

As your product grows, accessibility becomes increasingly complex. You have more pages, more components, more interactions. Manual accessibility testing doesn't scale. Automated validators become essential.

Early stage (1-10 people): Basic validation. You can manually test everything. But start building the skill now so it's ready as you grow.

Growth stage (10-100 people): Systematic validation. Automated checks run on every PR. Manual testing becomes targeted. You audit high-impact pages quarterly.

Mature stage (100+ people): Comprehensive validation. Automated checks are comprehensive. Manual testing is ongoing. You maintain accessibility champions across teams. Accessibility is part of product culture.

The skill we built scales naturally. As you add more code, the validators check more code. The cost scales linearly: validation time grows with the amount of code, not exponentially.

Creating an Accessibility Testing Strategy

Accessibility testing needs strategy. Here's a framework:

Layer 1: Automated Checks (happens on every commit)

  • Color contrast validation
  • Alt text presence checking
  • ARIA attributes validation
  • Heading hierarchy analysis
  • Keyboard navigation testing
  • Form labels verification

These are fast, deterministic, and catch obvious problems. Run them in CI. Fail builds if critical issues are found.

Layer 2: Focused Manual Testing (happens on major features)

  • Real assistive technology testing (screen readers, switch control)
  • Keyboard navigation flows
  • Form error handling
  • Focus management in modals
  • Edge cases specific to your product

This catches experiential problems. Hire disabled testers to do this.

Layer 3: Quarterly Comprehensive Audits (happens four times a year)

  • Full product accessibility review
  • Cross-browser testing
  • Edge cases and corner scenarios
  • Regression testing

This ensures systemic accessibility. Use professional accessibility consultants.

Layer 4: Continuous User Feedback (ongoing)

  • Encourage users to report accessibility issues
  • Create a process to respond and fix
  • Learn from real users about real problems

This provides constant feedback about what matters.

Combining these layers creates comprehensive accessibility coverage that scales with your product.

Governance and Compliance: Making Accessibility an Organizational Commitment

Accessibility can't be an afterthought. It needs governance—formal structures that ensure accessibility is treated as a first-class concern, not a nice-to-have that gets cut when deadlines loom.

Establish Accessibility Standards — Create a document that defines what accessibility means for your organization. Is it WCAG 2.1 Level AA? Level AAA? Some custom combination? Document it. Make it the source of truth. Use automated validation to enforce it.

Build Accessibility Into the Design Process — Accessibility isn't just for QA. It needs to start in design. Designers should understand accessible color choices, readable fonts, good contrast ratios. Design tools should help: suggest accessible palettes, warn about contrast problems, make accessible design the default.

Require Accessibility Testing in QA — QA should test with accessibility in mind. Use screen readers. Test keyboard navigation. Check color contrast. Use automated validators. Make accessibility QA's responsibility.

Integrate Into the Development Workflow — Developers need to be empowered to build accessible code. Provide tools, libraries, and examples. Flag inaccessible code in CI. Make accessible design the path of least resistance.

Track Compliance Metrics — Measure accessibility. What percentage of your codebase passes WCAG AA validation? What's your accessibility score for each product? Track trends. Use metrics to drive improvement.

Legal and Regulatory Compliance — Understand your legal obligations. In the US, WCAG 2.1 Level AA compliance is increasingly expected (and sometimes required). In the EU, EN 301 549 is the standard. If you're in regulated industries (healthcare, education, finance), compliance is often mandatory. Work with legal to understand requirements. Use automation to demonstrate compliance.

Accessibility Champions Program — Identify people in your organization interested in accessibility. Give them training and time to build expertise. Use them as advisors for product decisions. Accessibility champions multiply your capacity and build culture.

With governance, accessibility stops being something that happens if there's time and becomes something that always happens.

The Future: AI-Assisted Accessibility Testing

This skill is the foundation. Future iterations could:

  • Predict user impact: "This keyboard trap affects 15% of your users"
  • Generate fixes automatically: "Apply this CSS to fix 8 contrast issues"
  • Learn from feedback: "You approved this pattern before; I'll use it as precedent"
  • Test with real assistive tech: Integrate NVDA, JAWS, VoiceOver testing
  • Cross-feature consistency: "Your error message pattern differs from component library"

The barrier to accessibility is shrinking. With good tools, even small teams can ship truly accessible products.



Accessibility is not a feature. It's a requirement. Make it automatic.
