Our 7-Step Playbook that Boosted Course Catalog Accessibility by 34%

What if you could dramatically improve accessibility in weeks instead of months, without a risky rebuild?

We did exactly that for an Ivy League institution, and the playbook we developed was remarkably practical and effective.

For higher ed websites, accessibility often makes the difference between someone completing a goal (finding a course, comparing options, registering) and someone getting stuck and quitting. Behind every accessibility metric is a person trying to change their life through education. When a website blocks their path to registration, sure, it’s a compliance issue, but it’s also a mission failure.

When this particular university first approached us to rapidly improve the accessibility of their online course catalog, we knew this project would require more than automated fixes. We needed a strategy that would:

  • Maximize meaningful impact within a very tight timeframe

  • Make targeted improvements, avoiding a risky rebuild

  • Balance code compliance with human user experience

In this case study we’ll share our 7-step playbook: a repeatable process you can adapt for your own accessibility initiatives.

Challenge

At the start of the project, the website’s Siteimprove accessibility score was 63.7%, which is well below the industry benchmark for higher ed. The client knew they needed dramatic change, but not just in numbers. Our job was to make the experience straightforward, so people using assistive technology could complete tasks without trial-and-error.

As a course catalog, this site required users to go beyond passive reading: people had to find, compare, and register for courses. Even after simplifying filter types and selection states, the structure of the course data itself still presented significant complexity. For example:

  • A single course often had multiple offerings running concurrently or in staggered succession (with different terms, registration deadlines, and instructors).

  • Courses could belong to different schools or programs, each with their own rules and context.

  • Registration links could route people out to different external systems depending on the course.

Working toward an 8-week deadline, we needed to make major accessibility gains without a redesign or rebuild. Just as important, we wanted to go beyond what automated scans could catch and use real human feedback to make sure key tasks were genuinely clear and achievable, not just technically compliant. That’s what drove the approach you’ll see in the playbook below.

Playbook

Focused prioritization was the key to combining automated evaluation with human testing efficiently. The core idea was to fix the obvious, high-volume issues first, so human test participants could spend their time on the things tools can’t measure. Here are the steps we took:

1) Run initial scans

To establish a baseline and find the highest-volume failure patterns across templates, we started with automated evaluation tools, including the following (a short scripted example appears after the list):

  • Siteimprove: An automated accessibility scanner that crawls pages and flags common issues with Web Content Accessibility Guidelines (WCAG) at scale, so you can quickly find repeat problems across templates and establish a measurable baseline.

  • Lighthouse (Chrome DevTools): A built-in browser audit tool that checks accessibility (plus performance and best practices) on a single page, making it great for quick spot-checks and catching obvious front-end issues while you iterate.
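To make that baseline step concrete, here’s a minimal sketch of what a scripted scan could look like using Lighthouse’s Node API. The catalog URLs are placeholders and this isn’t the exact tooling from the project (Siteimprove runs as a hosted service); it simply illustrates scanning a handful of representative templates and collecting their failing audits.

```ts
// baseline-scan.ts: illustrative sketch of a scripted accessibility baseline
// using Lighthouse's Node API. The URLs below are placeholders, not the
// client's real catalog pages.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const templateUrls = [
  'https://catalog.example.edu/',           // catalog home (hypothetical)
  'https://catalog.example.edu/search',     // keyword search (hypothetical)
  'https://catalog.example.edu/course/101', // course detail (hypothetical)
];

async function main(): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    for (const url of templateUrls) {
      // Run only the accessibility category to keep each pass fast.
      const result = await lighthouse(url, {
        port: chrome.port,
        onlyCategories: ['accessibility'],
        output: 'json',
      });
      if (!result) continue;
      const score = (result.lhr.categories.accessibility.score ?? 0) * 100;
      // Audits that scored below 1 are the repeat offenders worth triaging.
      const failing = Object.values(result.lhr.audits)
        .filter((audit) => audit.score !== null && audit.score < 1)
        .map((audit) => audit.id);
      console.log(`${url}: accessibility ${score.toFixed(0)}/100`);
      console.log(`  failing audits: ${failing.join(', ') || 'none'}`);
    }
  } finally {
    await chrome.kill();
  }
}

main().catch(console.error);
```

Re-running the same small set of template URLs each time is what makes the numbers comparable as fixes land.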

Our team also performed keyboard-only navigation testing with common screen readers, which helped us identify obvious issues before engaging external test participants (who bring deeper expertise and familiarity with common accessibility barriers):

  • Native screen readers: Screen readers built into operating systems (like VoiceOver on macOS/iOS and TalkBack on Android), letting you confirm the experience across tools many people actually rely on and catch platform-specific differences in behavior.

  • JAWS: A widely used Windows screen reader that reads and navigates pages through keyboard and spoken output, helping you uncover real interaction barriers (focus order, labels, states, announcements) that automated tools often miss.

2) Prioritize for impact

With a Siteimprove score of only 63.7%, automated scans returned a very long list of potential adjustments. Given limited time and budget, we had to prioritize the highest-impact fixes first, considering factors such as the following (a simple scoring sketch appears after this list):

  • Template-level issues: Problems that repeat across many pages, yet stem from a single block of referenced code.

  • Severity of barrier: Keyboard traps, missing form labels, or complete blockers that prevent filtering, searching or registration.

  • Frequency of use: Issues affecting high-traffic pages or frequently-used features.

  • Technical dependencies: Issues that unlock other work or affect multiple downstream features.
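To show how factors like these can be combined, here’s one simple way to turn them into a rough priority score. It’s an illustrative sketch rather than the rubric we actually used; the fields and weights are assumptions you would tune against your own backlog and analytics.

```ts
// triage.ts: illustrative sketch of scoring scan findings for triage.
// The fields and weights are assumptions, not the actual project rubric.
type Severity = 'blocker' | 'major' | 'minor';

interface Finding {
  id: string;
  severity: Severity;
  templateLevel: boolean;    // repeats across pages from one shared template
  pagesAffected: number;     // how many pages show the issue
  monthlyPageviews: number;  // traffic on the affected pages
  unblocksOtherWork: boolean;
}

const severityWeight: Record<Severity, number> = { blocker: 5, major: 3, minor: 1 };

// Higher score = fix sooner. Log scales keep huge pageview counts from
// drowning out severity and template reach.
function priorityScore(f: Finding): number {
  return (
    severityWeight[f.severity] * 10 +
    (f.templateLevel ? 15 : 0) +
    Math.log10(1 + f.pagesAffected) * 5 +
    Math.log10(1 + f.monthlyPageviews) * 3 +
    (f.unblocksOtherWork ? 10 : 0)
  );
}

// Hypothetical findings, sorted so the biggest wins surface first.
const backlog: Finding[] = [
  { id: 'missing-form-label', severity: 'blocker', templateLevel: true,
    pagesAffected: 400, monthlyPageviews: 120_000, unblocksOtherWork: false },
  { id: 'decorative-img-alt', severity: 'minor', templateLevel: false,
    pagesAffected: 12, monthlyPageviews: 900, unblocksOtherWork: false },
];

backlog
  .sort((a, b) => priorityScore(b) - priorityScore(a))
  .forEach((f) => console.log(f.id, priorityScore(f).toFixed(1)));
```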

3) Fix as much as possible

Before bringing in human test participants, we fixed nearly all issues that automated tools could reliably detect. Any issues we couldn’t fix were clearly documented so the moderator and participants could acknowledge them and move on without wasting time. This allowed us to:

  • Maximize the value of human feedback. Focus moderated sessions on uncovering what automated tools miss, and avoid re-reporting problems scanners already caught.

  • Prevent fixation and use time wisely. Avoid anything that could mentally block participants from exploring other parts of the interface where they might have subtler, more valuable insights.

  • Present a clean baseline. Show participants a version that works well enough to focus on experience rather than breakage, revealing problems that only emerge during real task completion.

4) Define test scenarios

We asked participants to perform a focused set of tasks reflecting what the catalog is designed to support, what the client prioritizes for their users, and their broader institutional goals:

  • Registering for a course of interest.

  • Browsing courses by categories or tags.

  • Using the keyword search field to find a specific course.

  • Locating all available free courses.

  • Determining the associated school for a course.

  • Filtering courses by difficulty level.

  • Discovering course instructors.

Preparing specific, goal-oriented tasks for test participants is far more valuable than unstructured exploration. To get the most out of participants’ time and attention, it's crucial to focus on the areas that will deliver the greatest benefit and return on investment.

5) Recruit test participants

We needed quick, diverse feedback. So we partnered with a recruiting agency to find a variety of participants with visual, motor, and cognitive disabilities. This breadth ensured we could identify barriers across different interaction patterns and assistive technologies, rather than optimizing for just one type of user experience.

We limited the group to 3 participants to keep the amount of feedback manageable and minimize redundant issue reports. This focused approach also preserved resources for potential follow-up testing of specific features or specific disability types.

To get the most value from human testing, other factors we recommend considering include:

  • Match disability types to features. Include participants with visual disabilities for image-heavy interfaces, motor disabilities for drag-and-drop interactions, and cognitive disabilities for complex workflows.

  • Consider experience level and impairment context. Test with both experts and newcomers to assistive technology. Recency of impairment affects proficiency, and temporary disabilities influence motivation to learn tools deeply.

  • Stick to relevant demographics. Age and digital literacy affect how people navigate barriers, so recruit test participants who match your real audience.

  • Consider intersectionality. Include participants who have multiple disabilities to reveal compounding friction that single-impairment testing might miss.

6) Prioritize again

Similar to step 2, we selected the most impactful findings to address first, based on severity of the issue, how many participants mentioned it, timeline and risk constraints, etc.

But this prioritization effort felt substantially different from before, because we were working with qualitative verbal descriptions of people’s experiences. We had to consider:

  • Thematic feedback: When multiple people struggle with the same thing, they may describe it differently. We had to identify themes, for example, "confusion about filter states," which affected all three participants across different tasks.

  • Translation work: Participants don't report "missing ARIA labels"; they say, "I didn't know if the filter was on." We had to interpret what they meant and map it to actionable development tasks (one example is sketched after this list).

  • Experiential factors: When prioritizing qualitative feedback, we had to ask less technical, more practical questions: Does it block task completion? Does it create anxiety or confusion? Does it force workarounds?

  • Friction vs. feasibility: We weighed task blockers against implementation risk and timeline constraints; anything deemed out of scope was clearly documented for future improvement.
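As a concrete example of that translation work: "I didn't know if the filter was on" usually points to a toggle whose state is never exposed to assistive technology. Below is a minimal, framework-free sketch of the kind of fix that comment can map to; the selectors and class names are hypothetical, not the catalog's actual markup.

```ts
// filter-toggle.ts: sketch of exposing filter state to assistive technology
// with aria-pressed. The selector and class name below are hypothetical.
function wireFilterToggle(button: HTMLButtonElement): void {
  // Start from whatever visual state the template rendered.
  const initiallyOn = button.classList.contains('is-active'); // hypothetical class
  button.setAttribute('aria-pressed', String(initiallyOn));

  button.addEventListener('click', () => {
    const nowOn = button.getAttribute('aria-pressed') !== 'true';
    button.setAttribute('aria-pressed', String(nowOn));
    button.classList.toggle('is-active', nowOn);
    // Screen readers announce the pressed state along with the button's name,
    // so "Free courses" becomes "Free courses, pressed" when the filter is on.
  });
}

// Hypothetical hook-up for every filter button in the catalog's filter bar.
document
  .querySelectorAll<HTMLButtonElement>('.catalog-filters button[data-filter]')
  .forEach(wireFilterToggle);
```

Pairing the attribute with a visible state change gives sighted keyboard users the same confirmation that screen reader users hear.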

7) Define a maintenance cycle

Accessibility is not a one-and-done project. As content and components change, dependencies update, technologies evolve, and user expectations shift, the quality of the user experience can gradually drift.

That’s why we incorporated accessibility into this client’s monthly site maintenance, beyond this initial cleanup project. The recurring cadence helps us catch regressions early and keep improvements from fading as the site evolves.

Here’s an achievable cycle we often recommend, which most higher ed teams can sustain:

  • Monthly (lightweight): run automated scans on key templates and fix high-severity regressions (a simple regression gate is sketched after this list).

  • Quarterly (hands-on): spot-check the most important tasks with assistive tech (search, filtering, course detail, registration, and high-traffic forms).

  • Once or twice yearly (program-level): review results against targets, refresh internal standards, and update documentation.
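For the monthly automated pass, one lightweight option is a small script in the maintenance pipeline that compares the latest scan against the last accepted score and fails loudly on regressions. This is a sketch under the assumption that your scanner can export a JSON summary; the file names and report shape here are hypothetical.

```ts
// a11y-regression-gate.ts: sketch of a monthly regression check that compares
// the latest accessibility score to a stored baseline. The file paths and the
// report shape ({ score: number }) are assumptions for illustration.
import { existsSync, readFileSync, writeFileSync } from 'node:fs';

const BASELINE_PATH = 'a11y-baseline.json'; // hypothetical
const LATEST_PATH = 'a11y-latest.json';     // hypothetical, produced by your scanner
const TOLERANCE = 1;                        // allow a 1-point dip before failing

const latest: { score: number } = JSON.parse(readFileSync(LATEST_PATH, 'utf8'));
const baseline: { score: number } = existsSync(BASELINE_PATH)
  ? JSON.parse(readFileSync(BASELINE_PATH, 'utf8'))
  : { score: 0 };

if (latest.score + TOLERANCE < baseline.score) {
  console.error(`Accessibility regression: ${latest.score} vs baseline ${baseline.score}`);
  process.exit(1); // fail the maintenance job so someone investigates this month
}

// No regression: move the baseline forward so improvements are locked in.
writeFileSync(
  BASELINE_PATH,
  JSON.stringify({ score: Math.max(latest.score, baseline.score) })
);
console.log(`Accessibility score ${latest.score} (baseline ${baseline.score}): OK`);
```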

To support continuous improvement, we also provided this client with:

  • Training documentation for content editors (headings, links, alt text, tables, PDFs, and media captions)

  • Input regarding standards and targets (what “good” looks like for your institution, plus what you measure)

  • A tracking and reporting system (a place to log issues, prioritize them, and show progress over time)

Our goal was to help our client integrate accessibility into their workflow permanently, turning it from a project into a practice.

Findings

The automated tools helped us resolve common best-practice problems and adhere to defined standards. But real human users told us much more. We learned:

  • Whether users felt confident that they weren’t missing important information.

  • When users had trouble identifying or remembering whether filters were active or selected.

  • How distracting the graphics or other visual elements were.

  • Whether the interface felt familiar and consistent with user expectations.

  • How much stress the interface was causing for each user.

  • Whether the path to completion was obvious or intuitive.

  • When the user was unsure if an action they took was successful.

All of these insights involve mental and emotional engagement with the system on a level that's inherently invisible to automated tools. The feedback was invaluable; in fact, from just three participants we received not only critiques but also feature suggestions and ideas for the future.

That's why the iterative prioritization and ongoing cyclical aspects of our approach were so important. We couldn't do everything in the first phase, but we wanted to capture the full value of our efforts and these insights, so we could act on them in the future as resources and priorities allowed.

Results

Within just 8 weeks, we improved the catalog’s Siteimprove accessibility score from 63.7% to 98.2%, transforming it from barely accessible to exemplary.

Human test participants reported a noticeably smoother experience, expressing greater confidence navigating the catalog, reduced confusion about filter states, and less friction completing core tasks like registration and search.

Our partnership allowed this client to turn accessibility into a managed program, continuously improving the system while preventing the slow drift back to broken experiences.

The takeaway is that you can make major accessibility gains on a tight timeline, and you can do it without a total redesign, as long as you combine the right tools with a healthy dose of human feedback and a disciplined prioritization process. The secret is knowing exactly where to focus the effort.

What’s Next?

We’ve seen what’s possible when accessibility work is treated as a focused, time-bound effort with a clear method behind it (as opposed to a vague aspiration). Our 7-step playbook above can also serve as a practical starting point for your own team, whether you’re aiming for rapid website improvement or incorporating it into a larger Digital Accessibility Program.

And you’ll get the best results by following through on that last step: establishing a maintenance cycle at the end of an initial project. That keeps accessibility real after the initial push, incorporating regular scans, hands-on checks, and ongoing refinements that keep pace as content and features evolve.

This project gave us the opportunity to pave a clearer path for prospective students to access high quality education. In doing so, we’re empowering individuals to pursue knowledge and realize their full potential.

When the stakes are that high, it can be hard to know where to start and what “enough” looks like, especially when you’re balancing limited resources and high stakeholder expectations. If you’re carrying the weight of “getting accessibility right” while also keeping your web strategy moving, let us know. We’re always here to help!

Tell us about your accessibility goals »
