CRA E-Services Worksheet Redesign
2023 - 2024

Summary
Project Goals
As the Government of Canada advances its digital services, occasional issues are inevitable. When taxpayers face these challenges, they turn to CRA call centre agents, who must troubleshoot and often escalate problems to the appropriate teams.
The existing process for handling these issues was outdated and inefficient, making it difficult for agents to respond effectively. To improve this, my team and I reviewed the current workflow and redesigned it using modern tools and a more intuitive user experience. The result is a streamlined process that helps agents resolve taxpayer concerns faster and ensures digital issues are properly addressed.
My Role
For this project, I served as both the UX team lead and design lead, working alongside two UX specialists. I developed the UX project plan, coordinated closely with the client and program teams, and facilitated collaborative workshops with both the UX and client teams. I also managed the day-to-day aspects of the UX work, ensuring everything stayed on track and aligned with project goals.
Client
Internal team responsible for the administration and delivery of digital services within the CRA.
Services
- UX design
- UX research
- Flow mapping
- User flows
- Benchmark testing
- Moderated usability testing
Project background
This project aimed to improve how CRA call centre agents handle calls about digital service issues. Previously, agents used a segmented Word form that was overly complex, lacked clear guidance, and made it difficult to know which sections to complete—leading to frequent errors and omissions.
These issues caused delays, as forms were often sent back for clarification, creating unnecessary back-and-forth between agents and resource officers that cost both groups time and further delayed resolution.
To solve this, my team and I conducted a full usability study and redesigned the form into a digital version. We automated many of the manual checks previously handled by resource officers, which improved data accuracy, reduced call times, boosted agent confidence, and significantly cut down on follow-up requests from program areas.
Outcomes
This transformation significantly improved the accuracy of the information collected, reduced the time agents spent on calls, and boosted their confidence in completing the process. Most importantly, it dramatically reduced, if not completely eliminated, the need for follow-ups from support teams to clarify form submissions.
85%
Increase in agents' average success rate
6m 56s
Reduction in the average time agents spent completing the form
20%
Increase in agent confidence while completing the form
31%
Increase in agents' perceived ease of use while interacting with the form
The Design Process - Part 1
Product Research
Discovery
To fully understand how the form was intended to be used—along with the services and issue types it was meant to capture—my team and I conducted a thorough analysis of the form, its related procedures, and the supporting resources available to agents.
It quickly became clear that over time, several surface-level fixes had been introduced to patch deeper usability issues. The form lacked built-in guidance, validation, and clarity, making it difficult for agents to complete accurately. To compensate, a supplementary web page was created to provide instructions. However, this solution was far from ideal—it required agents to switch between the form and the web page, and the content itself was dense, hard to navigate, and lacked the clarity needed to support quick decision-making in real time.
Based on our own research, anecdotal feedback from the program areas, and input from members of the client team with prior experience using the form as call centre agents, it became clear we had several key areas to focus on during testing:
- How long it took an agent to complete the form
- Whether agents could complete the required sections based on the caller's issue
- How accurate and complete the submitted forms were
The Design Process - Part 2
Benchmark Testing
Running the test
After gaining a solid understanding of the types of issues the form is used to report and the procedures agents follow when handling related calls, we moved into the testing phase.
For the benchmark test, we worked with 10 Individual Tax Enquiries (ITE) contact centre agents. Each participant was given four tasks to complete using the Word document version of the form, designed to simulate real-world scenarios. Each session lasted one hour and was supported by a UX team member who moderated and took notes, while I observed and provided direction and feedback as needed.
The sessions focused on measuring four key metrics (a sketch of how such results could be aggregated follows this list):
- Time on task
- Task success
- Agent confidence
- Perceived ease of use
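To make the measurement concrete, below is a minimal sketch of how per-task results like these could be rolled up into the averages reported in the findings. The session shape and field names are hypothetical, not the actual study instrument.

```typescript
// Hypothetical record of one task attempt from a moderated session.
interface TaskResult {
  timeOnTaskSeconds: number; // time from task start to form submission
  success: boolean;          // form completed accurately and fully
  confidence: number;        // post-task self-rating, 0-100
  easeOfUse: number;         // post-task self-rating, 0-100
}

const average = (values: number[]): number =>
  values.reduce((sum, v) => sum + v, 0) / values.length;

// Roll individual attempts up into the four study metrics.
function summarize(results: TaskResult[]) {
  return {
    avgTimeOnTaskSeconds: average(results.map(r => r.timeOnTaskSeconds)),
    successRate: results.filter(r => r.success).length / results.length,
    avgConfidence: average(results.map(r => r.confidence)),
    avgEaseOfUse: average(results.map(r => r.easeOfUse)),
  };
}
```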
Analysis
Following the sessions, we transcribed the feedback and analyzed the results using affinity diagrams to identify trends, patterns, and key areas for improvement.
Screenshots of the benchmark test analysis and affinity mapping
Key Findings
The analysis from the benchmark test revealed several challenges agents faced when using the form, reflected in the following key metrics:
8%
Average success rate for agents accurately and correctly completing a form
18m 19s
Average time on task to complete a form
57%
Average agent confidence rating for how accurately they completed the form
58%
Average rating for agents' perceived ease of use while interacting with the form
Additional behavioural insights included:
- Agents struggled to identify the correct program area for a caller's issue.
- Many agents completed the wrong section, or multiple sections, due to a lack of clear direction.
- Agents often avoided using the associated reference webpage, missing critical guidance as a result.
- Fields requiring multiple inputs frequently led to incomplete or missing information.
- The "caller issue summary" section was often overlooked, especially when agents had already entered detailed information elsewhere.
These insights directly informed the redesign strategy, which focused on:
- Streamlining the process to guide agents clearly and efficiently.
- Embedding necessary information within the form to eliminate reliance on external resources.
- Implementing form validation to prevent missing required inputs.
- Ensuring accurate routing of submissions to the appropriate program area (see the sketch after this list).
The Design Process - Part 3
The Redesign
Understanding Willow
Before redesigning the form, my team and I needed a solid understanding of the environment it would operate in. For this project, the client chose to use an internal automated ticketing system called Willow. After several demos and discussions with the Willow development team, we gained a clear understanding of the system’s capabilities and limitations, allowing us to design the form effectively within those constraints.
To begin the redesign, we mapped out user flows for each specific program the form needed to support. One of the key challenges agents faced was identifying which part of the form to complete based on the caller’s issue. Since Willow couldn’t support branching logic, we decided to separate each program flow into its own dedicated form. This approach helped simplify the experience for agents and ensured the form remained functional within Willow’s technical limitations.
Initial user flows and form logic
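Because Willow is an internal system whose configuration isn't shown here, the sketch below is only a rough illustration of that decision: one dedicated form per program flow, with a simple lookup to point agents toward the right one. The form IDs, titles, and keywords are hypothetical.

```typescript
// Hypothetical catalogue: one dedicated form per program flow, since the
// ticketing system could not branch within a single form.
interface FormDefinition {
  formId: string;
  title: string;
  issueKeywords: string[]; // terms an agent might recognize in the caller's description
}

const formCatalogue: FormDefinition[] = [
  { formId: "eservices-access", title: "Login and access issues", issueKeywords: ["locked", "credential", "sign-in"] },
  { formId: "eservices-display", title: "Incorrect information displayed", issueKeywords: ["balance", "missing slip", "display"] },
  { formId: "eservices-error", title: "Error messages", issueKeywords: ["error code", "err"] },
];

// Suggest the dedicated form whose keywords best match the caller's issue.
function suggestForm(issueDescription: string): FormDefinition | undefined {
  const text = issueDescription.toLowerCase();
  return formCatalogue.find(def =>
    def.issueKeywords.some(keyword => text.includes(keyword))
  );
}
```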
Designing the flows
To meet the goals identified after the benchmark test, the user flows needed to be completely reworked. We began by mapping out each section of the original form in Miro, breaking down broad inputs into more specific ones. These updated flows included form validation, contextual descriptions, and examples where appropriate. We also built in logic to minimize duplicated inputs by accounting for the branching nature of certain workflows.
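A minimal sketch of what such a reworked flow might describe, assuming a simple field model: each broad input broken into specific fields with embedded guidance, plus follow-up fields that only appear (and are only validated) when relevant. The structure and field names are illustrative, not the actual Willow form.

```typescript
// Illustrative description of one step in a reworked flow.
interface Field {
  id: string;
  label: string;
  required: boolean;
  helpText?: string;                                      // guidance embedded in the form itself
  showIf?: (answers: Record<string, string>) => boolean;  // reveal only when relevant
}

const loginIssueFlow: Field[] = [
  { id: "errorSeen", label: "Did the caller see an error message?", required: true },
  {
    id: "errorCode",
    label: "Error code shown",
    required: true,
    helpText: "Displayed near the top of the page the caller was on.",
    showIf: answers => answers["errorSeen"] === "yes",
  },
  { id: "issueSummary", label: "Caller issue summary", required: true },
];

// Only fields visible for the current answers are validated, so agents
// are never blocked by sections that do not apply to the call.
function missingFields(flow: Field[], answers: Record<string, string>): string[] {
  return flow
    .filter(f => (f.showIf ? f.showIf(answers) : true))
    .filter(f => f.required && !answers[f.id]?.trim())
    .map(f => f.label);
}
```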
Throughout this process, I worked closely with the UX team, the Willow development team, and both client and program stakeholders. We held several brainstorming sessions to iterate on the flows and ensure all proposed changes were technically feasible. Subject matter experts were also involved to validate the content, ensure accuracy, and align with CRA’s writing standards.
After multiple iterations, the flows were finalized and the Willow form reached a stable state—ready for testing with CRA Contact Centre agents.
Screenshots of the redesigned form
The Design Process - Part 4
Testing the Redesign
Running the test
As with the initial benchmark test, 10 ITE Contact Centre agents participated in moderated usability sessions. Each agent was given four tasks designed to reflect real-world scenarios, with each session lasting one hour. Participants interacted with the online form in the development environment.
This round of testing showed a dramatic improvement across all key performance indicators compared to the initial benchmark—highlighting the effectiveness of the design changes and refinements made since the first round.
Results
Following the second round of testing, we conducted a comprehensive analysis of the data and feedback collected during the sessions. Using affinity mapping in Miro, my team and I transcribed, grouped, and synthesized key insights and trends.
The results showed significant improvements across all four KPIs:
93%
Average success rate, an 85-point increase from the benchmark test
11m 12s
Average time on task, a 6m 56s reduction from the benchmark test
77.4%
Average agent confidence rating that the form was completed correctly, up from 57%
89%
Average rating for agents' perceived ease of use, up from 58%
Key insights
Notable insights from testing were:
- Agents using the form for the first time took several minutes longer, but on subsequent attempts their completion time dropped by 5–7 minutes, demonstrating a clear learning curve that was overcome without training.
- Default Willow features caused confusion, as some didn't align with the agents' tasks. At the time, we were unable to remove or modify these elements.
- Hover-state buttons meant to reveal helpful information were often misinterpreted as clickable, causing them to deactivate before displaying the content.
- Dynamic fields requiring multiple inputs overwhelmed some users, leading to skipped information and unrevealed follow-up fields.
- Inconsistent or unclear wording in several fields left agents uncertain about how to proceed.
Despite these challenges, the overall results were very positive. The findings highlighted areas for further improvement, and thanks to our strong collaboration with the Willow development team and their commitment to enhancing the system, we had a clear path forward to address the issues identified.
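As one example of that path forward (an illustration only, not the change the team ultimately shipped, which depended on Willow's capabilities), a hover-only help control can be reworked into an explicit click toggle so the guidance cannot disappear before the agent has read it:

```typescript
// Illustrative only: swap a hover-reveal help button for a click toggle,
// so the help content stays visible until the agent dismisses it.
function attachHelpToggle(button: HTMLButtonElement, panel: HTMLElement): void {
  panel.hidden = true;
  button.setAttribute("aria-expanded", "false");
  button.addEventListener("click", () => {
    const opening = panel.hidden;
    panel.hidden = !opening;
    button.setAttribute("aria-expanded", String(opening));
  });
}
```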
The Design Process - Part 5
Final Review and Launch
Changes to the Form
Ahead of the form’s launch, we made several revisions based on insights from the second round of testing. These updates addressed key pain points and included a redesigned flow, improved logic and content, and enhanced technical capabilities—thanks to close collaboration with the development team.
Some initial workarounds were necessary, such as adding a disclaimer to clarify that certain default Willow form features didn’t need to be completed for submission.
After a few iterations, coordination with the client, program, and dev teams, and completing the French translation, the form was finalized and ready for launch.
Revised form
Launching the Form
The form launched successfully to contact centre agents, thanks in large part to the strong communication plan implemented by the client team. All related reference guide pages were correctly linked, targeted agent groups received training, and leadership was informed well in advance, ensuring a smooth rollout with no confusion.
Post-launch feedback echoed the positive results we saw during testing. The form proved to be a much-needed improvement, enhancing the day-to-day experience for agents handling these requests and reducing the workload for resource officers who previously had to follow up on incomplete or incorrect submissions.
Conclusion
Outcome
This project was a resounding success. Seeing the success rate jump from 8% to 93% after just one round of testing and iteration was a first for me—and a powerful reminder of the impact of good UX. When users are given the right information at the right time, they’re empowered to make informed decisions and succeed in their work.
While this project was just one small part of the agency’s broader workload, it marked a meaningful step toward a more efficient and effective workforce—ultimately improving service to Canadians. It also reinforced that while UX may not directly drive day-to-day operations, it can be a powerful force multiplier in how an organization functions.
Lessons learned
This was one of my first projects involving form design. While it wasn’t drastically different from other UX work in terms of user flows and core principles, it presented a unique challenge: translating written procedures into intuitive, user-friendly inputs. It required some creative thinking to ensure the form felt seamless and logical for end users.
I also gained valuable experience in project management—particularly in aligning with other teams and ensuring the right people were involved at the right time. That said, we did face some challenges due to miscommunication from the client and program teams, which led to duplicated efforts. Specifically, our UX team wasn’t informed about how the program areas had organized themselves to receive the new forms. While this was a minor setback, it highlighted the importance of clear communication and transparency across teams.
Finally, this project was a good reminder that designs often need to evolve in response to technical constraints and business requirements. While it’s important to aim for the ideal solution, having flexible, adaptable designs is just as critical to meeting organizational needs.












