Go1 · 2023
Simplifying Reporting for Efficiency
TL;DR
Reporting was one of Go1's most persistent customer frustrations, costing approximately 2000 support hours annually and consistently ranking in the top five complaints. As Senior Product Designer on the L&D Manager experience, I led the research and design effort to understand why — and to redesign reporting from the ground up. Usability testing validated the direction strongly, and when the initiative was later deprioritised, customer churn told the story we already knew. Reporting is now being built, with the original research and design as its foundation.
Certain information has been omitted or obfuscated in this case study. The opinions presented here are my own and do not represent those of my current or past employers.

Reporting had long been a source of frustration for L&D managers using the Go1 platform. Although the reporting toolkit was technically capable, built to mimic database functionality, it was designed without adequately considering the technical expertise of its primary users. The result was a powerful system that most people couldn't use confidently. During my first year and a half at Go1, as a Senior Product Designer focused on the L&D Manager experience, fixing this became one of the most significant problems I worked on.
Discovery
The business case was clear before the research even began. Approximately 2000 support hours were spent annually on reporting queries — the equivalent of a full-time staff member. The reporting suite was also built on legacy technology, making even minor updates a significant engineering effort. But understanding the why behind the frustration required talking to the people experiencing it.
Customer Interviews
We spoke to L&D managers from a diverse range of organisations — medium to enterprise-sized, spanning education, retail, logistics, insurance, engineering, law, agriculture, and healthcare. Rather than asking them to describe their problems in the abstract, we asked them to show us: how they set up reports, what they did with the results, and where things broke down.

The most important insight was that reporting wasn't a single job. L&D managers report for fundamentally different reasons: ensuring mandatory training is being completed, understanding engagement with upskilling content, and demonstrating return on investment. Each of these has different data needs, different audiences, and different definitions of success — yet the existing tool treated them all the same way.
We mapped our findings into an opportunity solution tree to ensure the opportunities we identified were grounded in research and aligned to business goals.

Given the scope, we narrowed our focus to mandatory training reporting, the most common requirement across the businesses we spoke to. The key friction points were: not knowing where to go to run a report, not being able to make sense of enrolment data, and not being able to trust what they were seeing.

Solution Exploration
Wireframes
The central design challenge was making complex, relational data legible to a non-technical user. The approach we explored centred on hierarchy — giving users an aggregate view of resources, learners, and groups that they could drill into progressively, rather than presenting raw data and expecting them to interpret it.


Filtering was another significant design decision. The existing filters exposed database identifiers and complex parent-child relationships that meant nothing to an L&D manager. The new filters needed to speak the user's language, and critically, needed to allow segmentation between assigned and self-directed learning, a fundamental distinction in how L&D teams track progress.

We tested the concepts internally with our L&D team and customer success managers to validate the direction before moving to high fidelity.
High-Fidelity Designs
Most components already existed in the Go1 design system, which meant progressing to high fidelity was relatively straightforward. The main effort went into the data visualisations and ensuring the prototype used realistic mock data — small inaccuracies tend to distract expert participants during usability testing and undermine confidence in the results.




Usability Testing
We tested with five customers from organisations of varying sizes and industries. The focus was on whether the designs were discoverable, intuitive, and genuinely valuable to L&D managers in practice.
The results were strong. All participants were able to interpret the aggregate data and drill down to find the detail they needed without assistance. The hierarchy approach — which had been the central bet of the design — proved to be the right call.
“I think this [resource report] is awesome. It's easy to consume no matter what role you play in the organisation.”
— Participant 2
“I get tasked to check what new employees complete on the first and second day of employment so this [learner report] will be amazing.”
— Participant 4
“The group report will help us work more effectively with how we've set up the platform.”
— Participant 1
Testing also surfaced useful directions for future iterations: proactive compliance alerts before a learner becomes overdue, clearer filter labels, and reporting views tailored for managers and team leads rather than just L&D admins.
One participant went further than the test itself. Shortly after the session, they reached out directly:
“Essentially this is exactly what our schools are desperate for. It's perfect! … I thought I would reach out and see if there is anything you can do to throw some weight behind this to move it to the top of the list to be built.”
— Participant 3
Outcome
Despite the validation, an untimely change in product strategy led to the initiative being deprioritised. In the following half-year, customer churn rose to levels not seen in years — with reporting cited as one of the top reasons for leaving. The decision was reversed.
The feature is now being built, informed by the original research and design. Another designer has carried the work forward, extending it with the ability to build reports using AI. My work provided the foundation; theirs took it further. That feels like the right outcome.
What this project reinforced is that good research has a longer shelf life than the project it was created for. The opportunity solution tree, the customer interviews, the usability testing — none of that became irrelevant when the initiative was paused. It waited. When the business was ready to act, the direction was already there.