Furst Person
Product Design

Improvements to the creation and administration of aptitude assessments that reduced goal-completion times by 60%



Overview

Simplification of the creation, flow, and editing of aptitude tests and candidate management

FurstPerson (now Harver) is an industry-leading B2B hiring solution that helps organizations optimize their talent decisions. Backed by I/O psychology and cognitive science, FurstPerson delivers a suite of automated solutions that enable organizations to engage, hire, and develop the right talent quickly and in a fundamentally less biased way. FurstPerson provides trusted, data-driven, and adaptable offerings, including assessments, video interviews, scheduling, and reference checking.

To comply with my non-disclosure agreement, I have omitted and obfuscated confidential information in this case study.
All information in this case study is my own and does not necessarily reflect the views of FurstPerson or any other related company.

The system that needed improvement

The business goals

Improve, simplify, and leverage.

The client's system had been developed in India; it was very cost-effective to build but proved expensive to operate when performing the tasks it was meant to carry out. Broadly speaking, the client wanted to:

  1. Improve the system: Candidate testing processes, navigation, information organization, and look & feel.
  2. Simplify: All the processes involved in assessment creation and evaluation.
  3. Leverage: Capitalize on the information obtained from tests to make the right decisions regarding each candidacy.

The client had a general idea of the problem; it was now necessary to validate whether users' specific issues aligned with it or whether there were additional problems.

Users & audience

Candidates and different levels of administrators

Candidates took the assessments generated by the system, and that was their only interaction with it. Administrators, for their part, were divided into levels according to the tasks they were assigned.

  1. Administrators: There were four levels of administrators with different needs, issues, roles, and permissions. The application administrator could create, edit, and delete almost everything within the system. The client administrator had restricted permissions for creating tests and test components. The candidate manager's permissions covered only candidate-related matters, not tests. The project manager had influence over all aspects, overseeing the general supervision of tests, hiring processes, and candidates (a simplified sketch of this permission model follows the list below).
  2. Candidates: The candidates were passive participants, as they simply completed the created assessments and attended the scheduled interviews.
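
Since the exact permission matrix is confidential, the sketch below is a hypothetical TypeScript model of how four such levels can be separated. The role names, permission names, and ROLE_PERMISSIONS mapping are assumptions for illustration, not FurstPerson's actual implementation.

```typescript
// Hypothetical model of the four administrator levels.
// Role names, permission names, and the mapping are illustrative,
// not FurstPerson's actual data model.
type Role =
  | "applicationAdmin"
  | "clientAdmin"
  | "candidateManager"
  | "projectManager";

type Permission =
  | "test:create" | "test:edit" | "test:delete"
  | "component:create" | "component:edit"
  | "candidate:view" | "candidate:manage"
  | "process:supervise";

const ROLE_PERMISSIONS: Record<Role, Permission[]> = {
  // Application admins can create, edit, and delete almost everything.
  applicationAdmin: [
    "test:create", "test:edit", "test:delete",
    "component:create", "component:edit",
    "candidate:view", "candidate:manage", "process:supervise",
  ],
  // Client admins have restricted test/component creation rights.
  clientAdmin: ["test:create", "component:create", "candidate:view"],
  // Candidate managers handle only candidate-related matters.
  candidateManager: ["candidate:view", "candidate:manage"],
  // Project managers oversee tests, hiring processes, and candidates.
  projectManager: [
    "test:edit", "candidate:view", "candidate:manage", "process:supervise",
  ],
};

function can(role: Role, permission: Permission): boolean {
  return ROLE_PERMISSIONS[role].includes(permission);
}

// Example: a candidate manager cannot delete tests.
console.log(can("candidateManager", "test:delete")); // false
```

Expressing the levels as data rather than scattered conditionals keeps the matrix easy to audit when roles or permissions change.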

Team, role & responsibilities

1 Product Designer / 1 Senior UX Designer / 2 POs / 4 BAs / 2 TLs & 16+ Devs

My most frequent interaction was with the POs, BAs, TLs, and stakeholders on the client side. There was also a lot of teamwork with the Devs when choosing libraries and frameworks or defining technological constraints on the implementation of certain ideas. I worked closely with a senior designer with whom I shared the workload of user research, the analysis and creation of flows and screens, and updates to the design system.

  1. My role: My main task was to evaluate the system to contribute ideas that would help meet the business objectives and needs, conduct user research to understand the users' needs and pain points, design task and user flows to validate them with stakeholders and the team, update the design system, and validate its proper implementation in the final development. I also divided tasks and defined delivery deadlines for the design area, and supervised and approved deliverables.
  2. My responsibilities:
    • Lead the strategic redesign of vital features and sections of the SaaS.
    • Participate in product design from conception to production.
    • Build wireframes, prototypes and Hi-Fi designs for key features of the platform.
    • Partner closely with Engineering and Product teams to find efficient, user-friendly, and practical solutions to design challenges.
    • Be hands-on at every stage of the design process—from discovery through detailed design, execution, and iteration—creating user flows, wireframes, pixel perfect designs, and prototypes along the way to bring ideas to life.
    • Meet and align with cross-functional teams, including brainstorming with stakeholders and developers on product problems, solutions, requirements, and priorities.
  3. Team work methodology: The meetings with the client typically involved the CTO, the PO, and the BA from the client's side; on our company's side there were the POs, one or both TLs, one or several BAs, and myself as Product Designer. I worked with design sprints and continuous, frequent reviews: validating flows, screens, and functionality with stakeholders and the designer, and checking technical feasibility with developers to find the best way to implement solutions.

The challenge

Simplify the parameterization and segmentation of actions dependent on multiple variables

Make the test creation process more user-friendly, faster, and more efficient. Tests were assembled from separate items (in multiple languages) that were added as needed for each vacancy, together with workflows and scoring scales. Additionally, find ways to present information more usefully for administrators' decision-making.

Factors to consider

Test creation workflows

It was crucial that the way tests were created be very intuitive for administrators, allowing a clear and holistic view of the questions and components used in each test.

Creation, editing, and testing of assessments

The creation of the components that made up each test, their incorporation into the tests, and the review, approval, and quality-control process involving all parties were vital to achieving an efficient and valuable system.

Scale creation

Scale creation is a fundamental part of a testing system, since scales can limit or amplify how effectively candidates are approved within a process. It was necessary to find efficient ways to create scales without making them complex or difficult to learn.

Different languages

The system could be used by companies with a presence in different countries that nevertheless wanted to use the same tests for their various positions. This meant maintaining the same test in several languages, which complicated editing: changes had to be made in different languages by different people, requiring effective change control to verify the validity of each change within the company's chain of command.

Many branches and sub-branches in different states, regions, and time zones

The goal was a system that allowed creating a single test that could be used and reused by the main branches as well as by the various related subsidiaries or companies within the same group, even across different regions and time zones.

Process

Step 1: Define user personas

With the information provided by stakeholders, I was able to create the different groups of user personas for the system: 4 levels of administrators with different permissions and the candidate.

The system did not have quantitative evaluation tools like Google Analytics or Hotjar, so only qualitative analyses were conducted. However, I recommended implementing both so that quantitative data would be available in the future with the new system.

Step 2: Send the survey to the users

For this project, a standard SUS survey was sent to administrators to estimate their satisfaction with the system's usability and to establish a baseline to compare against when the survey was repeated later.
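
SUS itself is a fixed, public instrument: ten statements rated 1 to 5, scored onto a 0-100 scale. As a reference for the scores reported later in this study, here is the standard scoring rule (a generic sketch, not project code):

```typescript
// Standard SUS scoring (Brooke, 1996): ten items rated 1-5.
// Odd-numbered items contribute (response - 1), even-numbered
// items contribute (5 - response); the sum is multiplied by 2.5
// to produce a 0-100 score.
function susScore(responses: number[]): number {
  if (responses.length !== 10) {
    throw new Error("SUS requires exactly 10 responses");
  }
  const sum = responses.reduce(
    (acc, r, i) => acc + (i % 2 === 0 ? r - 1 : 5 - r),
    0,
  );
  return sum * 2.5;
}

// Example: a fairly positive response set scores 80.
console.log(susScore([4, 2, 4, 1, 5, 2, 4, 2, 4, 2])); // 80
```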

Step 3: User interviews & contextual research

User interviews were conducted, with questions designed to assess the system's NPS before the changes. During the same interviews, users were also asked to share their screens and walk through the task flows that caused them the most trouble. A total of 32 interviews were conducted, roughly 10 with each of the 3 administrator groups. No candidates were interviewed, since this stage focused on administration flows, test visualization, and hiring processes.
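
The exact NPS scoring used in these interviews is not documented here, so the helper below shows the classic computation (promoters rated 9-10, detractors 0-6) purely as a generic reference:

```typescript
// Classic Net Promoter Score: share of promoters (ratings 9-10)
// minus share of detractors (ratings 0-6), on a -100..100 scale.
function npsScore(ratings: number[]): number {
  const promoters = ratings.filter((r) => r >= 9).length;
  const detractors = ratings.filter((r) => r <= 6).length;
  return Math.round(((promoters - detractors) / ratings.length) * 100);
}

// Example: 4 promoters and 2 detractors out of 8 responses -> 25.
console.log(npsScore([9, 10, 7, 8, 6, 9, 3, 10])); // 25
```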

Step 4: Analysis & utilization of the results

All information gathered from the interviews was converted into high-level documents for management and stakeholders, which I presented and explained in meetings, and into detailed documents for TLs, BAs, and Devs, used to identify potential complications or bottlenecks and to prioritize the tasks needed to address the most important issues.

Step 5: Task Flows & Lo-Fi prototype

Once the findings from the interviews were reviewed, I began working on task and user flows to understand all the steps involved in the crucial processes for each user persona. With a clearer vision of the flows, I started designing low-fidelity screens and creating a prototype to validate possible solutions to user pain points with the team.

Along with the UX Designer, I designed the most important flows, addressing the problems we had identified in the interviews, and created wireframes that were reviewed in several meetings with stakeholders, refined, and improved with the help of the entire team.

Step 6: User testing & iterations

Due to delivery deadlines, testing with many users was not possible, so tests were performed with one or two users of each type. In these live sessions, users tried the low-fidelity prototype and provided feedback; I then reviewed the results with stakeholders and proposed iterations on the affected screens.

Step 7: UI design proposals

I worked on two UI design proposals that were presented to and evaluated by stakeholders and some users. After two rounds of iteration, the final design was defined.

Step 8: Design system update, handoff of high-fidelity design resources to developers & implementation support

The UX designer and I updated the design system based on the approved and corrected design proposal, validating the feasibility of reusing existing components, creating new components as needed, and looking for prefabricated solutions to implement in the system, especially for workflow creation, which was technically complex.

Step 9: New survey and user testing

Six months after the changes were implemented, the SUS survey and the NPS interviews were repeated; both indicators increased by between 30% and 50%.

After the changes to the new system, which simplified test creation and result evaluation, replaced components for key task execution, and improved candidate data visualization, the SUS score rose from 66 to 94 and the NPS from 4 to 8.

Outcomes & results

Simplification of test creation, flow, editing, and testing.

The process of creating test flows was tedious and error-prone. For this reason, I proposed creating tests as a step-by-step process: components are added first; the flow is then built by incorporating these components and the different branches that depend on the score obtained at each step; finally, the test can be previewed, edited, sent for review, or put into production.
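
Structurally, such a flow is a small directed graph in which each step routes to the next based on score ranges. The sketch below is a hypothetical TypeScript model of that idea; the type and field names (TestFlow, TestStep, ScoreBranch) are assumptions, not the production schema.

```typescript
// Hypothetical model of a step-by-step test flow where the next
// component shown depends on the score obtained at each step.
interface ScoreBranch {
  minScore: number;           // inclusive lower bound of the range
  maxScore: number;           // inclusive upper bound of the range
  nextStepId: string | null;  // null ends the flow at this branch
}

interface TestStep {
  id: string;
  componentId: string;        // reusable question block shown here
  branches: ScoreBranch[];    // routing rules evaluated after scoring
}

interface TestFlow {
  id: string;
  name: string;
  status: "draft" | "inReview" | "production";
  entryStepId: string;
  steps: Record<string, TestStep>;
}

// Resolve the next step from the score achieved at the current step.
function nextStep(
  flow: TestFlow,
  stepId: string,
  score: number,
): TestStep | null {
  const step = flow.steps[stepId];
  const branch = step.branches.find(
    (b) => score >= b.minScore && score <= b.maxScore,
  );
  if (!branch || branch.nextStepId === null) return null;
  return flow.steps[branch.nextStepId];
}
```

Keeping a status field on the flow itself mirrors the preview, review, and production stages described above.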

Significant improvement in candidate and test data visualization

The data collected from tests and candidate results were placed in a customizable dashboard, with widgets and information tailored to each profile within the system: candidate managers, application admins, client admins, and project managers each had dashboards with their own specific information.
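
One straightforward way to express per-profile dashboards is a widget whitelist keyed by role, as in this purely illustrative configuration (the widget names are invented for the example):

```typescript
// Hypothetical per-profile dashboard configuration.
type Profile =
  | "candidateManager"
  | "applicationAdmin"
  | "clientAdmin"
  | "projectManager";

type WidgetId =
  | "candidatePipeline"
  | "testCompletionRates"
  | "scoreDistribution"
  | "systemHealth"
  | "hiringFunnel";

const DASHBOARD_WIDGETS: Record<Profile, WidgetId[]> = {
  candidateManager: ["candidatePipeline", "scoreDistribution"],
  applicationAdmin: ["systemHealth", "testCompletionRates", "scoreDistribution"],
  clientAdmin: ["testCompletionRates", "scoreDistribution"],
  projectManager: ["hiringFunnel", "candidatePipeline", "testCompletionRates"],
};
```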

Improvement in component creation and multilingual administration

Creating the components that were the building blocks of the tests was a significant challenge, not only because of the complexity of creating them but also because of the difficulty of ensuring that changes made in one language, or by one person, did not negatively affect results in other languages. Previously, each country or region managed its own components, but one of the business objectives was to centralize tests so that the same test could be used in different countries. To address this, I proposed a flow for creating and validating components that prevented or resolved conflicts between different languages within the same component, which led to an adjusted component approval process coordinated among the language leads within each company.
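
The conflict this flow guards against is one translation being edited while its siblings silently fall out of date. Below is a minimal sketch of one way to model that invariant, assuming per-component revision tracking; all names are hypothetical.

```typescript
// Hypothetical model of a multilingual component: each translation
// records which base revision it was approved against, and the
// component is publishable only when every language has approved
// the current base revision.
type ApprovalState = "draft" | "pendingReview" | "approved";

interface Translation {
  language: string;      // e.g. "en-US", "es-MX"
  revision: number;      // base revision this translation matches
  state: ApprovalState;
  approvedBy?: string;   // language lead who signed off
}

interface MultilingualComponent {
  id: string;
  baseRevision: number;  // bumped on any content change, anywhere
  translations: Translation[];
}

// An edit in any language bumps baseRevision, forcing every
// translation to be re-approved before the component can ship.
function isPublishable(c: MultilingualComponent): boolean {
  return c.translations.every(
    (t) => t.state === "approved" && t.revision === c.baseRevision,
  );
}
```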

Improvement in usability and tool perception by users

The increase in the SUS score and the NPS was the result of many meetings and analyses by the entire team, and it confirmed that the changes were headed in the right direction.

What would I have liked to do differently?

Having quantitative data would have been useful, especially for identifying navigation patterns within the system and for detecting issues that were not mentioned in the interviews but could have been captured if Hotjar or another session-recording tool had been implemented.
