I designed and launched a new onboarding flow for Research.gov that allows users to self-register as volunteer peer reviewers for proposal evaluation within the National Science Foundation (NSF). Prior to this effort, no public-facing "Volunteer to Review" path existed, and reviewers had to be manually recruited by NSF program staff. This initiative demanded balancing user experience against strict federal data rules and backend identity-matching constraints.
Within two weeks of release, this new self-service experience onboarded 431 reviewers through this flow alone (out of roughly 1,000 total onboarded in the same period, the remainder coming through our existing invited-to-review flow), helping NSF dramatically expand its reviewer pool and lower the administrative burden on NSF staff.
Link to live solution: www.research.gov
As the primary UX resource on an agile product team, I was responsible for the following:
The National Science Foundation (NSF) relies on expert peer reviewers to evaluate federal research grant proposals. Traditionally, reviewer participation has been driven by program staff outreach, where individuals are selected and invited to join review panels based on their expertise. While this approach ensures quality and relevance, it also places a continual outreach responsibility on NSF staff and limits opportunities for interested researchers to proactively express interest.
The Volunteer to Review initiative aimed to close a structural policy gap: there was no way for qualified researchers to proactively express interest in participating. By designing a self-service onboarding flow for Research.gov, I helped introduce a scalable mechanism for self-nomination, enabling researchers to “raise their hand” to serve as reviewers.
This project was as much a service design challenge with organizational impact as a UX problem. The onboarding flow supported NSF’s broader goals to:
This project required designing not just a usable interface, but a trustworthy and policy-compliant service entry point into one of the federal government’s most critical scientific review pipelines. A core challenge was enabling self-nomination without compromising data security, identity-matching accuracy, or user trust. Many past reviewers had incomplete or mismatched records across legacy systems, and some lacked updated contact information.
Design decisions had to navigate strict privacy requirements (particularly around Social Security numbers and historical email addresses), conform to legacy backend matching logic (based on phone number, name, and Social Security number, to prevent duplicate user IDs from being created), and work within a rigid Research.gov information architecture that could not be altered without agency approval.
I collaborated with the development team to create a decision tree that captured all possible reviewer identity states (e.g., new, past, unsure, changed name). This logic formed the basis of the conditional screens that guided users to the correct onboarding path; a simplified sketch of the routing logic appears below.
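To make the branching concrete, here is a minimal, hypothetical sketch of that routing logic in TypeScript. The field names, states, and paths are illustrative assumptions on my part; the actual Research.gov decision tree and matching rules are internal to NSF.

```typescript
// Hypothetical reviewer-identity routing; names and states are illustrative.
type Answers = {
  reviewedBefore: "yes" | "no" | "unsure"; // self-reported review history
  nameChanged: boolean;                    // e.g., name changed since last review
  hasSsn: boolean;                         // supports an "I don't have an SSN" option
};

type OnboardingPath =
  | "new-reviewer-form"      // no legacy record to match against
  | "verify-phone-name-ssn"  // standard legacy identity-matching path
  | "verify-without-ssn"     // alternate path for users without an SSN
  | "assisted-verification"; // ambiguous cases (unsure / changed name)

function route(a: Answers): OnboardingPath {
  if (a.reviewedBefore === "no") return "new-reviewer-form";
  // Past or unsure reviewers are matched against legacy records
  // (phone, name, SSN) so duplicate user IDs are never created.
  if (a.reviewedBefore === "unsure" || a.nameChanged) return "assisted-verification";
  return a.hasSsn ? "verify-phone-name-ssn" : "verify-without-ssn";
}
```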
After validating the verification flow, I drafted wireframes of the 'Volunteer to Review' pre-check form. The design included distinct cases for when the system could or couldn't verify a user's information, email one-time-password (OTP) verification, and confirmation email templates. (A generic sketch of the OTP pattern follows the wireframes below.)
Axure Wireframe Folder Structure: Outlining versions of the form for each verification state.
Animated demo: One user path through the Volunteer to Review onboarding flow. This flow adapts dynamically based on user input, verification status, and data-matching outcomes.
Wireframe: Signed-out Research.gov homepage state
Wireframe: Signed-out Research.gov homepage state with captured feedback from stakeholders
Wireframe: Verification state for users who need to verify their identity with phone and Social Security numbers (note: all scenarios were fictional and participants were given fake data; no real phone or Social Security numbers were used in this study).
Wireframe: One Time Password (OTP) modal
Wireframe: Email verification template
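For readers unfamiliar with the email OTP step shown above, here is a generic sketch of the pattern, assuming a six-digit code with a ten-minute expiry. This is not NSF's implementation; every name and parameter here is hypothetical.

```typescript
import { randomInt, timingSafeEqual } from "node:crypto";

// Generic email OTP pattern: issue a short-lived code, verify it once.
const TTL_MS = 10 * 60 * 1000; // assumed ten-minute validity window
const pending = new Map<string, { code: string; expires: number }>();

function issueOtp(email: string): string {
  const code = String(randomInt(0, 1_000_000)).padStart(6, "0");
  pending.set(email, { code, expires: Date.now() + TTL_MS });
  return code; // in practice, delivered via the confirmation email template
}

function verifyOtp(email: string, attempt: string): boolean {
  const entry = pending.get(email);
  if (!entry || Date.now() > entry.expires) return false;
  const ok =
    attempt.length === entry.code.length &&
    timingSafeEqual(Buffer.from(attempt), Buffer.from(entry.code));
  if (ok) pending.delete(email); // codes are single-use
  return ok;
}
```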
Once I had finalized a set of wireframes, the next step was to plan usability testing to validate them and identify areas for improvement and potential long-term enhancements for the product backlog. The goal of my usability study was to answer the following questions:
Seven participants were recruited for this usability test. I moderated each session remotely over Zoom, and participants were required to share their screens as they clicked through the wireframes. To ensure my sample accurately reflected the intended audience, I worked with my UX team and NSF to recruit principal investigators and researchers from the academic community who had never conducted review or panel activities.
My test comprised four scenarios:
While peer reviewers aren’t traditional customers, the dynamics of trust, communication, and service expectations in their relationship with the NSF closely mirror a customer-business interaction. As part of a lightweight Voice of the Customer (VoC) effort, I asked participants open-ended questions before and after usability sessions to understand their broader experiences with NSF and Research.gov.
Questions included:
These responses uncovered service-level pain points around navigation, trust, and onboarding: insights that wouldn't have emerged from task flows alone. I incorporated this VoC feedback into final leadership presentations to advocate for clearer reviewer guidance and expanded support for first-time users beyond the immediate Volunteer to Review form experience.
After each task, I asked users to rate the screens they had clicked through on a 5-point Likert scale from "Very Dissatisfied" to "Very Satisfied," and asked what changes, if any, they would make to the screens tested.
At the end of each session, I also asked users to give an overall rating of the entire set of wireframes they had tested, along with some targeted follow-up questions about their preferred methods of adding review meeting attendees.
As part of my analysis, I categorized key user behaviors and feedback using the Six Minds of UX framework, organizing observations into the cognitive dimensions of Vision, Wayfinding, Memory, Language, Decision Making, and Emotion.
Major Finding: Users found the Volunteer to Review form itself easy to navigate, but the identity verification section caused some hesitation, even in the context of fictional scenarios with fake data.
"The [questions] are straightforward, so that's good. They are to the point. I like that it has 'I'm not sure.' I like the 'I don't have an SSN' option. I want to know it's ok that I don't see my number and don't have an SSN."
-Usability Test Participant
"I think these [questons] were very clear. Everything was easy to find, so I don't see any changes I would make."
-Usability Test Participant
"I would begrudingly enter my social security number...I would want to know why NSF needs this."
-Usability Test Participant
"Transitioning to another organization, this can be tricy. I don't know how long I will have access to previous emails, and it says I need to have current access."
-Usability Test Participant
Major Finding: However, getting to the Volunteer to Review form from any of the NSF entry points was a major pain point for users.
"The Volunteer to Review and Panels information [on the landing page] was confusing...The Volunteer to Review information was lost there...Maybe make Volunteer to Review its own section...When I see reviews and meetings, it seems like I would be looking for my own panels and meetings, and not a place where I would volunteer"
-Usability Test Participant
"Most of [the landing page] I would say very satisfied...Finding where to get to volunteering to review could be easier to find"
-Usability Test Participant
"I would probably go to Google first and type in 'NSF reviewing'...I would not start on an NSF site because they can be confusing"
-Usability Test Participant
When asked to give a satisfaction rating for all wireframes, every participant responded with 'Satisfied' or higher. Task completion and satisfaction rates are listed below:
Major Finding: Higher completion and satisfaction rates for Tasks 3 and 4, compared to Tasks 1 and 2, aligned with qualitative observations: the primary usability barrier was locating the Volunteer to Review flow. Once users entered the form, they were generally able to complete their tasks successfully and with minimal friction.
Wireframe: Expanded Privacy Act Statement
Wireframe: Tested question wording
Wireframe: New question wording
This ongoing monitoring effort is designed to validate usability test findings at scale by using behavioral analytics to assess whether the homepage remains a barrier for a broader user base. The goal is to supplement qualitative insights with real usage data and inform future design or content improvements with evidence.
Key areas of focus include:
In addition to tracking scroll and click behavior, I collaborated with our development team to monitor account activity and conversion rates: specifically, how many users who visited Research.gov proceeded to sign up and volunteer. My goal was to tie usability metrics directly to business and customer success outcomes, so we could continue building solutions that deliver real value for end users and the NSF. A simplified example of the funnel calculation is sketched below.
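As an illustration of how that funnel could be computed, here is a small sketch; the event names and schema are assumptions for this example, not the actual Research.gov analytics instrumentation.

```typescript
// Hypothetical funnel math over deduplicated users per step.
type FunnelEvent = { userId: string; step: "visited" | "signed_up" | "volunteered" };

function funnelRates(events: FunnelEvent[]) {
  const usersAt = (step: FunnelEvent["step"]) =>
    new Set(events.filter((e) => e.step === step).map((e) => e.userId)).size;

  const visited = usersAt("visited");
  const signedUp = usersAt("signed_up");
  const volunteered = usersAt("volunteered");

  return {
    visitToSignup: visited ? signedUp / visited : 0,          // site visit -> sign-up
    signupToVolunteer: signedUp ? volunteered / signedUp : 0, // sign-up -> volunteer
    overallConversion: visited ? volunteered / visited : 0,
  };
}
```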
Some snapshots of our current heatmaps are included below:
Crazy Egg Heatmap Snapshot: Signed-in view of Research.gov home page
Crazy Egg Heatmap Snapshot: Volunteer to Review splash page
Crazy Egg Heatmap Snapshot: Volunteer to Review Form
This was a large-scale, high-impact government onboarding project that blended customer experience strategy, legal compliance, and user empathy. I owned the experience end to end: leading flow design, wireframing, usability test planning and moderation, and post-launch monitoring. I consider this my flagship project because it demonstrates my ability to navigate complex, multi-stakeholder design challenges with clarity, pragmatism, and rigor.
If I had more time or resources for this project, I would have: