UX research & design

Library of Congress

Sep 2022 - May 2023 (9 months)

Project Overview

The Library of Congress (LOC) is embarking on a multi-year project to replace its integrated library system (ILS). Our team is tasked with understanding the user experience that LOC patrons have with the existing ILS, specifically the discovery layer, and developing design concepts based on our findings.

The project is organized into 5 sprints, where each sprint has a different focus but goes through the same stages to produce outcomes: mapping, sketching, deciding, prototyping, and testing.


Our Team

Sara Hu - UX Research
Shireen Patel - UX Research
Tulika Mohanti - UI & Visual Design
Rajin Suchdev - UX Design
Jocelyn Sun - Interaction Design & UXR

Tasks

Solution sketching
Prototyping
Usability study design
Contextual interview
Data analysis

Final Product

Problem Statement

After our first meeting with the client, we narrowed the problem space down into a more specific and practical problem statement that served as our long-term goal for the entire project:

The long-term goal is to improve the user’s discovery experience across loc.gov, the LOC catalog, and Stacks by improving the presentation of each platform's functionality to guide the user to the correct platform for their needs.

We further divided our goal into three areas: the Library of Congress homepage, the LOC catalog, and Stacks (currently available only on the on-site computers). We prioritized the LOC catalog and Stacks because our client expressed interest in learning about the experience of these two platforms and the potential of integrating them.

Rather than following the traditional approach of creating designs for all problem areas and iterating on them through the sprints, we favored workload management and the quality of our work. Midway through each sprint, we asked our client whether they wanted us to continue working on the current design or switch to a new area of focus. Their answer guided the questions we designed for the contextual interviews taking place concurrently with the testing stage, which gave us more context to work with in preparation for the next sprint. As a result, we were better informed when making decisions in the next sprint's mapping stage.

Area 1: The LOC Catalog

Sprint 1 (9/19/2022 - 10/27/2022)

The sprint began with the mapping stage, where we aligned our visions and formed a sprint goal by voting on the questions and concerns our design team had with the current design of the LOC catalog.

sprint 1 - sprint questions


Combining our top questions with prior knowledge from our client, we brainstormed and voted on the How Might We (HMW) questions that would inform the focus of the sprint.

We also created a user flow to visualize any blocks or overlaps during the discovery (search and browse) phase on the LOC catalog.

The user flow revealed two pain points:

Our team decided to focus on the first pain point, which allowed for more design possibilities, as the second pain point mainly concerns backend logistical issues. Combining our insights from the competitive analysis and inspirations from the lightning demos, each team member produced one solution sketch that addressed our sprint questions and answered our "How Might We" questions. We then put all our sketches together and invited our stakeholders to an online critique session, where they commented on our sketches and dot-voted for their favorite ideas to be implemented in the prototype.

Our stakeholder was inspired by (the highlighted ideas are from my sketch):

To help us stitch these ideas together into one coherent application, we composed a storyboard to see exactly where they fit in.

In one week's time, we produced a minimum viable prototype for three experts to test.

sprint 1 - prototype view in gif

During our testing, we found that users need:

PERSONALIZATION

  • A personalized ranking of search results
  • A confirmation of the search
  • Guidance on how to divide the search facets

INTUITIVE FUNCTIONS

  • Prominent filters that won’t be overlooked
  • Explanations for niche actions
  • Contextual descriptions for tools

FUNCTIONAL FILTERS

  • More control and specification over filters
  • The ability to filter the initial results into facets

One thing I learned...

The biggest lesson I learned from this sprint is that all active CTA buttons and controls on the testing prototype must be obvious and intuitive with signifiers to avoid confusion, even for low-fidelity prototypes. In other words, users tend to expect everything they see to be active. So ideally, if it is not ready to be implemented, don't include it in the testing prototype.

This rather experimental approach to a persona search also got me thinking about categorization in general: how do we sort users into different groups, and are these groups inclusive, covering all use cases? We eventually decided not to include the persona search feature in our final product for the same reason. If I were to do this project again, I would bring up this question earlier in the process, and we might have developed another novel search scheme that is more flexible and better suited to the platform.


Sprint 3 (1/26/2023 - 2/23/2023)

Sprint 3 continued and built on top of the work and findings from sprint 1. The focus of the sprint was straightforward, and we were more concerned with the logistics and limited resources we had, which, in my opinion, was a good thing to keep in mind at the beginning of a sprint, because it grounded our thoughts and made sure we could deliver everything we promised.

sprint 3 - sprint questions


We then brainstormed some How Might We questions referencing the same user flow we created in sprint 1, but with more background knowledge of the users and more experience working with the design language.

sprint 3 - how might we's


Due to scheduling difficulties, our client was not able to join us in voting for the HMW questions, so we sent her the Miro board, and she cast her votes using the big dots.

At the end of the mapping stage, we had two main questions to tackle:

With these questions in mind, we each created our solutions illustrated in sketches (my sketch second from the left and boxed):

My sketches focused on two areas:

  1. Reorganizing the content on the site by removing unnecessary or redundant features (e.g., browse and keyword search, which are both similar to the search bar on top, and the advanced search, which directs to a new page), and creating more consistency across the different platforms by incorporating design ideas from Stacks (sprint 2); and
  2. Designing a new on-page search functionality that consolidates all three types of search methods currently offered by the site, along with a helper text section that suggests additional catalogs based on the user's dropdown selections.

However, during storyboarding/wireframing, we collectively felt that the design of the search functionality, especially the advanced search function, had the potential to be friendlier to general users. I rose to the challenge and took the lead in designing this feature.

Inspired by some Google search results and natural language, I wireframed an advanced search function that mimics the logic of the human thought process. In other words, I structured the input fields of the search query so that it flows and reads like a natural English sentence, making it easier to understand and hence improving users' search accuracy.
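The structure behind that wireframe can be sketched in code. The following is a minimal illustration of the idea, not our actual implementation; the field names, operators, and connectors are hypothetical stand-ins for the dropdown options in the design:

```python
# Minimal sketch of the "reads like a sentence" advanced search.
# Field names, operators, and connectors below are hypothetical examples.

def render_query(clauses, connectors):
    """Render structured search inputs as a natural English sentence.

    clauses: list of (field, operator, term) tuples, one per input row
    connectors: list of connectors ("and", "or", ...) joining consecutive rows
    """
    parts = [f'the {field} {op} "{term}"' for field, op, term in clauses]
    sentence = "Find items where " + parts[0]
    for connector, part in zip(connectors, parts[1:]):
        sentence += f" {connector} {part}"
    return sentence + "."

print(render_query(
    [("title", "contains", "jazz"), ("author", "begins with", "Davis")],
    ["and"],
))
# → Find items where the title contains "jazz" and the author begins with "Davis".
```

Each dropdown selection fills a slot in the sentence, so the query a user builds is also the query they read back.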

And this is what the feature looks like prototyped:

sprint 3 - URX findings annotated on prototype screenshot

We received a lot of constructive feedback during our testing stage, and we annotated a screenshot of our prototype with user feedback on sticky notes, which made the findings easy to understand and convenient to present to stakeholders.

Based on the feedback we received, before we deliver our final design, we will:

One thing I learned...

Aside from the testing feedback, which we would evaluate and incorporate into our prototype during the last sprint, this sprint also highlighted the importance of UX writing, and how it can become a setback in the user experience when the copy is accurate but misaligned with the user's mental model. After this sprint, I noticed myself being more aware of the language (especially the tone) used on the interfaces I use every day, and I began thinking more about the impact of language on branding and marketing communication. As a non-native English speaker, this will be a lifelong lesson for me.

I also learned more about designing for accessibility in practice. For example, colors should always be used in conjunction with other signifying elements to accommodate color-blind users, in addition to meeting WCAG contrast guidelines.

Area 2: Stacks

Sprint 2 (11/7/2022 - 12/9/2022)

We shifted our focus to Stacks for this sprint. The first challenge we faced was that, due to copyright restrictions, Stacks currently exists only as software on the on-site computers. For us to gain an adequate understanding of the platform, the sprint started with our client providing a virtual walkthrough of Stacks via Zoom.

With our understanding of the Stacks system, we formed our sprint questions using dot-voting on a collection of potential questions written on sticky notes.

sprint 2 - sprint questions


We then recreated the user flow according to the virtual walkthrough during the mapping stage, and with that, we brainstormed and voted on our How Might We (HMW) questions and incorporated them into the flow.

Moving on to the sketch stage, we each produced a series of sketches as our solutions to the HMW question, as seen below. Our stakeholder voted on her favorite ideas using dots with her initials (my sketch is boxed).

sprint 2 - sketches


My solution sketch answers the HMW question by:

We combined the strengths of different sketches together by drawing a storyboard showing a typical user interacting with our new data visualization:

sprint 2 - storyboard


And we transformed the storyboard into a mid-fidelity interactive prototype:

sprint 2 - prototype view in gif

Through testing and interviewing our testers, we found that our users could roughly be categorized into two groups: the Congressional Research Service (CRS) researchers, who are the professionals, and the casual users.

PROFESSIONALS want EFFICIENCY

  • Need to find information on a short deadline
  • Use external resources (Google Scholar, Amazon, etc.) to speed up the search process
  • Know what they are doing: they only switch search systems when required by the task or when they have exhausted the search results

CASUAL USERS want DIFFERENTIATION

  • Need to understand the difference between loc.gov, the LOC catalog, and Stacks
  • Need a clear discovery layer that guides them to the search system best suited to their needs
  • Don't want to be confused or overwhelmed by search results

Overall, our stakeholder liked our redesign of the current "Stacks at a glance" section, and our testers gave positive feedback, saying that the treemap visualization encourages exploration and "added elements of curiosity to a familiar interface".

One thing I learned...

The biggest lesson I learned from this sprint is the value of storyboarding at the end of the deciding stage, where we pieced all the voted ideas together into one application. We faced the challenge of connecting the top ideas from sketches made by different team members, and storyboarding provided the opportunity to "picture" how all these features could help at different points throughout a typical user flow. I also wonder if the same idea could be applied to conflict resolution to facilitate better teamwork: instead of arguing over which idea to develop, we could set up a hypothetical situation and see if both ideas would work together. We may find a natural counterargument against one of the ideas, or we may even create a better solution from the combination ("1+1>2").

Area 3: Loc.gov Landing Page

Sprint 4 (2/27/2023 - 4/3/2023)

In our second-to-last sprint, we turned our focus to the landing page of the Library of Congress, which not only provides information about the library and showcases the variety of services and programs it offers, but also serves as the entry point to its different library systems: the digital collections (hosted on loc.gov), the main catalog and links to additional catalogs, and Stacks (to be available remotely in the future).

At the beginning of the sprint, I suggested that we split into two groups of two after the decide stage. One group would focus on designing and prototyping, while the other group would focus on user research and gathering relevant information to guide and validate any design decisions made by the design group. I would work between the two groups to track their progress, serve as a point of contact, and offer extra help when needed.

While the new process aimed to solve the problem of not having enough user data to work with, our team was also concerned with questions such as the scope of work for the sprint, the amount of freedom we had in terms of designing, and the design task we were trying to solve.

sprint 4 - sprint questions


To help us focus on a specific goal for this sprint, we decided to differentiate users based on the accuracy of their mental models. Our ultimate objective was to ensure that the correct information was displayed and that users were guided to the appropriate search platform, even if their mental model of the site and system was not entirely accurate.

To achieve this, we first mapped out the flow of the two different types of users by our definition:

Then, based on the user flow map, we brainstormed some HMW questions that would help mitigate the problems users face when using the site.

sprint 4 - how might we's


Focusing on the most-voted HMWs, we each drew sketches illustrating our own take on the questions:

sprint 4 - sketches

sprint 4 sketches, mine is the 2nd one from left

There were two trends among our sketches:

  1. None of the sketches contained information on the content of the site; I even annotated my sketch to provide a list of potential contents for each section.
  2. Out of the four sketches for the landing page, three included a top navigation bar, which is not part of the current live site.

To properly address the questions arising from these two trends, I worked with the research-focused group to design our first round of user testing. Each session consisted of a series of tasks to evaluate the content and layout of the current sites, a card-sorting activity to inform the design of the top navigation bar, and a couple of questions based on a preliminary version of the landing page with the navigation bar.
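One common way to analyze a card sort is a co-occurrence matrix that counts how often participants placed two cards in the same group. A minimal sketch with hypothetical card names, not our actual study data:

```python
from collections import Counter
from itertools import combinations

def co_occurrence(sorts):
    """Count how often each pair of cards was grouped together.

    sorts: one card sort per participant, each a list of groups,
    where a group is a list of card names.
    """
    counts = Counter()
    for groups in sorts:
        for group in groups:
            # Sort within the group so each pair has a canonical key.
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return counts

# Hypothetical sorts from three participants over four candidate nav items.
sorts = [
    [["Catalog", "Stacks"], ["Visit", "Events"]],
    [["Catalog", "Stacks", "Visit"], ["Events"]],
    [["Catalog", "Stacks"], ["Visit"], ["Events"]],
]
counts = co_occurrence(sorts)
print(counts[("Catalog", "Stacks")])  # → 3 (all three participants agreed)
```

Pairs with high counts are strong candidates for sitting under the same navigation heading.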


sprint 4 - UXR finding result - tasks


sprint 4 - UXR finding result - card sorting

Based on the results shown in the first picture, we can conclude that the preliminary design of the navigation bar was successful because all five testers were able to complete the relevant discovery tasks using the top navigation bar, and the overall discovery experience was further improved with the inclusion of a bottom navigation footer acting as a site map.

With a site map at the bottom of the page, we had more flexibility in choosing what to display on the top navigation bar. Our idea was to place the most important and most frequently accessed information in the top bar for the convenience of all types of users. The data in the second screenshot, computed using MaxDiff analysis, helped us understand the priority of the contents based on our testers' preferences.
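For context, the simplest way to score MaxDiff (best-worst scaling) responses is a count-based estimate: for each item, the number of times it was chosen as "best" minus the number of times it was chosen as "worst", divided by the number of times it was shown. A small sketch with hypothetical choice tasks, not our actual tester data:

```python
from collections import defaultdict

def maxdiff_scores(responses):
    """Count-based MaxDiff score: (times best - times worst) / times shown.

    responses: list of (shown_items, best_item, worst_item) tuples,
    one per choice task a tester completed.
    """
    best, worst, shown = defaultdict(int), defaultdict(int), defaultdict(int)
    for items, b, w in responses:
        for item in items:
            shown[item] += 1
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Hypothetical choice tasks over candidate top-bar contents.
responses = [
    (["Search", "Events", "About", "Visit"], "Search", "About"),
    (["Search", "Visit", "About", "Events"], "Search", "Events"),
    (["Visit", "Events", "About", "Search"], "Visit", "About"),
]
scores = maxdiff_scores(responses)
print(sorted(scores, key=scores.get, reverse=True))
# → ['Search', 'Visit', 'Events', 'About']
```

Items sort from most to least preferred, giving a priority order for the navigation contents.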

Using the insights and feedback from the first round of testing, I designed both navigation systems while my teammates worked on the landing page and prepared to test the mid-fidelity prototype during the sprint's actual testing stage.

And here is the prototype for the landing page:

sprint 4 - prototype view in gif

During testing, we received overall positive feedback on the inclusion of the navigation systems but also found areas for improvement and had some questions to address in the final sprint. One participant mentioned that "I think the bottom three options on the 2nd column of the footer are all related; they should have one heading" (paraphrased). I think this comment captured some of the problems I faced when designing the bottom navigation.

Creating a complete navigation section for the main Library of Congress site was difficult due to the vast number of pages and subpages, some of which are repetitive. During a weekly meeting, we discussed this issue with our client, who explained that they typically prefer creating new pages instead of modifying existing ones for convenience. As a result of website architecture and organizational constraints, the bottom navigation includes the most popular pages, some necessary pages, and an overview of all page categories that serve as entry points to respective subpages. Each small section follows a structure of a bolded page title and one or more popular subpage titles accessible directly through a hyperlink or from the bolded page. While I plan to experiment with different section orders for future edits, I am hesitant to give some sections a common heading to maintain the established structure that works well for other pages.

One thing I learned...

Up until Sprint 3, we had been using placeholder text for ease of prototyping with limited time and resources, instead of using real content from the live site. The experience of working with the real architecture of the site was a valuable lesson in handling practical constraints, especially with existing sites and applications. I realized that working with placeholder text gave us more creative freedom; however, since we were working on an existing site, it essentially deferred practical constraints until later sprints. This is the trade-off we made for the sake of time and the structure of the capstone course. Reflecting on my design process for the navigation systems, I realized the importance of good information architecture and how it can enhance or hinder design efforts. I will take these lessons with me to my future projects.

Integration

Sprint 5 (4/7/2023 - 5/12/2023)

We used the remaining 4 weeks of the project to connect all the pieces together. Our game plan had three steps:

  1. Following our overall divide-and-conquer strategy, assign team members to the three areas (Catalog, Stacks, and the loc.gov landing page) and edit the current prototypes based on the feedback received from the testing sessions at the end of the respective sprints;
  2. Meet with two other UX experts from the Library of Congress to get feedback on the edited prototypes; in other words, push our design through another informal iteration;
  3. Incorporate the expert feedback into our design and finalize the prototype by linking the parts together.

I will focus here on the changes made to our design of the loc.gov landing page, as I took charge of editing the materials from sprint 4.

The comments we received from the sprint 4 testing stage mostly concerned the top half of the page, around the navigation bar and the search bar.

our findings

  • The navigation bar and the search bar were not differentiated enough
  • Users expect a search bar at the top right of the page for general information
  • The purpose of some sections on the page was not clear, partly because we were using placeholder images and fake text

solutions I proposed

  • I separated the navigation bar and the search bar by moving up the hero image carousel; I also redesigned the search bar to span two rows to increase its visual prominence
  • I designed a small search bar in the top right corner specifically for library information (services, programs, etc.), with a helper tooltip below for clarification
  • I incorporated actual images and text from the current site

In addition, I introduced two more changes:

After the above edits, the design was shown to the two UX experts for evaluation during our meeting with them. The key accessibility concern they raised, about the form fields on the page, was addressed by deciding that it would be handled on the development side using techniques like HTML <label> tags; we will state this as one of our design assumptions. The experts also pointed out a minor mistake, not including an option to see all programs & services under the tab "Browse All Programs & Services", and suggested including reading rooms as a frequent destination for some users.

What's the difference? See for yourself (initial design on the left, final design on the right):

sprint 4 - initial design of loc.gov
sprint 4 - final design of loc.gov

loc.gov landing page initial design (on the left), loc.gov landing page final design (on the right)

And after connecting all the parts together, we have our final prototype (linked at the top of the page)!

Final words...

Aside from participating in design-related activities, I took part in planning, editing user testing tasks, and proofreading the scripts used in testing sessions for all sprints. In Sprint 4, I also took on the role of facilitating work between the research team and the design team. I made sure that questions from the design team were answered by the user research team and that all findings were translated into actionable ideas for the design team. This role required me to stay up to date with the progress and tasks of both teams while ensuring that the entire team met all deadlines established by our client and the capstone course; I did this by sending out reminders a day or two before each deadline. I would like to thank my team for trusting my judgement in making most of the internal decisions (supported by logical reasoning, of course). I would also like to thank our client for being approachable and responsive. It has been a valuable experience taking part in this big mission by the LOC, and I hope that some elements from our design will be live on the real site one day!

© 2023 Jocelyn Sun