New Gradebook Tool

2015-2017 · Sakai Community / NYU

Overview video for the New Gradebook, produced by Kyle

 
 

In response to feedback from faculty and staff, in 2015 NYU IT began work on a new, modernized grading tool for the university’s Learning Management System (LMS), Sakai. This new Gradebook, the design and features for which were informed by multiple usability testing cycles, provided faculty with a clean, self-evident, and easy-to-use interface reminiscent of other modern web applications. Working closely with the global Sakai community, Kyle and team released this new Gradebook as part of the core open-source product, where it continues to be used actively by millions of learners. Released as the default Gradebook for NYU in 2017, it has been used by tens of thousands of courses each year.

 

Project Overview

Need
Develop a redesigned Gradebook tool for the LMS that addresses the usability pain points and feature gaps of the existing tool. Modernize the user experience to bring the tool in line with other contemporary web applications and ensure parity with other top LMS platforms. Gather extensive usability feedback and iterate to provide the best possible grading tool within the system.

Solution
The New Gradebook was created in collaboration with the global Sakai community and contributed as a major feature of the 2016 Sakai version 11 release, ensuring its long-term sustainability and compatibility. The tool was piloted on an opt-in basis in Spring 2017 and released as the default Gradebook for all course sites in NYU Classes in Summer 2017.

 

Role
Product Analyst, UI/UX Designer, UX Researcher

Task
Evaluate user data to determine the features required for a redesigned grading tool. Design the new Gradebook interface, ensuring that all existing grading functionality is preserved. Conduct multiple rounds of usability testing and iterate as needed. Deploy the new tool to faculty at NYU, and work with the broader Sakai community to incorporate it into the core Sakai 11 release.

Tools
OmniGraffle, JIRA

Process

 
UX workflow diagram
 

Empathize

Survey Data

NYU transitioned from its existing Learning Management System to the Sakai open-source LMS in 2013. In adopting an open-source platform, the university was free to customize the system’s code as needed, as well as to develop entirely new tools and features for the platform. However, to develop for the platform in a truly sustainable way, custom code would need to be written in collaboration with the global Sakai community and eventually merged back into the core product.

With this in mind, the university conducted a faculty-wide survey in 2014 to determine the greatest pain points in the system and opportunities to improve the core product, not just for NYU but for the global Sakai community. Receiving roughly 1,000 responses, this survey indicated that faculty were most dissatisfied with the system’s Gradebook tool.

System data indicated that, despite being one of the most (if not the most) critical tools within a teaching and learning platform, the Gradebook was used by only roughly 30% of course sites in the system. Among survey respondents, 42% were actively dissatisfied with the Gradebook, 17 percentage points more than for the second-most disliked tool.

With these survey results in mind, the university decided to dedicate IT resources to redesigning the Gradebook and providing a more modern, streamlined, and easy-to-use experience for faculty.

 

Data Analysis / Requirements Gathering

Top Gradebook Pain Points / Feature Requests

  • Spreadsheet-style grade entry (High Priority)

  • Confusing and limited import/export process (High Priority)

  • Inability to assign TAs to individual categories/sections (High Priority)

  • Lack of granular grade statistics (Medium Priority)

  • Easier sorting / filtering (Medium Priority)

  • Greater visibility into extra credit grade status (Low Priority)

  • Easier commenting capabilities (Low Priority)

With the project established, Kyle and the LMS Product team worked to define the new tool’s requirements. Kyle reviewed system data (e.g., bug reports) alongside the feedback received from users since implementation. The sources of this feedback included:

  • Ticket data submitted through the IT Help Desk

  • Feedback received during trainings/consultations

Kyle aggregated this feedback and weighted the results based on the frequency of their occurrence. It was immediately clear that the biggest pain point with the existing tool was the way that grades were displayed; its legacy item-based view, wherein grades were entered one screen at a time, was frustrating to users and prevented them from seeing a holistic view of students’ grades. Consistently, users requested a spreadsheet-style interface for their grading.
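
To make the frequency-based weighting concrete, the sketch below shows one way feedback items could be tallied by theme and ranked by number of mentions. The FeedbackItem shape, the theme strings, and the rankThemes function are illustrative assumptions for this sketch, not the team’s actual analysis tooling.

// Minimal sketch: tally feedback by theme and rank by frequency of mention.
// Data shapes and theme names are hypothetical.
type FeedbackItem = { source: "helpdesk" | "training"; theme: string };

function rankThemes(items: FeedbackItem[]): Array<{ theme: string; mentions: number }> {
  const counts = new Map<string, number>();
  for (const item of items) {
    counts.set(item.theme, (counts.get(item.theme) ?? 0) + 1);
  }
  // Most frequently mentioned themes first.
  return Array.from(counts, ([theme, mentions]) => ({ theme, mentions }))
    .sort((a, b) => b.mentions - a.mentions);
}

// Example: spreadsheet-style entry surfaces as the top request.
const ranked = rankThemes([
  { source: "helpdesk", theme: "Spreadsheet-style grade entry" },
  { source: "training", theme: "Spreadsheet-style grade entry" },
  { source: "helpdesk", theme: "Import/export process" },
]);
console.log(ranked[0]); // { theme: "Spreadsheet-style grade entry", mentions: 2 }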

In addition, the other high-priority issues with the existing Gradebook revolved around the difficulty of importing and exporting grades (the existing tool offered two different methods for importing grades, either at the whole-Gradebook level or at the individual level; neither method, it turned out, was comprehensive) and the inability to assign TAs to individual categories/sections.

Design (Iteration 1)

Low-Fidelity Wireframe

Initial, low-fidelity spreadsheet interface design

Kyle’s first consideration when approaching the design for the new tool was simple: do not take away existing functionality. The new tool would be built on the underlying database model of the existing Gradebook, and while users had expressed frustration with the existing interface, the new tool would not do away with existing features.

Based on the requirements and feedback that had been gathered, Kyle created very simple, low-fidelity wireframes that could be used to communicate the overall proposed design to users for usability testing. He created a simple, spreadsheet-style interface with multiple screens for different settings, dropdown actions, and options reflecting the capabilities of the existing tool, along with representative indicators for some of the newly proposed features (e.g., granular grade statistics).

With this initial design, the team then worked to gather initial usability feedback.

Test Usability (Round 1)

Usability Expert Consultation

Going into this project, Kyle and the team were completely new to the process of usability testing (this also predated the presence of a dedicated Usability Lab or UX professionals at the university). Knowing, however, that usability was going to be paramount to the success of the new design, the team consulted with an external usability/accessibility vendor to collaboratively produce a long-form (30-45 minute) usability testing protocol and conduct a first round of testing.

 

Testing with Paper Prototypes

Leveraging the vendor’s expertise, Kyle and the team learned the fundamentals of lightweight usability testing. Following the recommendations of UX experts such as the Nielsen Norman Group, Jared Spool, and Steve Krug, who propose that effective testing can be conducted with 5-10 users, the team proceeded to conduct an initial round of testing with 7 participants. Using Kyle’s initial designs, the team printed out various screens on paper and encouraged participants to draw on, circle, and annotate the designs during the testing sessions.

From this initial round of testing, Kyle and the team received valuable feedback that would inform a second design iteration, including:

  • The ability to group or ungroup items by category

  • Grade averages surfaced in column headers

  • Clear save status indicators (i.e., a “Saving…” / “All Changes Saved” message at the top of the tool)

  • The ability to show/hide grade items

  • The ability to easily filter/sort by student

Design (Iteration 2)

High-Fidelity Wireframe / Interactive Prototype

Using the results of the first round of usability testing, Kyle proceeded to develop higher-fidelity wireframes that incorporated more of the workflows within the tool, including various Settings screens, lightbox windows, and view options. With these designs in place, Kyle worked closely with the project’s front-end developer to create a single-screen HTML/CSS/JS prototype that mirrored the designs. This simple interactive prototype would allow for limited clickable interactions during future rounds of usability testing, and could be supplemented by screenshots of the design to demonstrate additional screens.
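
To give a sense of the kind of interaction this clickable prototype supported, below is a minimal TypeScript/DOM sketch of a grade-cell edit that updates a “Saving…” / “All Changes Saved” status indicator. The element IDs, class names, and simulated save delay are hypothetical stand-ins; the actual prototype code is not reproduced here.

// Minimal sketch of a prototype interaction: editing a grade cell flags invalid
// input and updates a save-status indicator. IDs/classes are hypothetical.
const statusEl = document.getElementById("save-status");

function simulateSave(): Promise<void> {
  // The prototype had no real backend; a short timeout stands in for a server round trip.
  return new Promise((resolve) => setTimeout(resolve, 500));
}

async function onGradeEdited(cell: HTMLInputElement): Promise<void> {
  // Mark non-numeric entries so an error state can be styled (illustrative only).
  cell.classList.toggle("invalid", Number.isNaN(Number(cell.value)));
  if (statusEl) statusEl.textContent = "Saving…";
  await simulateSave();
  if (statusEl) statusEl.textContent = "All Changes Saved";
}

// Wire up every grade input in the spreadsheet-style table.
document.querySelectorAll<HTMLInputElement>("input.grade-cell").forEach((cell) => {
  cell.addEventListener("change", () => void onGradeEdited(cell));
});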

Clickable HTML Prototype

Non-Clickable Design for Settings Screen

Test Usability (Rounds 2 & 3)

Testing with Clickable HTML & Paper Prototypes

Modifying the test protocol that had been produced in collaboration with the usability/accessibility vendor, Kyle and the team conducted two additional rounds of usability testing with 13 more participants, iterating on the designs between rounds to incorporate the feedback received. These rounds of testing consisted of:

  • Round 2: 7 participants

  • Round 3: 6 participants

For these additional rounds, Kyle and the team incorporated two additional elements into the testing:

  • A section in which participants would click into the interactive prototype and enter grades for individual students, exercising the edit/save/error flow

  • A set of qualitative feedback questions at the end, to get a sense of overall ease-of-use

 

User Feedback

Feedback from these sessions provided additional valuable input, particularly in relation to including self-explanatory labels for different states (e.g., concurrent edit errors, extra credit, etc.), the overall styling of the spreadsheet UI (e.g., the need for zebra striping to better differentiate between rows), and the location of different features (e.g., course grade overrides, which made more sense to users as an actionable option within the spreadsheet itself, rather than an option on a separate settings screen).

This testing also uncovered additional use cases that helped inform the features within the tool. One such example, while initially seeming like an edge case, helped inform the inclusion of a dedicated student overview / grade preview section within the tool.

Student Preview Mode feature designed based on faculty feedback

 

Feature Use Case: Student Overview / Grade Preview

Feedback

A faculty member described a situation in which they wanted to review only a specific student's grades. This was not possible in the previous UI; they had to pull up all grades and focus on the single row for that student.

Moreover, they recounted a scenario in which a student came to the faculty member's office hours to review their grades. Without an option to preview the Gradebook from the student's perspective, the instructor described literally holding up a piece of paper over their computer screen to hide other students' grade data.


Resulting Feature

Based on this feedback, Kyle designed a new view within the Gradebook: when clicking on a student's name, a lightbox would open showing only that student's grades. The faculty member could then preview the Gradebook from that student's perspective by entering a "Student View Mode"; this would show only grades that were currently released, and would obscure other students' data behind the lightbox.
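
As a rough illustration of the preview logic described above, the sketch below filters a single student's grade records down to only the items whose grades have been released. The GradeRecord shape and isReleased flag are illustrative assumptions and do not reflect Sakai's actual data model.

// Minimal sketch: "Student View Mode" shows only released grade items for one student.
// The data shape below is hypothetical.
interface GradeRecord {
  itemName: string;
  points: number | null;
  isReleased: boolean; // has the instructor released this item's grades to students?
}

function studentViewGrades(records: GradeRecord[]): GradeRecord[] {
  // Hide anything not yet released, mirroring what the student would actually see.
  return records.filter((record) => record.isReleased);
}

// Example: only the released quiz appears in the preview.
const preview = studentViewGrades([
  { itemName: "Quiz 1", points: 9, isReleased: true },
  { itemName: "Midterm", points: 41, isReleased: false },
]);
console.log(preview.map((r) => r.itemName)); // ["Quiz 1"]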

 

Testing Results

Usability testing feedback summary (chart)

Through continuous iteration between usability testing sessions, Kyle and the team found that feedback on the new tool was universally positive. By asking a standardized set of qualitative Likert-scale questions at the end of each session, he could reliably aggregate feedback at the conclusion of testing.

When asked how easy to use the new tool would be, based on their experience during testing, 12 out of 13 participants rated it as “easy,” while 1 participant rated it as “average.”

When asked to rate the tool against other Gradebook tools they had used (either in other LMS platforms or based on their experience with the existing Gradebook tool in Sakai), 13 out of 13 participants rated it as “better.”

And, when asked about their likelihood to recommend the new Gradebook tool to other faculty, 12 out of 13 participants stated that they were “likely” to recommend it, while 1 participant was “neutral.”

Overall, this universally positive feedback, along with the feature iterations that were possible due to this testing, allowed Kyle to finalize the designs and pass them off to the project’s developers.

Deploy

Development / Sakai 11 Release

Coordinating closely with the community’s governing body, the team worked to ensure that the tool was being developed in such a way that it could be easily included in the next major Sakai version release (v. 11.0). The benefit here was twofold:

  • By making the tool available as part of the core product, NYU would not be running a highly differentiated “branch” of the platform; this would make it easier in the future to upgrade the system, merge patches, etc.

  • By releasing the tool into the global open-source community, NYU would no longer “own” the ongoing maintenance and development of the tool. Other schools / vendors would be able to contribute to the tool’s long-term vision, and NYU would be able to leverage these improvements moving forward.

Throughout the remainder of 2015, the developers wrapped up development of the tool. During this time, Kyle worked closely with the Sakai community to contribute a number of other major features for the Sakai 11 release.

Once development was complete, Kyle and the team conducted QA on the tool, both locally with other teams at NYU and with the Sakai community at large. In late Spring 2016, the New Gradebook was released as part of Sakai 11 to a global community of 4 million+ learners:

 

Sakai Gradebook tool, as made available in Sakai 11

 
 

NYU Release

To make the tool available to the NYU community, the team worked to ensure that users would be using the core community product. While a select few faculty used a local version of the tool in NYU’s Sakai instance in Fall 2016, the main rollout would begin in Spring 2017 once NYU was on the Sakai 11 release version.

In Winter 2016, the university upgraded to the v.11 release and made the new Gradebook available as an “opt-in” pilot for the Spring 2017 semester. In Summer 2017, the tool was enabled as the default for all courses moving forward.

 

Outcome

The New Gradebook project was a resounding success. In Summer 2017 (the first term with the new Gradebook enabled by default), adoption of the tool had already increased by 27%.

Moreover, when satisfaction was again measured via an all-faculty survey in Spring 2018, over 89% of respondents indicated that the Gradebook was meeting their needs well.

 
2018 faculty satisfaction survey results (chart)
 
 

Ongoing Development

The Gradebook as it appears in Sakai 21.

Since its release to the global community in 2016, the Gradebook has continued to receive enhancements developed by other schools and vendors in each major release. These include:

  • v. 12 (2018) — Refactored table with infinite scroll for large courses

  • v. 19 (2019) — Inclusion of rubrics, along with improved grading analytics and visualizations

  • v. 20 (2020) — Option for equal weighting of items in categories, a new full-screen mode, and the ability to message students from within the Gradebook

  • v. 21 (2021) — Additional enhancements for exporting class averages

The current Gradebook and its features continue to evolve, built upon the UX foundations established by Kyle and team.
