Metrics, Usability, and Digital Experience

How research shaped the way I approach strategy, UX, and data-driven decision-making.

Overview

My graduate research examined how usability, behavior, and design influence the success of non-transactional websites, especially in complex environments where success isn’t a single “conversion” but a combination of clarity, trust, and accessibility.

The work explored whether traditional web metrics could predict user satisfaction. They couldn’t — and that insight continues to inform how I build digital experiences today.

Thesis Summary

This study investigated whether common analytics metrics — such as bounce rate, session duration, and pageviews — could accurately reflect user experience on a higher-education website with no direct transactional goals. Through remote usability testing with nine North Carolina residents, the research uncovered significant usability issues and revealed no meaningful correlation between metrics and actual satisfaction or task success. The study concludes that for non-transactional or information-heavy websites, direct usability testing is the only reliable method of measuring success.

Why this research still matters

Higher-education, healthcare, and public-sector websites share a challenge:
they serve diverse audiences with complex information, but they rarely have a single “conversion rate” to measure success.

This research highlights lessons that apply across industries today — reinforced by my Google Data Analytics Professional Certificate training:

  • Metrics alone don’t explain behavior.
    Numbers tell you what happened, not why — and good analysts learn to combine data with intuition to uncover the real story.

  • AI can be the first stop, but not the last.
    Automated insights and machine-generated summaries are useful starting points, but human interpretation is still essential for context, nuance, and accuracy.

  • Usability issues hide behind “good” analytics.
    High traffic can coexist with confusion, errors, or abandonment, making qualitative insight just as important as quantitative data.

  • Clarity and plain language outperform jargon.
    Users trust websites that speak simply, guide them clearly, and help them recover from missteps.

These principles — combining data, intuition, usability, and human-centered interpretation — guide my approach to UX, content strategy, campaign development, and analytics for mission-driven organizations.

Research Methods

To explore the relationship between metrics and usability, I conducted:

  • Remote guided usability tests with nine participants

  • Task analysis based on Nielsen’s five usability components

  • Screen recordings for behavioral insights

  • Post-test surveys for satisfaction, clarity, and perceived value

  • Comparisons to real site metrics (bounce rate, time on page, search entry)

This combination of qualitative and quantitative data allowed me to test where analytics fall short — and where human-centered testing reveals deeper truths.
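
To make that comparison concrete, here is a minimal sketch of the kind of check involved: pairing per-participant session metrics with task success and post-test satisfaction, then testing for correlation. The column names and values are hypothetical placeholders, not the study’s actual data or analysis code.

  # Minimal sketch of comparing analytics-style metrics with usability outcomes.
  # Participant values and column names are hypothetical, not study data.
  import pandas as pd
  from scipy.stats import spearmanr

  # One row per usability-test participant.
  sessions = pd.DataFrame({
      "participant":     ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8", "P9"],
      "time_on_page_s":  [95, 240, 130, 310, 88, 150, 275, 120, 200],
      "pages_viewed":    [4, 9, 5, 11, 3, 6, 10, 5, 7],
      "tasks_completed": [3, 1, 4, 2, 4, 3, 1, 4, 2],   # out of 4 scripted tasks
      "satisfaction":    [5, 4, 4, 5, 3, 4, 5, 4, 3],   # 1-5 post-test survey rating
  })

  # Do the "engagement" metrics actually track task success or satisfaction?
  for metric in ["time_on_page_s", "pages_viewed"]:
      for outcome in ["tasks_completed", "satisfaction"]:
          rho, p = spearmanr(sessions[metric], sessions[outcome])
          print(f"{metric} vs {outcome}: rho={rho:.2f}, p={p:.3f}")

A weak or non-significant correlation in a check like this mirrors the study’s central finding: the metrics didn’t track what participants actually experienced.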

Key Findings

  1. Metrics didn’t predict success.

    Participants with identical analytics footprints had completely different experiences — one successful, one frustrated.

  2. Terminology and structure created critical errors.

    Users struggled with internal jargon (“advancement,” “program inventory”) and confusing navigation paths.

  3. “Pretty” design hid deeper problems.

    People liked the visuals but still failed core tasks, proving that aesthetics ≠ usability.

  4. Satisfaction remained high despite errors.

    A reminder that user perception and user performance don’t always align — and that satisfaction alone can’t define “success.”

Impact

This research helped improve the UNC System website, informed policy communication, and strengthened digital accessibility considerations. More broadly, it forms the foundation of how I now approach:

  • Campaign strategy

  • UX and content clarity

  • Analytics interpretation

  • User-centered design

  • Digital ecosystem optimization

It established the philosophy I still follow today:

Measure what matters. Test what metrics can’t explain.

The Thesis

Abstract

The purpose of this study was to discover relationships between metrics and usability standards to assess the success of the University of North Carolina System Office website. Though arguably no relationship was found, the study results will be used to improve the UNC System Office website and other non-transactional websites that convey policy-based information to the general public. Remote usability testing with nine users on desktop computers revealed that the site succeeds in terms of aesthetic design and top-level navigation but suffers from critical errors, poor organization, and an overuse of industry-specific terminology. The testing results demonstrate the importance of presenting the wide breadth of information to the general public in a way that is visually appealing and topical for some while direct and deeply specialized for others. Above all, the study provides a case study in conducting usability testing.

Measure what matters.
Test what metrics can’t explain.

Let's talk.