Overview

Usability testing focuses on understanding the ease of use and effectiveness of a system. This lecture covers essential evaluation methods including heuristic evaluation, cognitive walkthroughs, and think-aloud studies. You'll learn how to measure usability through objective metrics and gather meaningful user feedback to improve designs.

Key Insight Usability impacts user satisfaction, productivity, and error rates. A well-designed system reduces cognitive load and helps users achieve their goals efficiently.

Lecture Material (PDF)


Evaluation Methods

Different evaluation methods help identify usability problems at various stages of development. Expert-based methods like heuristic evaluation are cost-effective, while user-based methods provide direct insights into real user behaviour.

  • Heuristic Evaluation

    A method to identify usability problems using established usability principles (Nielsen, 1994). Experts review the interface against a set of heuristics like visibility, consistency, and error prevention.

  • Cognitive Walkthrough

    Evaluating task flow based on user goals and system feedback (Wharton et al., 1994). Steps include defining tasks, identifying goals, and analysing whether users can determine correct actions.

  • Think-Aloud Protocol

    Users verbalise their thoughts while interacting with a system. Provides insights into user decision-making processes and reveals confusion or frustration in real-time.

  • Contextual Inquiry

    Observing users in their natural environment to gather insights. Combines observation with interview techniques to understand real-world usage patterns.

Nielsen's Heuristics Common usability principles include: visibility of system status, match between system and real world, user control, consistency, error prevention, recognition over recall, flexibility, aesthetic design, error recovery, and help documentation.
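
As a concrete illustration of how an expert review might be recorded, the sketch below pairs Nielsen's ten heuristics with a simple finding record that carries a severity rating. The `HeuristicFinding` structure, the severity scale, and the example finding are assumptions made for this sketch, not part of the lecture material.

```python
from dataclasses import dataclass

# Nielsen's 10 usability heuristics (Nielsen, 1994), used here as a review checklist.
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognise, diagnose, and recover from errors",
    "Help and documentation",
]

@dataclass
class HeuristicFinding:
    """One usability problem noted by an expert reviewer (illustrative structure)."""
    heuristic: str    # which heuristic is violated
    location: str     # screen or component where it occurs
    description: str  # what the problem is
    severity: int     # assumed scale: 1 = cosmetic ... 4 = usability catastrophe

# Example finding from a hypothetical checkout-form review.
finding = HeuristicFinding(
    heuristic=NIELSEN_HEURISTICS[4],  # Error prevention
    location="Checkout form",
    description="Card number field accepts letters; the error only appears after submit.",
    severity=3,
)
print(f"[severity {finding.severity}] {finding.heuristic}: {finding.description}")
```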

Usability Metrics

Objective measures help quantify usability and track improvements over time. These metrics provide concrete data to support design decisions and demonstrate the value of usability improvements; a short worked calculation follows the list below.

  • Task Completion Rate

    Percentage of users who successfully complete a task. Aim for >85% completion rate for critical tasks.

  • Error Rate

    Tracks errors made during interaction. Lower is better - target around 5% or less for well-designed interfaces.

  • Time on Task

    Measures efficiency by recording time needed to complete tasks. Compare against benchmarks to identify bottlenecks.

  • System Usability Scale (SUS)

    A standardised 10-item survey to evaluate usability (Brooke, 1996). Scores above 68 indicate above-average usability.

  • Net Promoter Score (NPS)

    Measures user loyalty based on likelihood to recommend. Users rate 0-10; scores are calculated as % promoters minus % detractors.
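
A minimal sketch of how these metrics could be computed from a handful of test sessions. The session data and field names are invented, and error rate is counted here as the percentage of attempts with at least one error; published studies define it in several different ways.

```python
from statistics import mean, median

# Hypothetical results from five participants attempting one critical task.
sessions = [
    {"completed": True,  "errors": 0, "time_s": 74,  "nps": 9},
    {"completed": True,  "errors": 1, "time_s": 102, "nps": 7},
    {"completed": False, "errors": 3, "time_s": 180, "nps": 4},
    {"completed": True,  "errors": 0, "time_s": 66,  "nps": 10},
    {"completed": True,  "errors": 2, "time_s": 95,  "nps": 8},
]
n = len(sessions)

# Task completion rate: percentage of participants who finished the task.
completion_rate = 100 * sum(s["completed"] for s in sessions) / n

# Error rate (assumed definition): percentage of attempts with at least one error.
error_rate = 100 * sum(s["errors"] > 0 for s in sessions) / n

# Time on task: report mean and median, since completion times are often skewed.
times = [s["time_s"] for s in sessions]

# NPS: % promoters (9-10) minus % detractors (0-6); passives (7-8) are ignored.
promoters = sum(s["nps"] >= 9 for s in sessions)
detractors = sum(s["nps"] <= 6 for s in sessions)
nps = 100 * (promoters - detractors) / n

print(f"Completion rate: {completion_rate:.0f}%  (target > 85%)")
print(f"Attempts with errors: {error_rate:.0f}%")
print(f"Time on task: mean {mean(times):.0f}s, median {median(times):.0f}s")
print(f"NPS: {nps:+.0f}")
```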

SUS Scoring SUS yields a single score from 0 to 100: odd-numbered (positively worded) items score as the response minus 1, even-numbered (negatively worded) items as 5 minus the response, and the total is multiplied by 2.5. The scale is quick to administer and provides reliable results across different types of interfaces.
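
A minimal scoring sketch following that procedure; the example responses are invented.

```python
def sus_score(responses):
    """Score one respondent's SUS questionnaire (Brooke, 1996).

    `responses` is a list of 10 answers on a 1-5 agreement scale, in the
    standard item order (odd items positively worded, even items negatively worded).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs 10 responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd: r - 1, even: 5 - r
    return total * 2.5  # scales the 0-40 sum to 0-100

# Hypothetical respondent with mostly positive answers.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0 -> well above the 68 benchmark
```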

Testing Techniques

Various testing techniques provide different types of insights. Combining quantitative and qualitative methods creates a comprehensive understanding of usability issues.

  • A/B Testing

    Comparing two versions of a system to determine which performs better. Provides quantitative data on user preferences and behaviour differences; a simple statistical comparison is sketched after this list.

  • Eye-Tracking Studies

    Analyse user focus and navigation patterns using specialised equipment. Produces heatmaps showing where users look and for how long.

  • Remote Usability Testing

    Allows users to test systems from their own environment. Increases participant diversity and tests in realistic conditions.

  • Controlled Experiments

    Studies that test usability changes under controlled conditions. Isolates variables to determine cause-and-effect relationships.
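
For an A/B test on a binary outcome such as task completion, one common analysis (not prescribed by the lecture) is a two-proportion z-test under the normal approximation. The counts below are invented for illustration.

```python
from math import sqrt, erfc

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in completion rates between versions A and B."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value under the normal approximation
    return p_a, p_b, z, p_value

# Hypothetical experiment: 200 users per variant, task completion as the outcome.
p_a, p_b, z, p = two_proportion_z_test(success_a=148, n_a=200, success_b=170, n_b=200)
print(f"A: {p_a:.0%}  B: {p_b:.0%}  z = {z:.2f}  p = {p:.3f}")
```

With 200 users per variant, the 11-point difference above gives p ≈ 0.007, which is small enough to treat as unlikely under chance; with the 3-5 participants typical of the lab exercises later in this page, such tests are underpowered and qualitative observations carry more weight.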


Design Principles for Usability

Good usability is built on solid design principles. These guidelines help create interfaces that are intuitive, efficient, and satisfying to use.

  • Consistency in Design

    Ensures uniformity in navigation and interaction patterns. Users learn faster when similar actions produce similar results.

  • Minimising Cognitive Load

    Reduces mental effort needed to use a system. Present information progressively and avoid overwhelming users with choices.

  • Feedback Loops

    Providing users with immediate and clear system feedback. Users should always know what the system is doing and whether their actions succeeded.

  • Error Recovery

    Designing for smooth recovery from errors. Provide clear error messages and easy ways to undo or correct mistakes.

  • Progress Indicators

    Show users their progress in multi-step tasks. Reduces uncertainty and helps users estimate time to completion.

Design Tip Designing clear and actionable error messages is critical. Users should understand what went wrong and how to fix it without technical jargon.
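
As a small, assumed illustration of that tip, the sketch below returns a plain, actionable message rather than an error code for a hypothetical postcode field; the validation rule and wording are not taken from the lecture.

```python
import re

def validate_postcode(value):
    """Return None if valid, otherwise an actionable error message.

    The format check is illustrative (a simplified UK-style pattern),
    not a real validation rule from the lecture.
    """
    if not value.strip():
        # Bad:  "Error 4012: invalid input."
        # Good: says what is wrong and what to do next, without jargon.
        return "Please enter your postcode, e.g. YO10 5DD."
    if not re.fullmatch(r"[A-Za-z]{1,2}\d[A-Za-z\d]?\s*\d[A-Za-z]{2}", value.strip()):
        return "That postcode doesn't look right. Check it matches the format on your post, e.g. YO10 5DD."
    return None

print(validate_postcode(""))          # actionable prompt instead of an error code
print(validate_postcode("YO10 5DD"))  # None -> passes
```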

Advanced Methods & Future Trends

Modern usability testing incorporates advanced techniques and emerging technologies to gather deeper insights and test with diverse user groups.

  • Persona Development

    Creating user personas to guide design decisions. Personas represent key user groups and their goals, frustrations, and behaviours.

  • User Journey Mapping

    Visual representation of user interactions over time. Maps the complete experience from initial contact through task completion.

  • Accessibility Testing

    Ensures usability for users with disabilities. Follow WCAG standards to create inclusive designs; a contrast-ratio check is sketched after this list.

  • Mobile & Responsive Design

    Special considerations for usability on mobile devices. Adapts layout and functionality to different screen sizes and touch interactions.

  • Cross-Cultural Usability

    Designing for a global and diverse audience. Consider language, cultural conventions, and regional preferences.

  • AI & VR in Testing

    Exploring innovations like AI-powered analytics and VR environments for immersive usability testing.
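
One concrete accessibility check that can be automated is the WCAG contrast ratio between text and background colours. The sketch below implements the standard WCAG formula; the example colours are arbitrary, and 4.5:1 is the WCAG AA minimum for normal body text.

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB colour given as 0-255 channel values (WCAG definition)."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between foreground and background (ranges from 1:1 to 21:1)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Light grey text (#999999) on white fails the 4.5:1 minimum for normal body text (WCAG AA).
ratio = contrast_ratio((153, 153, 153), (255, 255, 255))
print(f"{ratio:.2f}:1  ->  {'pass' if ratio >= 4.5 else 'fail'} for normal text")
```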


Labs & Practical Exercises

Apply usability testing concepts through hands-on activities evaluating real forms and interfaces. Work through planning, conducting, and analysing usability tests.

  • Section 1: Introduction

    Brainstorm usability factors, analyse well-designed vs poorly-designed forms, and review Nielsen's 10 Usability Heuristics.

  • Section 2: Planning

    Define test objectives, recruit 3-5 participants, develop realistic tasks, and consider ethical requirements like informed consent.

  • Section 3: Conducting Tests

    Practise think-aloud protocol facilitation, record observations, task completion times, and errors. Use NNGroup templates for data collection.

  • Section 4: Analysis

    Synthesise observations, identify recurring issues, rate severity (low/medium/high), and generate concrete recommendations for redesign.
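
To connect the conducting and analysis steps, the sketch below tallies a hypothetical observation log into recurring issues ranked by severity and frequency. The issue labels, severity weights, and participant count are invented for illustration.

```python
from collections import Counter

# Hypothetical observation log from Section 3: (participant, issue, severity).
observations = [
    ("P1", "Missed the 'Save' button below the fold", "high"),
    ("P2", "Missed the 'Save' button below the fold", "high"),
    ("P4", "Missed the 'Save' button below the fold", "high"),
    ("P1", "Unclear date format in the booking field", "medium"),
    ("P3", "Unclear date format in the booking field", "medium"),
    ("P5", "Didn't notice the confirmation toast", "low"),
]

SEVERITY_WEIGHT = {"low": 1, "medium": 2, "high": 3}  # assumed three-level scale

# Count how many participants hit each issue, then rank by severity and frequency.
counts = Counter((issue, severity) for _, issue, severity in observations)
ranked = sorted(counts.items(),
                key=lambda item: (SEVERITY_WEIGHT[item[0][1]], item[1]),
                reverse=True)

for (issue, severity), freq in ranked:
    print(f"{severity:<6} seen by {freq}/5 participants: {issue}")
```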


Tools & Further Reading

VLE Page

Course management and resources.