User Interface Evaluation

The next two weeks
Oct 21 & 23: Lectures on user interface evaluation
Oct 28: Lecture by Dr. Maurice Masliah; no office hours (out of town)
Oct 30: Midterm in class; no office hours (out of town)
Midterm material
Everything up to exactly this point (including DemoCustomDialog)
Things to study:
Slides
Programs
Javadoc
No need to memorize all methods of Swing classes. Familiarity with the most common ones will be tested, though.
Evaluating User Interfaces
Material taken mostly from “Interaction Design” (Preece, Rogers, Sharp 2002)
User Interface Humor
User Interface Evaluation
Users want systems that are easy to learn and use
Systems also have to be effective, efficient, safe, and satisfying
Important to know:
What to evaluate
Why it is important
When to evaluate
What to evaluate
All evaluation studies must have specific goals and must attempt to address specific questions
A user interface has a vast array of features that could be evaluated
Some are best evaluated in a lab, e.g. the sequence of links a user follows to find a website
Others are better evaluated in natural settings, e.g. whether children enjoy a particular game
Why it is important to evaluate
Problems are fixed before the product is shipped, not after
One can concentrate on real problems, not imaginary ones
Developers code instead of debating
Time to market is sharply reduced
The finished product is immediately usable
When to evaluate
Ideally, as early as possible (from the prototyping stage) and then repeatedly throughout the development process.
“Test early and often.”
Evaluation Paradigms
“Quick and Dirty” evaluation
Usability Testing
Field studies
Predictive evaluation
“Quick and Dirty” evaluation
A user-centered, highly practical approach
Used when quick feedback about a design is needed
Can be conducted in a lab or in the user’s natural environment
Users are expected to behave naturally
Evaluators exert minimal control
Sketches, quotes, and descriptive reports are fed back into the design process
Usability Testing
An applied approach based on experimentation
Used when a prototype or a product is available
Takes place in a lab
Users carry out set tasks
Evaluators are strongly in control
Users’ opinions are collected by questionnaire or interview
Reports of performance measures, errors, etc. are fed back into the design process
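
As a rough illustration of how such performance measures might be collected (a minimal sketch; the class and method names are hypothetical, not from the lecture), a Swing application can be instrumented to record the time a user takes to complete a set task and the number of errors made along the way:

    // Hypothetical sketch: records task time and error count for one set task.
    // A real usability test would log many tasks and users, typically to a file.
    public class TaskLogger {
        private long startMillis;
        private int errorCount;

        public void taskStarted() {
            startMillis = System.currentTimeMillis();
            errorCount = 0;
        }

        // Call whenever the user makes an error, e.g. opens the wrong dialog.
        public void errorOccurred() {
            errorCount++;
        }

        public void taskFinished() {
            long elapsed = System.currentTimeMillis() - startMillis;
            System.out.println("Task took " + elapsed + " ms with "
                    + errorCount + " error(s)");
        }

        public static void main(String[] args) {
            TaskLogger log = new TaskLogger();
            log.taskStarted();
            // ... user works on the task; errorOccurred() is called as needed ...
            log.errorOccurred();
            log.taskFinished();
        }
    }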
Field studies
Often used early in design to check that users’ needs are met, or to assess problems or design opportunities
Conducted in the user’s natural environment
Evaluators try to develop relationships with users
Qualitative descriptions that include quotes, sketches, and anecdotes are produced
Predictive evaluation
Does not involve users
Expert evaluators use practical heuristics and practitioner expertise to predict usability problems
Usually conducted in a lab
Reviewers provide a list of problems, often with suggested solutions
Evaluation techniques
Observing users
Asking users their opinions
Asking experts their opinions
Testing users’ performance
Modeling users’ task performance to predict the efficacy of a user interface
The DECIDE framework
Determine the overall goals that the evaluation addresses
Explore the specific questions to be answered
Choose the evaluation paradigm and techniques
Identify practical issues
Decide how to deal with the ethical issues
Evaluate, interpret, and present the data
Determine the overall goals
What are the high-level goals of the evaluation?
Examples:
Check that evaluators have understood the users’ needs
Ensure that the final interface is consistent
Determine how to improve the usability of a user interface
Explore specific questions
Break down the overall goals into relevant questions
Overall goal: Why do customers prefer paper tickets to e-tickets?
Specific questions:
What is the customer’s attitude to e-tickets?
Do they have adequate access to computers?
Are they concerned about security?
Does the electronic system have a bad reputation?
Is its user interface poor?
Choose paradigm and techniques
Practical and ethical issues must be considered
Factors:
Cost
Timeframe
Available equipment or expertise
Compromises may have to be made
Identify practical issues
Important to do this before starting:
Find appropriate users
Decide on the facilities and equipment to be used
Allow for schedule and budget constraints
Prepare testing conditions
Plan how to run the tests
Decide on ethical issues
Studies involving human subjects must uphold an ethical code
Subjects’ privacy must be protected
Personal records must be kept confidential
An exact description of the experiment must be submitted for approval
Evaluate the data
Should quantitative data be treated statistically?
How should qualitative data be analyzed?
Issues to consider:
Reliability (consistency)
Validity
Biases
Scope
Ecological validity
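
As a toy example of treating quantitative data statistically (the task times below are invented purely for illustration), one might start by summarizing measured task times with a mean and sample standard deviation before deciding whether a formal statistical test is warranted:

    // Toy example with invented task times; computes mean and sample std. dev.
    public class TaskTimeStats {
        public static void main(String[] args) {
            double[] secs = {12.0, 15.5, 11.2, 14.8, 13.0}; // hypothetical data
            double sum = 0;
            for (double t : secs) sum += t;
            double mean = sum / secs.length;
            double ss = 0;
            for (double t : secs) ss += (t - mean) * (t - mean);
            double sd = Math.sqrt(ss / (secs.length - 1)); // sample standard deviation
            System.out.printf("mean = %.2f s, sd = %.2f s%n", mean, sd);
        }
    }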
We’ll take a closer look at…
Two predictive evaluation techniques:
Heuristic evaluation
Cognitive walkthroughs
A usability testing technique:
User testing
Heuristic Evaluation
Heuristic evaluation is a technique in which experts, guided by a set of usability principles known as heuristics, evaluate whether user interface elements conform to the principles.
Developed by Jakob Nielsen
Heuristics bear a close resemblance to design principles and guidelines
Interesting article on heuristic evaluation:
http://www.useit.com/papers/heuristic/heuristic_evaluation.html
List of heuristics
Visibility of system status
Match between system and the real world
User control and freedom
Consistency and standards
Help users recognize, diagnose, and recover from errors
List of heuristics (cont.)
Error prevention
Recognition rather than recall
Flexibility and efficiency of use
Aesthetic and minimalist design
Help and documentation
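
As a small illustration of the first heuristic, “visibility of system status” (an illustrative Swing sketch, not taken from the lecture), a progress bar keeps the user informed about what the system is doing during a long operation instead of leaving it silent:

    import javax.swing.JFrame;
    import javax.swing.JProgressBar;

    // Illustrative sketch: a progress bar makes the system's status visible.
    public class ProgressDemo {
        public static void main(String[] args) {
            JFrame frame = new JFrame("Copying files...");
            JProgressBar bar = new JProgressBar(0, 100);
            bar.setValue(40);             // pretend 40% of the work is done
            bar.setStringPainted(true);   // paint "40%" on the bar itself
            frame.add(bar);
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }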