Reading Patterns

People read printed text, web text, and images differently. Designs need to account for the medium
people will be using and how they will read it in order to maximize communication.
Searching printed text
People reading printed material use a Z-shaped eye pattern. They start at the upper left, read
across the page and then move down and back across.
Searching web pages
People reading on the web use a F-shaped eye pattern, which is different from how they read
printed material.
From "F-Shaped Pattern For Reading Web Content," Jakob Nielsen's Alertbox, April 17, 2006:
http://www.nngroup.com/articles/f-shaped-pattern-reading-web-content/
In our new eyetracking study, we recorded how 232 users looked at thousands of Web
pages. We found that users' main reading behavior was fairly consistent across many
different sites and tasks. This dominant reading pattern looks somewhat like an F and
has the following three components:
Users first read in a horizontal movement, usually across the upper part of the content
area. This initial element forms the F's top bar.
Next, users move down the page a bit and then read across in a second horizontal
movement that typically covers a shorter area than the previous movement. This
additional element forms the F's lower bar.
Finally, users scan the content's left side in a vertical movement. Sometimes this is a
fairly slow and systematic scan that appears as a solid stripe on an eyetracking heatmap.
Other times users move faster, creating a spottier heatmap. This last element forms the
F's stem.
Obviously, users' scan patterns are not always comprised of exactly three parts.
Sometimes users will read across a third part of the content, making the pattern look more
like an E than an F. Other times they'll only read across once, making the pattern look
like an inverted L (with the crossbar at the top). Generally, however, reading patterns
roughly resemble an F, though the distance between the top and lower bar varies.
Searching images
How people scan an image for areas of interest depends strongly on experience. Kundel and La
Follette (1972) studied experienced and novice radiologists looking at x-rays to detect tumors.
They found that novice radiologists looked at the entire film evenly, while the experienced
radiologists focused on the areas most likely to contain tumors. Stager and Angus (1978) had
search and rescue experts study photographs of crash sites. The experts scanned only about 50%
of the terrain and missed relevant clues even in the areas they did look at.
Learning how to search images requires dedicated training: pilots, for example, must learn how to
scan control panels, and radiologists must learn how to evaluate x-rays. Besides learning what to
look for, they also need to learn how to control their scan path while looking at the image.
Of particular interest is how people search within and across graphical objects:

• Attention can be captured by salient but task-irrelevant information. The chance of
this undesirable attention capture increases when people work under high cognitive
load (Lavie & De Fockert, 2005).

• Instructing people to ignore goal-irrelevant objects is not sufficient to prevent the
objects from being processed (Lavie, 2005). They may be disregarded after
processing, but they still require cognitive resources.

• People fixate on the areas that contain the most information. For example, in a
photograph, people look at the face or other areas of high detail.

• The eye will be drawn to large, bright, or blinking areas. While this might help draw
attention to warnings, it can also bias the interpretation of information. Excessive use of
high-contrast elements will also result in people ignoring the area. Consider banner
blindness: people ignore the blinking ads that appear on the right-hand side of many
web sites. This ignoring effect is so strong that some research has found designs in
which a warning displayed as flashing text in a box was dismissed as irrelevant, and
users later did not even recall the flashing element.

• People try to minimize eye movements as part of their information seeking. When
asked to evaluate six different models of cars on multiple criteria, people compared
adjacent sets of information, making fewer comparisons as the data sets were placed
farther apart (Russo & Rosen, 1975).