Usability, a learning module
techweb.bsu.edu/jcflowers1/rlo/modusability.htm

Too often, consumers are frustrated by products because they can't seem to use them efficiently. Sometimes the problem lies in product design, but other times, it has to do with instructions for use, selection of the product, user habits, or other factors outside the typical control of a product manufacturer.

This module will guide you through an examination of product design criteria, user-centered design, anthropometrics, and Universal Design, followed by a number of lessons on usability testing, and ending with other forms of user research.
 


Instructions:

1. If you do not see a list of lessons on the left, click here:

techweb.bsu.edu/jcflowers1/rlo/modusability.htm

2. Click on each of the lessons, read through the information, and continue through to the end of the last lesson.

If you have any questions, click Help and contact the instructor.
 


Helpful Resources:

There is a page of annotated links related to the study of using technology that you may find helpful:

http://techweb.bsu.edu/jcflowers1/rlo/linus.htm

Please contact Jim Flowers with corrections or suggestions.


All information is subject to change without notification.
© Jim Flowers
Ball State University
Product Design Criteria

Objectives:

By the end of this lesson, you should be able to:

1. Use weighted product design criteria to assess product designs.

Product Design Criteria

Product reviews can be found in Consumer Reports, on television shows that have come and gone (like TechTV's Fresh Gear), at Amazon.com, all over the Internet, and in much advertising. Objective reviews should consider a variety of weighted criteria and allow readers to apply their own weighting of these criteria as an aid in selection.

For example, a review of portable DVD movie players on Fresh Gear noted that while the Panasonic DVD player did not beat the competition in an overall score, consumers who value screen color and clarity above battery life, for example, would be wise to choose the Panasonic.

Among the criteria used to evaluate a product may be the following:
 
  • Utility and function
  • Aesthetics
  • Economics (value)
  • Human interface (ergonomics)
  • Social appeal
  • Environmental soundness
  • Personal appeal
  • Availability and access
  • Reliability
  • Life-cycle of the technology
  • Appropriate service availability
  • Timing (seasonality, market, billing, etc.)
  • Personal resource availability
  • Embedded energy in components
  • Energy resources required for product use
  • Product disposition (infrastructure, alternatives, impacts)
  • Other

    Research Based on Design Criteria

    Research on design criteria may be helpful. In particular, where design criteria are based on a reaction by a user, the designer or engineer may find it difficult to accurately determine that reaction without some research. For example, "in order to provide design criteria for a sidewalk landscape based on the emotional perception, an emotional satisfaction survey was conducted and the characteristics of sidewalk landscape according to types were identified" (Lee, Jang, Wang, & Namgung, 2009, p. 139). For more, read their report:

    Lee, B., Jang, T., Wang, W., & Namgung, M. (2009). Design criteria for an urban sidewalk landscape considering emotional perception. Journal of Urban Planning and Development, 135(4), 133-140. This can be retrieved by those in Ball State University at http://search.ebscohost.com.proxy.bsu.edu/login.aspx?direct=true&db=aph&AN=45252009&site=ehost-live

    Many product design criteria are not based on either aesthetics or perception, but have to do with performance parameters. For example, child restraints in vehicles may not be properly used by many, so it makes sense for recommendations on their proper use, and on the need to use them properly, to be presented to consumers. Anderson and Hutchinson (2009) observed that the advice given to parents often asks them to make decisions based on the weight of the child; however, parents often don't know the weight, but do know the age. Yet if age is used to communicate with parents, there can be more inaccuracies, since children of a given age vary so much in weight. If you are interested, read about their suggestion and see how they attempted to validate their work:

    Those in TEDU 510 can find this under the Assignments button in the TEDU 510 Blackboard site:

    Anderson, R., & Hutchinson, T. (2009). Optimising product advice based on age when design criteria are based on weight: child restraints in vehicles. Ergonomics, 52(3), 312-324.

    It would be cumbersome to keep adding additional design criteria. How can you tell if the criteria are appropriate and effective? Leporini and Paternò (2008) looked at website use by users who were visually impaired. They examined 15 web usability criteria to see if they really did make a difference. To read their methods and findings, see:

    Leporini, B., & Paternò, F. (2008). Applying web usability criteria for vision-impaired users: does it really improve task performance? International Journal of Human-Computer Interaction, 24(1), 17-47. This can be retrieved by those in Ball State University at http://search.ebscohost.com.proxy.bsu.edu/login.aspx?direct=true&db=aph&AN=33372241&site=ehost-live


    Weighting Factors

    It is possible to assign a weighting factor to relevant criteria and determine the weighted score of alternative options.

    For example, let's say Lynn is considering purchasing a new car. In this over-simplified model, Lynn has narrowed it down to one of two cars, and has decided to use "initial cost," "interior volume," and "feel" as the only three criteria to be used.

    Lynn's Unweighted Ratings

    Criterion    CR-V    Outback
    Cost         9       10
    Volume       10      7
    Feel         9       10
    Total        28      27

    In the above example, Lynn has assessed the options in terms of the criteria and found the CR-V to have a higher total score than the Outback. However, Lynn realized that while cost and volume may both be important, they are not equally important. Therefore, a weighting factor has been assigned to each criterion and multiplied by the original score. The result, shown below, is that the Outback wins.

    Lynn's Weighted Ratings

    Criterion     CR-V           Outback
    Cost (10)     9 x 10 = 90    10 x 10 = 100
    Volume (2)    10 x 2 = 20    7 x 2 = 14
    Feel (4)      9 x 4 = 36     10 x 4 = 40
    Total         146            154
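
    If you would like to experiment with the arithmetic, here is a minimal sketch in Python that reproduces Lynn's totals (the cars, criteria, weights, and ratings are simply the example values from the tables above):

    ```python
    # Weighted-criteria scoring, using Lynn's example values from the tables above.
    weights = {"Cost": 10, "Volume": 2, "Feel": 4}
    ratings = {
        "CR-V":    {"Cost": 9,  "Volume": 10, "Feel": 9},
        "Outback": {"Cost": 10, "Volume": 7,  "Feel": 10},
    }

    def weighted_score(option):
        """Sum of (rating x weight) over all criteria for one option."""
        return sum(ratings[option][criterion] * weight
                   for criterion, weight in weights.items())

    for option in ratings:
        print(option, weighted_score(option))   # CR-V 146, Outback 154
    ```

    Try changing the weights (for example, raise the weight on Volume) and notice that the winner can flip; this is exactly why an objective review should let readers apply their own weightings.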

    As with many decisions, it might be wise to check the decision against one's inner feelings and against other measures.



    "Product Design Criteria"
    All information is subject to change without notification.
     © Jim Flowers
    Ball State University
    User-Centered Design

    Objectives:

    By the end of this lesson, you should be able to:

    1. Discuss reasons for hard-to-use products.

    2. Explain principles of user-centered design.

    3. Identify at least one corporate strategy to approach user-centered design.


    Reasons for Hard-to-Use Products

    Why are some products so difficult to use, while others are easy? The following reasons for hard-to-use products are quoted from Rubin, J. (1994). Handbook of usability testing: How to plan, design, and conduct effective tests. NY: Wiley Technical Communication Library.

    “1. During product development the emphasis and focus have been on the machine or system, not on the person who is the ultimate end user.”
    “2. As technology has penetrated the mainstream consumer market, the target audience has changed and continues to change dramatically. Development organizations have been slow to react to this evolution.”

    “3. The design of usable systems is a difficult, unpredictable endeavor, yet many organizations treat it as if it were just ‘common sense.’”

    “4. Organizations employ very specialized teams and approaches to product and system development, yet fail to integrate them with each other.”

    “5. The design of the user interface and the technical implementation of the user interface are different activities, requiring very different skills.”

    Rubin has concentrated on the manufacturer and designer. Do you agree? Who else shares responsibility for "hard-to-use products"? (A rhetorical question that might lead to a Discussion Board posting.)


    User-Centered Design (UCD)

    In response to an abundance of products that are difficult to use, there has been a movement toward "user-centered design" (UCD). Rubin (1994) lists the principles of UCD as:

    “1. An early focus on users and tasks.”

    “2. Empirical measurement of product usage.”

    “3. Iterative design whereby a product is designed, modified, and tested repeatedly.”


    Corporate Strategies

    Modern manufacturing design and engineering technologies can be at odds with UCD. A conflicting approach, for example, is "Design for Manufacture," where parts are given certain design characteristics to aid in making the part. But this need not contradict UCD. In fact, some corporate design strategies are well suited to reconciling the needs of the manufacturer with the needs of the end user.

    Among these are "participatory design," where the user is one of the members of the design team, and "concurrent engineering," where designs are not developed by an isolated department but by collaboration.


    Impacts on Sustainability

    Wever, van Kuijk, and Boks (2008) suggested that a product or system has sustainability implications that depend on consumer behavior. They suggested several strategies to improve this, including "functionality mapping, eco-feedback, scripting and forced functionality" (Conclusions section, para. 1). For more, read (optional):

    Those in TEDU 510 can find this under the Assignments button in the TEDU 510 Blackboard site:

    Wever, R., van Kuijk, J., & Boks, C. (2008). User-centered design for sustainable behaviour. International Journal of Sustainable Engineering, 1(1).



    "User-Centered Design"
    All information is subject to change without notification.
     © Jim Flowers
    Ball State University
    Product Users

    Objectives:

    By the end of this lesson, you should be able to:

    1. Discuss the need for companies to learn about the users of their products.

    2. Critique the validity of consumer research.

    3. Apply the conversion model of consumer behavior to a particular relationship between an individual and a product brand.

    4. Discuss Everett Rogers' classification of technology adopters.


    Consumer Behavior

    It is short-sighted to look at product design without attention to the users of that product. Often, products are used by a non-target population, or in a way unanticipated by producers.


    The Barbie Hula Hoop

    (Graphic from http://www.futuretoys.com/barbie/hulahoop.jpg)

    During a tour of a manufacturing plant of the Swimways Corporation, a successful toy manufacturer in Virginia, I noticed something odd. I asked the plant manager why there were so many boxes of plastic tubes. He said, "That was a big mistake."

    He took out a pink and white striped tube that had a picture of the Barbie doll on it (graphic from http://www.futuretoys.com/), and attached the ends to form a hula hoop (also spelled "hoola hoop"). He said, "It was only after we made these and tried to sell them that we found out they wouldn't sell. You see, although people of many ages use hula hoops and appreciate Barbie dolls, most of the people who find Barbie dolls appealing are too young to have developed the coordination to use a hula hoop. I don't know what to do with all of these now."


    (Graphic from http://i77.photobucket.com/albums/j62/pgg009/hulahoop.jpg)


    Learning About Consumers

    Many companies now include consumer research as an integral part of their product development plans. There are many ways to gather and use information about consumers and potential consumers. But don't be misled by quasi-research or marketing ploys.
    Take the Pepsi Challenge

    Do more consumers prefer to drink Coke or Pepsi? At a state fair in Delaware, the makers of Pepsi set up a taste test booth. On the tote board, Pepsi was shown to be preferred over Coke by a large margin. 

    When a person walked over to the booth, a server would pour the two colas into two cups behind a visual block,  and place them in front of the taste tester. (Graphic source: http://static.musictoday.com/store/bands/3898/images/PromoBanners/logo.jpg )

    But after watching this for 10 minutes, I wondered if they always put the Pepsi on the same side, the right side of the taste tester. When I asked a server, she said she put the Pepsi on the consumer's right, unless there was a clue that the consumer was left-handed. And consumers seemed to grab the cup to their right more often than not. And that first sip of cool cola on a hot day sure seems refreshing.

    It became obvious that this was a marketing scam, not true consumer research.

    Have you confronted pseudo-research ("4 out of 5 dentists surveyed ...") that seemed to be concerned more with marketing a product to potential consumers than with objectively uncovering relationships?


    Conversion Model of Consumer Behavior

    Some companies have found it useful to classify the relationships people have with their products. Jan Hofmeyr developed a model called the Conversion Model of Consumer Behavior while working at the Customer Equity Company; the company was later acquired by TNS Global, which published a brief description of the model, uploaded to our course Blackboard site under Assignments (optional reading). The Conversion Model classifies the brand loyalty of consumers. (So, are you a PC user or a Mac user, and how strongly are you attached to that decision?)
    For each product, such as Pepsi Cola, the model classifies people as either Users or Non-users. Users are then subdivided into four categories based on their likelihood and desire to switch to a different brand. 
    User Classification    Characteristics
    Entrenched             very low likelihood and desire to switch away from the brand
    Average                moderate brand loyalty
    Shallow                little brand loyalty
    Convertible            high likelihood and desire to switch away from the brand

    Similarly, non-users of a particular brand can be subdivided into four categories:  

    Non-User Classification    Characteristics
    Available                  highest likelihood and desire to switch to the brand
    Ambivalent                 little preference for one brand over another
    Weakly unavailable         would rather not switch
    Strongly unavailable       very low likelihood and desire to switch to the brand
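
    To make the eight categories concrete, here is a minimal sketch in Python. It is purely illustrative: the actual Conversion Model relies on proprietary survey instruments, and the single 0-to-1 "switching score" and the cutoffs below are assumptions invented for this example.

    ```python
    # Illustrative only: map a consumer to one of the eight Conversion Model
    # categories from a hypothetical 0-1 "switching score." For users, the
    # score is the likelihood/desire to switch AWAY from the brand; for
    # non-users, the likelihood/desire to switch TO the brand.
    USER_BANDS = [(0.25, "Entrenched"), (0.50, "Average"),
                  (0.75, "Shallow"), (1.01, "Convertible")]
    NON_USER_BANDS = [(0.25, "Strongly unavailable"), (0.50, "Weakly unavailable"),
                      (0.75, "Ambivalent"), (1.01, "Available")]

    def classify(is_user, switch_score):
        """Return the first band whose cutoff exceeds the score."""
        bands = USER_BANDS if is_user else NON_USER_BANDS
        return next(label for cutoff, label in bands if switch_score < cutoff)

    print(classify(True, 0.1))    # Entrenched (a devoted user)
    print(classify(False, 0.9))   # Available (a non-user ripe for conversion)
    ```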

    How would you classify yourself for a particular product? Why is it especially useful for marketers to classify users this way? 


    Technological Adoption

    Everett Rogers attempted to classify people according to their relative willingness to adopt technology (based on earlier work by Ryan and Gross). He identified five categories of technological adopters. Please visit this link now for a summary of these categories (required reading):

    www.hightechstrategies.com/profiles.html

    Does this categorization seem either accurate or useful? Is the language problematic? How could this classification help a business? And how do these compare with the stages of learning to use technology presented by Anne Russell? (These are rhetorical questions, to which you may wish to respond in a Discussion Board.)
     


    Many Consumer Characteristics

    Willingness to adopt an innovation and brand loyalty are just two of many psychological characteristics of consumers and potential consumers. At a more basic level, it is important to look at the physical nature of the consumer or the product user.


    "Product Users"
    All information is subject to change without notification.
    © Jim Flowers
    Ball State University
    Anthropometrics

    Objectives:

    By the end of this lesson, you should be able to:

    1. Define "human factors engineering" and "anthropometrics."

    2. Discuss the importance of using measurements of humans in designing human interfaces.

    3. Perform anthropometric experiments.


    Ergonomics, or Human Factors Engineering

    Ergonomics, or human factors engineering, has been defined as "… the practice of designing products so that users can perform required use, operation, service, and supportive tasks with a minimum of stress and maximum of efficiency" (Woodson, W. (1981). Human Factors Design Handbook. McGraw-Hill).
    Historically, products have been designed out of chunks of metal, wood, plastic, and other materials. Their shape, size, and features may be a result of the equipment used to machine or create these products, rather than based on fitting the user.

    Look at this typical design for a utility knife:

    Typical design for a utility knife
    Standard Utility Knife

    Compare that design to this "ergonomic utility knife":

    Ergonomic utility knife
    "Ergonomic" Utility Knife

    The second knife does have features that take into account the typical user's hand size, shape, and movements, unlike the first design. Yet, some may be unconvinced that the second design is an improvement. For example, it is cumbersome to hold inverted. What do you think?
     


    You are likely reading this text off a computer monitor right now, and using a mouse and a keyboard. But does the layout of your workstation put any extra and unnecessary stress on your wrists, eyes, or back? Do your habits of using these tools heighten the risk of physical problems? For suggestions on computer workstation habits to overcome some of these common problems, please see "Ergonomic Guidelines for arranging a Computer Workstation - 10 steps for users" at http://ergo.human.cornell.edu/ergoguide.html. This is just one of the resources available from the Cornell University Ergonomics Web (http://ergo.human.cornell.edu/).

    In order to better design products for humans, it may be necessary to learn about human sizes and abilities.


    Anthropometrics

    "Anthropometrics" refers to measurements of humans.
    These measurements are usually made of a particular sample of the population, and often separated on the basis of sex and age. Typical anthropometric measurements include standing stature, weight, distance between eyes, and circumference around waist. However, sensory abilities may also be measured, such as hearing ability, sight, and the ability to sense touch.
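
    Designers usually work from percentiles of such measurements rather than raw lists. As a minimal sketch (the statures below are made-up sample values, not real data), Python's standard library can estimate percentile cut points:

    ```python
    # Estimating percentiles from a hypothetical sample of standing statures
    # (in centimeters) for one sex/age group. Real work would use published
    # anthropometric tables or a much larger sample.
    import statistics

    statures_cm = [158.2, 162.5, 165.0, 167.3, 169.8,
                   171.1, 173.4, 176.0, 179.2, 183.5]

    # quantiles(n=20) returns the 5%, 10%, ..., 95% cut points.
    cuts = statistics.quantiles(statures_cm, n=20)
    p5, p95 = cuts[0], cuts[-1]
    print(f"5th percentile ~{p5:.1f} cm; 95th percentile ~{p95:.1f} cm")
    ```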


    Experiment 1

    Try to perform this experiment on a friend, with a witness so your friend does not later accuse you of lying.

    1. Ask your friend to stand or sit with their back to you.

    2. Tell them that you are going to lightly touch them on the back either with one index finger, or with two index fingers, and they are to say "one" or "two." (This works through the shirt.)

    3. Hold your index fingers so they are touching and parallel to each other. Gently tap your friend's back, making sure both fingertips touch the back at the same time. (Your friend should say "one" or "two.")
     

     
    4. Repeat this with a slightly increasing distance between your fingers and at different sites on the back, but always use two fingers.
     
    5. What did your friend say? What did you learn about their ability to sense touch on their back?
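
    If you want to go beyond casual observation, you could record each tap as a (separation, response) pair and estimate your friend's two-point discrimination threshold. A minimal sketch, with hypothetical trial data:

    ```python
    # Each trial records the separation between the two fingertips (in cm)
    # and whether the friend reported "one" or "two." The data are made up.
    trials = [
        (1.0, "one"), (2.0, "one"), (3.0, "one"),
        (3.5, "one"), (4.0, "two"), (4.5, "two"),
        (5.0, "two"),
    ]

    def estimate_threshold(trials):
        """Crude threshold: midway between the widest separation still felt
        as 'one' and the narrowest separation felt as 'two'."""
        ones = [d for d, r in trials if r == "one"]
        twos = [d for d, r in trials if r == "two"]
        if not twos:
            return None  # never felt as two within the tested range
        return (max(ones) + min(twos)) / 2 if ones else min(twos)

    print(f"Estimated threshold on the back: ~{estimate_threshold(trials):.2f} cm")
    ```

    (The back is far less sensitive than the fingertips, so expect a surprisingly large threshold; this is the kind of sensory measurement a designer of tactile interfaces would need.)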



    Other anthropometric measurements are made of abilities, such as lifting strength, jump reach, and grip strength. But should designers use anthropometric data in creating or adapting product designs?


    Experiment 2: Anthropometrics: Assumptions in the Data?

    Perform Experiment 2 following the procedure described below, and in the optional video:

     Windows Media Video, 321Kb


    1. Stand with your arms at your side.

    2. Bend your right elbow at a 90-degree angle. Your right palm should be facing left, and your right wrist should be straight.
     


    Right wrist straight
     
    3. Place two fingers from your left hand against your right palm; keeping your right wrist straight, make a fist around these fingers and squeeze hard as a test of your grip strength. Release.

    Right wrist straight

    Gee, you're pretty strong, aren't you?

    4. Now, repeat the procedure, but this time bend your right wrist at a sharp angle (so your palm faces your abdomen).

    Right wrist at a near-90-degree angle
     


    Grip strength with a bent wrist

    5. Was your grip strength different at this position? Have you ever tried to complete a task that was much more stressful than normal because your body was twisted or extended in an odd way? What mistakes might designers make by looking at tabled values of anthropometrics? What mistakes might users make concerning their own estimates of their abilities? (These are rhetorical questions, but feel free to add answers, comments, or other questions on the subject to the discussion board forum for this module.)

    More Data

    The National Aeronautics and Space Administration (NASA) has placed online some detailed data related to anthropometrics and biomechanics:

    http://msis.jsc.nasa.gov/sections/section03.htm

    and to human performance capabilities:

    http://msis.jsc.nasa.gov/sections/section04.htm

    These sections are part of NASA's volume on "Man-Systems Integration Standards," though I wish they had not chosen to use sexist terminology.



    "Anthropometrics"
    All information is subject to change without notification!
    © Jim Flowers
    Ball State University
    Assignment 4.1:

    Anthropometrics Activity: Mirror, Mirror, on the Wall


    Objective:
    By the end of this assignment, you should be able to:
    1. Use anthropometric data in determining design specifications for technological product use by a specified segment of the population.

    Design Problem:
    Mirror, Mirror, on the Wall

     Use anthropometric data, and maybe a ruler and a calculator, to solve this design problem collaboratively.


    Problem:

    A 20" wide mirror is to be installed on the wall of the girls'/women's student lounge at a high school. Please assume that those using the mirror will be typical high-school-age female students. Using anthropometric measurements, determine the minimal length (i.e., height) and wall position of the mirror so that 90% of this population can view their entire height.

    (Please note that the selection of a girls'/women's lounge as opposed to a boys'/men's lounge or a mixed-sex lounge is arbitrary. Because anthropometric data is often specified by sex, it may be easier for students to select single-sex data, either female or male.)


    Sources of Data:

    1. Please use the tabular anthropometric data found at the following PDF file, which will open in a new window:

    www.cdc.gov/growthcharts/data/set1/chart08.pdf

    2. Additional data may be acquired by measuring yourself or others, but please do not do this to get data on stature.


    Limitations:

     1. The mirror is to be flat against a vertical wall and as short as possible, to minimize cost.

     2. Each user accommodated must be able to view her entire height from the shoe tip to the top of the hair while standing straight, without bending or stretching.

    3. This problem (for a different target population) and a solution appear in print, but please do not look at that source when developing your solution (Flowers, J., & Rose, M. A. (1998). Mirror, mirror, on the wall. The Technology Teacher, 57(5), 32-34.) 


    Deliverables:

    1. You may either post your solution to the Module 4 Discussion Board, or post a suggested improvement or correction to another's solution. With either of these, be sure to include your reasoning. As a class, you are responsible for solving the problem collaboratively.


    Tips:

    1. High school physics teachers teach that "the angle of incidence equals the angle of reflection" in a unit on optics. Is the distance between the mirror and the user a factor? (A geometry sketch follows these tips.)

    2. Have you identified the "shortest female" and the "tallest female?"

    3. What is the user wearing?
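
    If it helps to check your group's arithmetic (not as a substitute for your own solution), here is a minimal sketch of the flat-mirror geometry. The stature and eye-height numbers are placeholders, not real percentiles: substitute values you read from the CDC chart, and note that estimating eye height from stature is itself an assumption you will need to justify.

    ```python
    # Flat-mirror geometry: the eye sees a body point P via a mirror point
    # halfway (in height) between the eye and P, regardless of how far the
    # viewer stands from the wall (angle of incidence = angle of reflection).
    def mirror_span(stature, eye_height):
        """Return (bottom, top) mirror heights needed by one user."""
        bottom = eye_height / 2            # to see the shoe tips
        top = (stature + eye_height) / 2   # to see the top of the head
        return bottom, top

    # Placeholder values in inches -- NOT real percentiles. Use the 5th- and
    # 95th-percentile statures from the CDC chart, plus an estimated eye height.
    shortest = (59.0, 54.5)   # (stature, eye height) of shortest user served
    tallest = (68.5, 64.0)    # (stature, eye height) of tallest user served

    bottom = mirror_span(*shortest)[0]   # lowest mirror point any user needs
    top = mirror_span(*tallest)[1]       # highest mirror point any user needs
    print(f"Mount from {bottom:.2f} in to {top:.2f} in "
          f"(length {top - bottom:.2f} in)")
    ```

    Note that the viewer's distance from the mirror cancels out of the geometry, which answers Tip 1.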



    "Mirror, Mirror, On The Wall"
    All information is subject to change without notification.
     © Jim Flowers
    Ball State University
    Universal Design

    Objectives:

    By the end of this lesson, you should be able to:

    1. Define "Universal Design."

    2. Discuss principles of Universal Design.


    Introduction

    Typically, a designer may be asked to work on a design problem that is targeted at, say, 30% of the population. While this is sometimes appropriate, it excludes 70% of the population. When we design products that are intended to be used by all, then even if we expand our target to 80 or 90% of the population, we are still excluding a lot of people.

    In response to the need to develop a more inclusive design philosophy, a movement known as "Universal Design" has emerged.


    Defining Universal Design

    The Center for Universal Design at NC State University defines Universal Design as:

    "The design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design."

    Some of the impetus for Universal Design comes from the needs of those in wheelchairs, or those with arthritis, or the hearing impaired, to name a few. However, Universal Design is a broad philosophy of technological inclusion that can benefit all.


    Principles of Universal Design

    A number of principles have been established to guide the creation and adaptation of designs in hopes of improving their inclusiveness and use. These are:

    1. Equitable Use
    2. Flexibility in Use
    3. Simple and Intuitive Use
    4. Perceptible Information
    5. Tolerance for Error
    6. Low Physical Effort
    7. Size and Space for Approach and Use

    If you would like further explanation and illustration for any of these seven, please visit the following website, and follow the links to PDF files that more fully explain and illustrate the principles of Universal Design:

    http://www.ncsu.edu/www/ncsu/design/sod5/cud/
    about_ud/udprinciplestext.htm


    Summary of Universal Design Principles

    The following information is quoted from Principles of Universal Design from the Center for Universal Design at NC State.

    "Principle 1: Equitable Use
    The design is useful and marketable to people with diverse abilities.


    Equitable Use Guidelines:
    1a. Provide the same means of use for all users: identical whenever possible; equivalent when not.

    1b. Avoid segregating or stigmatizing any users.

    1c. Provisions for privacy, security, and safety should be equally available to all users.

    1d. Make the design appealing to all users.


    Principle 2: Flexibility in Use

    The design accommodates a wide range of individual preferences and abilities.


    Flexibility in Use Guidelines:
    2a. Provide choice in methods of use.

    2b. Accommodate right- or left-handed access and use.

    2c. Facilitate the user's accuracy and precision.

    2d. Provide adaptability to the user's pace.


    Principle 3: Simple and Intuitive Use

    Use of the design is easy to understand, regardless of the user's experience, knowledge, language skills, or current concentration level.


    Simple and Intuitive Use Guidelines:
    3a. Eliminate unnecessary complexity.

    3b. Be consistent with user expectations and intuition.

    3c. Accommodate a wide range of literacy and language skills.

    3d. Arrange information consistent with its importance.

    3e. Provide effective prompting and feedback during and after task completion.


    Principle 4: Perceptible Information

    The design communicates necessary information effectively to the user, regardless of ambient conditions or the user's sensory abilities.


    Perceptible Information Guidelines:
    4a. Use different modes (pictorial, verbal, tactile) for redundant presentation of essential information.

    4b. Provide adequate contrast between essential information and its surroundings.

    4c. Maximize "legibility" of essential information.

    4d. Differentiate elements in ways that can be described (i.e., make it easy to give instructions or directions).

    4e. Provide compatibility with a variety of techniques or devices used by people with sensory limitations.


    Principle 5: Tolerance for Error

    The design minimizes hazards and the adverse consequences of accidental or unintended actions.


    Tolerance for Error Guidelines:
    5a. Arrange elements to minimize hazards and errors: most used elements, most accessible; hazardous elements eliminated, isolated, or shielded.

    5b. Provide warnings of hazards and errors.

    5c. Provide fail safe features.

    5d. Discourage unconscious action in tasks that require vigilance.


    Principle 6: Low Physical Effort

    The design can be used efficiently and comfortably and with a minimum of fatigue.


    Low Physical Effort Guidelines:
    6a. Allow user to maintain a neutral body position.

    6b. Use reasonable operating forces.

    6c. Minimize repetitive actions.

    6d. Minimize sustained physical effort.


    Principle 7: Size and Space for Approach and Use

    Appropriate size and space is provided for approach, reach, manipulation, and use regardless of user's body size, posture, or mobility.


    Size and Space for Approach and Use Guidelines:
    7a. Provide a clear line of sight to important elements for any seated or standing user.

    7b. Make reach to all components comfortable for any seated or standing user.

    7c. Accommodate variations in hand and grip size.

    7d. Provide adequate space for the use of assistive devices or personal assistance.


    Note:

    These Principles of Universal Design

    • address only universally usable design, while the practice of design involves more than consideration for usability. Designers must also incorporate other considerations such as economic, engineering, cultural, gender, and environmental concerns in their design processes.
    • offer designers guidance to better integrate features that meet the needs of as many users as possible. All Guidelines may not be relevant to all designs."

    Copyright 2006 NC State University, The Center for Universal Design.

    The document quoted here is available at:
    http://www.ncsu.edu/ncsu/design/
    cud/about_ud/udprinciples.htm


    For Health Professionals:

    There are specialized resources on Universal Design available for product designers, architects, health professionals, and other disciplines. For example, health professionals may be interested in reading the following:

    "Removing Barriers to Health Care - A Guide for Health Professionals" by the Center for Universal Design (optional):

    http://fpg.unc.edu/sites/fpg.unc.edu/files/resources/other-resources/NCODH_RemovingBarriersToHealthCare.pdf


    Endnote

    Now that you have worked on a design problem that was not inclusive, and looked at the guidelines for a more inclusive approach, what are your thoughts, conclusions, and questions?


    "Universal Design"
    All information is subject to change without notification.
    © Jim Flowers
    Department of Technology, Ball State University
    Usability
    Objectives:

    By the end of this lesson, you should be able to:

     

    1. Define "usability" in terms of its attributes.

    2. Stipulate a subset of usability factors as a definition in a given context.


    What is Usability?

    Sure, that device has a lot of features, but just how "usable" is it?

    Usability can be thought of as "ease-of-use" or "user-friendliness." But these simplistic synonyms do not convey much information.

    As cited by the User Experience Professionals Association (UXPA.org, formerly the Usability Professionals' Association), the ISO standard definition of usability states that it is "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO 9241-11). And "human-centered design is characterized by: the active involvement of users and a clear understanding of user and task requirements; an appropriate allocation of function between users and technology; the iteration of design solutions; multi-disciplinary design" (ISO 13407).

    Usability Attributes or Factors

    Jakob Nielsen suggests we look at five different attributes typically associated with usability, quoted below [with boldface added]:

    "Learnability: The system should be easy to learn so that the user can rapidly start getting some work done with the system.

    "Efficiency: The system should be efficient to use, so that once the user has learned the system, a high level of productivity is possible.

    "Memorability: The system should be easy to remember, so that the casual user is able to return to the system after some period of not having used it, without having to learn everything all over again.

    "Errors: The system should have a low error rate, so that users make few errors during the use of the system, and so that if they do make errors they can easily recover from them. Further, catastrophic errors must not occur.

     "Satisfaction: The system should be pleasant to use, so that users are subjectively satisfied when using it; they like it." (p. 26)

    Nielsen, J. (1993). Usability engineering. San Diego, CA: Morgan Kaufmann.

    Nielsen's description has been adopted by others, including the US Department of Health and Human Services. See "Usability.gov" (optional): http://www.usability.gov/basics/.


    In his Handbook of Usability Testing, Jeffrey Rubin presents a list of four usability factors, based on the work of Paul Booth.

    "Usefulness concerns the degree to which a product enables a user to achieve his or her goals, and is an assessment of the user's motivation for using the product at all."

    "The second element, ease of use or effectiveness, is usually defined quantitatively, either by speed of performance or error rate, and is tied to some percentage of total users."

    "Learnability has to do with the user's ability to operate the system to some defined level of competence after some predetermined amount and period of training. It can also refer to the ability of infrequent users to relearn the system after periods of inactivity."

    "Attitude (likability)... refers to the user's perceptions, feelings, and opinions of the product, usually captured through both written and oral interrogation." (pp. 18-19)

    Rubin, J. (1994). Handbook of usability testing: How to plan, design, and conduct effective tests. NY: Wiley Technical Communication Library.


    As you can see, these two authors do not differ much in their characterization, and usability does seem to extend beyond a vague and fuzzy notion of user-friendliness, including both quantitative and qualitative aspects. For a comparison of other usability factors, see "Identification of Usability Decomposition" at (optional reading):

    de Andrés, A., Chatley, R., Ferré, X., Folmer, E., Juristo, N., Montejo, M., & Stavros, M. (n.d.). Identification of usability decomposition. Retrieved from http://www.doc.ic.ac.uk/~rbc/status/STATUS_T2_1_v1.0.doc


    Stipulated Definitions and Focused Studies

    When the usability of a particular system is in question, it is sometimes helpful for a researcher to specify or stipulate a definition of usability for that particular context. For example, Marvin J. Dainoff of the Center for Ergonomic Research at Miami University in Oxford, Ohio, conducted a study on "The Effect of Ergonomic Worktools on Productivity In Today's Automated Workstation Design," mentioned in

    O'Dell, L. (2003). Standing for comfort. Occupational Health and Safety, September 2003. Retrieved from http://ohsonline.com/articles/2003/09/standing-for-comfort.aspx

    In a section dealing with the usability evaluation of a keyboard support, Dainoff noted:

    "In this context, usability is defined as:

    1 – A minimum number of individual steps required for operation of controls;

    2 – The ability to make the adjustment with one hand;

    3 – The ability to make the adjustment rapidly;

    4 – Keeping the adjustment mechanisms in close proximity to the keying position; and

    5 – Keeping the adjustment mechanisms visible from the keying position."
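
    One way to appreciate how operational a stipulated definition can be is to record it as a set of measurable checks. The field names and pass thresholds below are hypothetical, invented for illustration; Dainoff's report does not specify them:

    ```python
    # A hypothetical encoding of Dainoff's five stipulated criteria for a
    # keyboard-support adjustment mechanism. Thresholds are invented.
    observed = {
        "steps_to_adjust": 2,            # 1: minimum number of steps
        "one_handed": True,              # 2: adjustable with one hand
        "adjust_seconds": 4.0,           # 3: adjustment is rapid
        "mechanism_close": True,         # 4: close to the keying position
        "mechanism_visible": True,       # 5: visible from the keying position
    }

    def meets_stipulated_definition(c):
        """Apply illustrative pass/fail thresholds to each criterion."""
        return (c["steps_to_adjust"] <= 3
                and c["one_handed"]
                and c["adjust_seconds"] <= 5.0
                and c["mechanism_close"]
                and c["mechanism_visible"])

    print("Usable under the stipulated definition:",
          meets_stipulated_definition(observed))
    ```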

    You might wish to compare Dainoff's list to the previous lists by Nielsen and Rubin, noticing how narrow Dainoff's focus might be. However, Dainoff's study clearly does look at usability issues, and is not meant to be an exhaustive accounting of workstation usability.



    "Usability"
    All information is subject to change without notification.
    © Jim Flowers
    Ball State University
    Usability Engineering
    Objectives:

    By the end of this lesson, you should be able to:

     

    1. Explain and model usability engineering.

    2. Discuss usability research and list three different methods other than usability testing.

    3. Distinguish between peer-reviewed literature and trade literature on usability research, and identify examples of the former.


    Usability Engineering 

    Work that attempts to assess and improve ease and efficiency of use is called "usability engineering."

    For an overview of usability engineering, please stop now and read "The usability engineering approach" by Dr. Robert Remington. Those enrolled in TEDU 510 can find this reading under the Assignments tab in the Blackboard course site.

    Remington's report is a bit dated, but still contains relevant discussion.

    I also suggest you read one of the following articles on usability engineering (optional):

    Haklay, M., & Zafiri, A. (2008). Usability engineering for GIS: Learning from a screenshot. The Cartographic Journal, 46(2), 87-97. Those at Ball State can access this at: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=32573028&site=ehost-live
    or through
    http://search.ebscohost.com.proxy.bsu.edu/login.aspx?direct=true&db=aph&AN=32573028&site=ehost-live

    D'Hertefelt, S. (2000). Emerging and future usability challenges: Designing user experiences and user communities. Netherlands: InteractiveArchtecture.com. Retrieved from http://users.skynet.be/fa250900/future/vision20000202shd.htm

    Holzinger, A. (2005). Usability engineering methods for software developers. Communications of the ACM, 48(1), 71-74. Those at Ball State can access this at: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=15515645&site=ehost-live
    or through
    http://search.ebscohost.com.proxy.bsu.edu/login.aspx?direct=true&db=aph&AN=15515645&site=ehost-live

    Keeker, K. (1997). Improving web site usability and appeal. Seattle, WA: Microsoft Corp. Retrieved from
    http://msdn.microsoft.com/en-us/library/cc889361(office.11).aspx

    In addition, the US Department of Health and Human Services has an in-depth Website on Web usability at www.usability.gov with helpful information on the "What & Why of Usability" at  (optional):
    www.usability.gov/what-and-why/index.html


    Cost

    Does usability engineering have costs? Certainly. And while sometimes these costs may not be justified, other times they are well worth the savings. Jakob Nielsen wrote a short piece on the "Cost of User Testing a Website," which you can find here (optional):

    Nielsen, J. (1998). The cost of user testing a website. Fremont, CA: Author. Retrieved from www.useit.com/alertbox/980503.html

    A listing of different usability testing charges by various companies can be seen in the following PC Magazine article called, "It Still Pays to Know" by Sarah L. Roberts-Witt (optional):

    Roberts-Witt, S. L. (2001). It still pays to know. PC Magazine, September 25, 2001. Retrieved from
    www.pcmag.com/article2/0,1759,132848,00.asp



    Usability Research

    As noted in Remington's report, usability research is one facet of usability engineering.

    "How usable or user-friendly is this product, service or system?" That's the primary question most usability research attempts to answer. 

    Is it important to know how usable a product is? How can a product be made more user-friendly?

    There is a TV commercial in which out-of-touch executives are listening to their CEO state: "We'll give the customer what we want, when we're ready." In today's competitive corporate environment, many companies are beginning to realize that a product or service not based on the perceived and actual needs and wants of users is less competitive.

    A previous lesson discussed user-centered design. But even if a product or service producer does not buy into the notions of "user-centered design" and "usability engineering" in their entirety, it may stand to increase profits and long-term corporate health by conducting selected usability research. Of course, a company may also choose to conduct usability research when none is called for or helpful. So under what circumstances should a company consider contracting a usability test of its product or service? When should such a test occur in the development cycle of a product?


    Usability Research Methods

    There are a variety of research methods that address usability. These include:

    • Focus group research
    • Surveys
    • Design walk-throughs
    • Expert evaluations
    • Usability audit (a checklist of standards)
    • Usability testing
    • Field studies
    • Follow-up studies

    A different list of eight distinct methods is provided by Jakob Nielsen at this link (optional reading):

    Nielsen, J. (n.d.). Summary of usability inspection methods. Fremont, CA: Author. Retrieved from http://www.useit.com/papers/heuristic/inspection_summary.html

    James Hom has compiled "The Usability Methods Toolbox," which describes many usability research methods. You can find it at the following location, but be sure to click on the link there to the list of usability methods (optional reading): http://usability.jameshom.com/  

    A somewhat more rigorous approach can be seen in

    Howarth, J. R. (2007). Supporting novice usability practitioners with usability engineering tools. Doctoral dissertation. Blacksburg, VA: Virginia Tech. Retrieved from:
    http://scholar.lib.vt.edu/theses/available/etd-04202007-141645/unrestricted/JonathanRHowarthDissertation.pdf


    Usability Research Literature

    A good usability research study makes use of the literature in the field. This includes literature about the product or system under study, about the users, and about usability test methodology. Some of this literature is produced by private firms, or may appear in trade magazines or product Websites; this type of literature is generally not "peer-reviewed" and therefore should be scrutinized for bias, validity, etc., and used appropriately by future researchers. However, there are examples of higher-quality research literature in the field. One of these is the Journal of Usability Studies. You can find the articles online at (required visit):

    http://uxpajournal.org

    Please visit this site and scan a few of the articles, both to get a better conceptualization of usability, and to see examples of higher quality usability research reports. Notice the link to "All Issues" at the top of the page. However, please realize that this journal contains articles about the topic of usability, rather than about particular products, so it has a different scope and purpose than Consumer Reports and the other publications that center on products.


    Endnote

    Please note that although usability testing has been selected as the focus for our unit on using technology, there are many forms of research on product usability and consumer habits.

    For example, Ratner, Kahn, and Kahneman (1999) conducted experiments to determine information about our tendency to choose options we do not prefer. (Does that make any sense?) BSU students can see this article at:

    https://proxy.bsu.edu/login?url=http://www.jstor.org/stable/2489778?origin=JSTOR-pdf
    In another study, Lisa Levy et al. attempted to determine how well consumers understood percentage daily value on food labels (optional reading). BSU students can see this article at:

    http://www.bsu.edu/libraries/protected/ereserves/
    FlowersJ/TEDU510/Levy2000-HowWell.pdf

    Usability research and user research form a fertile and diverse field that may integrate a variety of specialized content areas. What applications for research on the use of technology are needed in your field?



    "Usability Engineering"
    All information is subject to change without notification.
    © Jim Flowers
    , Ball State University

    Usability Tests
    Objectives:

    By the end of this lesson, you should be able to:

     

    1. Discuss the purposes, benefits, and typical types of usability testing.

    2. Explain a selected usability report.


    Usability Testing

    For a good overview of usability testing, please stop now and read the report by Jase Chugh at this link (required reading):

    http://grouplab.cpsc.ucalgary.ca/saul/681/1997/jas/


    A Typical Purpose of Usability Testing

    The purpose of most usability tests is to determine information that can lead to the production and support of products and services that are:

    • easy to learn;
    • easy to use;
    • satisfying; and
    • rich in the utility valued by users.

    Benefits of Usability Testing

    In addition to finding out about how usable a product is, there are additional benefits of usability testing:

    • Creates a historical record or benchmark.
    • Reduces service costs.
    • Increases probability of sales and repeat sales.
    • Makes products more competitive.
    • Minimizes risk.
    • Improves forecasts and future goal-setting.

    However, usability tests require time and money, and are not useful in all situations. They can also produce erroneous results if sampling and controls are not well chosen.


    Types of Usability Testing

    There are different types of usability tests, characterized by Rubin (1994) as follows:

    1. Exploratory Tests

    These usually occur during the design of a product to explore a certain design option related to usability. They may often be informal.
     


    2. Assessment Tests

    These are probably the most common forms of usability tests, and also typically occur during the design phase. They are an attempt to determine the efficiency and ease of use of a specific design prototype. The user or test subject performs tasks, rather than just "walking through" them. It usually has the following four characteristics:

    • Examines and evaluates effectiveness of implementation.
    • Users actually perform tasks.
    • The monitor has minimal interaction with the user.
    • Qualitative and quantitative measures are taken.

    3. Validation or Verification Tests

    Usually occurring later in the development cycle of a product, these tests are an attempt to see if a product measures up to an established benchmark or standard. 


    4. Comparison Tests

    Unlike the previous three types, comparison tests are often performed at many stages during and after the development of a product to specifically compare one alternative to another on specified measures.


    5. Other

    (It would be too dogmatic not to include this category. Can you think of other types of usability tests?)


    Examples of Usability Tests

    Please visit one or more of the following usability test reports now (your choice of one is required reading).


    Previous Students' Reports

    Although many of the reports are no longer accessible, and although some may not be of a quality you'd like to duplicate, there were some terrific usability research reports produced by previous classes of (I)TEDU 510, Technology: Use & Assessment from Ball State University. There were also some that were not as good. Please feel free to visit the reports from some previous (I)TEDU 510 classes at the following page (which also includes a list of usability assessment test reports from undergraduate students) (recommended visit):

    http://techweb.bsu.edu/jcflowers1/rlo/usabilityreports.htm



    "Usability Tests"
    All information is subject to change without notification.
    © Jim Flowers
    Ball State University
    Planning a Usability Test
     

    Objectives

    By the end of this lesson, you should be able to:
     
     

    1. Select an appropriate topic for a limited usability test.

    2. Outline a procedure for a usability test.

    3. Describe and perform task analysis.

    4. Adequately plan a usability test.

    5. Set up an appropriate usability testing environment.

    6. Write and test a usability test script.

    7. Select appropriate usability test subjects.


    Selecting a Usability Test Topic

    One of the first tasks in planning a usability test is selecting the subject (or product) to be tested. It would be of little benefit to test some products, while others would be prime candidates for a usability test. In a corporate setting, the selection of the topic or subject may be dictated by the client, not the usability researcher, yet that choice may be in error. In this class, your own experience should guide you into selecting an appropriate subject.

    In a commercial or governmental setting, however, one is not asking the question, "Which product should I test?" Instead, there is a product or system of concern and one asks the question, "When and how should the usability of the product or system be measured and analyzed?"

    Clients contact professional usability testing firms and contract with them to prepare a report based on usability testing. In one example, the Internal Revenue Service wanted a usability assessment test performed on its 1996 Business Master File Telefile Script; look at what was specified for conducting that test:
    http://www.federalregister.gov/articles/1996/06/04/96-13912/submission-for-omb-review-comment-request

    Determine the Purpose and Client

    Why are you performing this usability test? Whom will it help? What type of information will it provide? What information does a decision maker need from your test in order to make a wise decision?

    One of the important first steps is to write a problem statement or research question for your usability test. Please avoid questions that are answered "yes/no" or by a rating (e.g., 7.85 on a scale of 1 to 10); such answers tell us rather little. Instead, think about uncovering actionable information about the usability of a product. But who takes the action?

    Determining the "client" for the study can help. Let's say you are doing a usability test of a SmartBoard used in schools. Possible clients include:

    1. Administrators in schools owning the product, to inform them about areas in need of staff training and support
    2. Those considering the purchase of the SmartBoard
    3. The manufacturer's product re-design team

    These are three different types of clients, and each one would do something different with the information provided. In fact, each one might prefer a different set of information from your test. Thus, it is your responsibility to choose a purpose and a client, and to make sure that your test would provide that client with actionable information. For example:

    "The purpose of this usability assessment test is to inform the Sony product re-design team of a variety of usability problems encountered in testing the CyberShot, along with the contexts and causes of those problems and recommendations for specific product redesign to alleviate those problems."


    Usability Testing Non-Computer Products

    The discussion in this module uses many examples of usability testing for websites and computer software. However, usability testing is appropriate for a host of products and services that have nothing to do with computers, software, or the Internet. Consumer Reports Magazine and many other publications contain product and service reviews that consider a number of usability factors. Students are welcome to select a product or service for a usability test from a wide range of possibilities, based on personal interest.


    Examiner's Familiarity with the Product

    In order for a usability test to be successful, the developer of the test must be keenly aware of the nuances of the product being tested. Therefore, your selection of a product might be based somewhat on your own familiarity with it; and if you select a product with which you are only marginally familiar, you should become well-versed in using the product before planning the usability test.


    General Usability Test Procedure

    Elements of Usability Testing

    Rubin (1994) lists the following elements as typical of usability testing:

  • Develop problem statements or objectives, not hypotheses.
  • Use a representative sample of users, not necessarily a random one.
  • Represent the actual work environment.
  • Observe (and interrogate) the subjects.
  • Collect quantitative & qualitative performance & preference measures.
  • Recommend improvements.


    For specific concerns related to testing the usability of a website, you may wish to visit (optional reading):

    Boling, E. (1996). Usability testing for web sites. Bloomington, IN: Indiana University. Retrieved from
    www.indiana.edu/~iirg/ARTICLES/usability/usability.main.html

    If any of you are planning to conduct a usability test with children, please first read

    Hanna, L., Risden, K., & Alexander, K. (1997). Guidelines for usability testing with children. Interactions, 4(5), 9-14. Retrieved from
    microsoft.com/usability/UEPostings/p9-hanna.pdf
    or from
    http://delivery.acm.org/10.1145/270000/264045/p9-hanna.pdf?key1=264045&key2=2847914621&coll=GUIDE&dl=GUIDE&
    CFID=74358197&CFTOKEN=22790323


    Tasks

    During the usability test, the subject will perform some tasks. These are typically specified by the researcher, though the list of tasks could also include some open-ended tasks the user decides to do.

    Before developing the list of user tasks, you should be well-acquainted with the technology and its interface. You should have searched for and read critical reviews of the technology, including those from non-academic sources, such as product reviews at amazon.com. You should know what usability problems to expect, but still keep an open mind and remain objective.

    In some instances, you might find it helpful to perform a task analysis. In order to plan and record data about how the user accomplishes specific physical and psychological tasks, you would perform a preliminary analysis of the tasks involved. What are the steps or sub-steps involved? What actions are involved? Where does the user have to look during each step? Are there multiple paths that could be used? These are just some of the questions you may ask. For a discussion of task analysis in usability research, please see:

    Mills, S. (2000). The importance of task analysis in usability context analysis - Design for fitness for purpose. Behaviour & Information Technology, 19(1), 57-67. [The full text is available online through EBSCO: http://search.ebscohost.com.proxy.bsu.edu/login.aspx?direct=true&db=aph&AN=3961768&site=ehost-live]

    In writing tasks for the user, please think of what the user's goals are. Do not try to write a step-by-step procedure for using the product. You are not a teacher here, but a usability researcher; the goal is not for a student to learn to use the technology, but to uncover problems in using the technology and explore their causes and possible solutions.

    Poor Task Statements

    "Turn on the Power Switch. It is the Orange button on the bottom right." (Asking this prevents you from uncovering problems in knowing to turn on the power or locating the switch.)

    "Click File, New File." (Again, don't give them step by step instructions. Tell them what the goal is."

    "Enter your name and Email address into the boxes on this page." (Don't ask them to reveal personal information.)

    "First, read the manual." (If a customer buys this technology, brings it home and unpacks it, there is no one standing there telling them to read the manual.)

    "Open AutoCAD and proceed through Tutorial 1." (This would not be a usability test of AutoCAD, but a learning activity involving a tutorial.)

    "Use the DeWalt power drill without reading the manual." (Don't ask them to use any dangerous equipment without providing instruction. Let other's perform those particular usability tests; they have better insurance.)


    Better Task Statements

    "1. Using the SureShot camera, take a photograph. 2. Now get that photograph to your computer and look at it on the screen. 3. Print a copy of the photograph."

    "Your first task is to use the school website to find out what is being served for lunch tomorrow."

    "Using your Pathfinder watch, determine 1. your altitude above sea level; 2. the current temperature; and 3. the compass heading you are facing."


    Planning the Test

    Here are some tips for planning your usability test:

  • Keep the number of tasks for the user manageable.
  • Write the tasks as realistic goals a user would have with this technology rather than as procedural steps.
  • Make sure the procedure for taking the test is user-friendly.
  • Write and get feedback on a script that you will follow during the test.
  • Try to think through different observations you might encounter.
  • Carefully prepare the test environment.
  • Carefully select and prepare your subjects.

    Chapter 5 of The Handbook of Usability Testing by Jeffrey Rubin and Dana Chisnell (2008) is titled "Developing the Test Plan," and it contains the primary content for this lesson. The guidelines might seem a bit intimidating because they are written for corporations with dedicated usability testing resources, rather than for students working on one unit of an online class. Please don't feel intimidated.

    That chapter has been uploaded to the Assignments area within Blackboard for TDPT 510. It is a Required Reading. Please don't miss the lists of sample performance measures and sample preference measures near the end of this reading.
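
    To give a concrete sense of how such measures might be tabulated, here is a minimal sketch (the observation data are made up for illustration and are not an example from Rubin and Chisnell):

# A minimal sketch: computing two common performance measures,
# task completion rate and mean time on task, from made-up logs.

observations = [
    # (subject, task, completed, seconds_on_task, errors)
    ("S1", "take photo", True, 42, 0),
    ("S2", "take photo", True, 95, 2),
    ("S3", "take photo", False, 180, 4),
    ("S1", "print photo", True, 60, 1),
    ("S2", "print photo", False, 200, 3),
    ("S3", "print photo", True, 75, 0),
]

for task in sorted({row[1] for row in observations}):
    rows = [row for row in observations if row[1] == task]
    completed = [row for row in rows if row[2]]
    rate = len(completed) / len(rows)
    mean_time = sum(row[3] for row in completed) / len(completed)
    print(f"{task}: completion rate {rate:.0%}, "
          f"mean time on task (successes only) {mean_time:.0f} s")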


    Planning the Test Environment

    In our class, students likely do not have access to professional usability testing laboratories, and there is likely no eye-tracking hardware or software available. But it may be informative to see how such labs are typically laid out. The following illustrations are of usability laboratories that are specifically designed for computer software evaluation. As you might imagine, a lab to test toddlers' toys would look much different.


    Examples of Usability Test Environments

    Here are some examples of usability test environments (illustrations reside on the Websites indicated).


    1. Indiana University Usability Lab Floor Plan - for more information, visit:

    http://www.indiana.edu/~usable/usability_lab.html


    2. Microsoft Usability Lab Layout, from:

    http://www.microsoft.com/usability/images/lablayout.jpg


    3. Mobile usability lab used by Jakob Nielsen, where the user's monitor tends to screen the observers from the user's view. For more, see Nielsen's article (image source): Nielsen, J., (September 10, 2012). Traveling usability lab. Nielsen Norman Group. Retrieved from http://www.nngroup.com/articles/traveling-usability-lab/


    4. A usability lab at TecEd in Ann Arbor, MI, retrieved from http://teced.com/wp-content/uploads/observer_medium1.jpg


    For a (somewhat dated) article on the results of a 1994 survey of usability labs see the following (optional visit):

    Nielsen, J. (1994). Usability laboratories: A 1994 survey. Fremont, CA: Author. Retrieved from www.useit.com/papers/uselabs.html


    Suggestions for Your Usability Test Environment

    The environment you will use to perform a usability test may not have the controls or observation tools shown above. However, you should give attention to your environment, making sure it does not contain unwanted distractions and is conducive to both taking the test and recording observations. (Is it a comfortable environment?) Multiple tests should be performed in the same environment if possible. You may wish to get feedback from others on the environment you choose prior to conducting your tests.

    It is imperative that the environment is planned so the user can be observed, but not inhibited by the observer. For example, if you intend to test more than one person at a time, then you will not be able to adequately observe either; test only one subject at a time. If you are "in the face" of the test subject, then the subject might be thinking more about you and your observations than about using the product.


    Instrumentation

    The written tools used to gather data are typically referred to as instruments. In some instances, a researcher can use an existing instrument that was previously developed and validated, but only where it is appropriate for the current study. Often, the researcher will develop new instrumentation, such as a pre-test questionnaire, an introductory script, a usability test script, a post-test questionnaire, an interview sheet, and an observer recording sheet.

    Some of the problems with these instruments can be traced back to how the researcher has conceptualized the research problems. Often, if a simplistic research question is used, there is a tendency to make the instrumentation simplistic, resulting in too little meaningful data. A researcher who begins with rich research questions, then devises means to uncover rich information, will not have this problem.

    Most Common Problem:
    Not Gathering Enough Meaningful Data

    One of the most common problems I see with students' usability assessment test designs is the use of shallow, weak research questions. Please look over the list of research questions you develop, and any questions you have on a post-test or follow-up interview instrument. Look at the following:

    Poor research question:

    Question: "Will the user be able to turn off the TI-89 graphing calculator?"

    Data: 4 of 5 successfully turned off the calculator.

    Conclusion:
    "Yes, most of the users were able to turn off the calculator."
    (Image source: http://www.ti.com/calc/graphics/89.jpg)

    Action to be Taken by Design Team: None

    Reason: Insufficient detail regarding problem


     
    Richer research question:

    Question: "What problems are encountered when attempting to turn off the TI-89 calculator, and what are their causes?"

    Data:

    • "Subject 1 first pressed the ON button, then scanned the buttons, then noticed that the yellow OFF label meant she had to press "2nd" first.
    • Subject 2 pressed the ON button twice, thinking it would turn off the calculator. He then pressed ON and ENTER. On the third try, he pressed "2nd" and ON.
    • Subject 3 hesitated at first because she didn't want to erase her calculations by turning the calculator off. But since this was part of the usability test, she turned it off using the correct procedure.
    • Subject 4 incorrectly thought that the calculator turned off when the cover was replaced, and failed to turn off the calculator.
    • Subject 5 first pressed "2nd" and "ESC" because the yellow label by "ESC" said "QUIT." When this didn't work, she scanned the key labels, found "OFF" and successfully turned the calculator off.
    • Follow-up interviews uncovered the following suggestions regarding this issue:
      • Replace "QUIT" with the label, "CANCEL."
      • Since OFF does not lose data, rename this "SLEEP."
      • Have a separate key labeled "OFF."
      • Don't worry about this. The user will soon learn what to do. And the calculator turns off by itself if idle for a few minutes anyway."

    Conclusion/Recommendation: "New users may experience problems in turning the calculator off for the first time, even thinking replacing the cover turns it off. Confusion due to button labeling may account for some errors. Re-labeling "OFF" as "SLEEP" was suggested to indicate data is not lost when the calculator is turned off, but since this is soon realized, the "SLEEP" label may well create additional problems as users search for "OFF," so this relabeling is not recommended. Relabeling "QUIT" as "CANCEL" was suggested, but it is not recommended since the two can be synonymous. A final suggestion was an additional, unique "OFF" button. While this could help the novice turn off the calculator for the first time, it is not recommended because it increases the button count and because of the errors likely when hitting this button by accident. In conclusion, turning the calculator off may initially be problematic, but it is not likely to continue to be so, and no design changes are recommended."

    Questions on Instruments

    Extend this to questions on survey instruments, pre-test questionnaires, and especially follow-up questionnaires. Look at your questions. Do they begin with: Can, Do, Is, To determine if, To determine whether? If you do have a question that is answered by a yes/no, ask yourself if it is rich enough. How could you probe more deeply?

    Extend this strategy. If you ask a yes/no, could you ask for counts instead? If you ask for counts, could you ask for different types of counts? If you ask for counts, could you ask for ranks? Could you ask for reasons? Could you seek solutions?

    Your job in a usability assessment test is to furnish the client (such as the manufacturer's product redesign team) with analysis of data from your study that uncovers and explores many, many usability issues. It is not uncommon for there to be 30 different usability issues, though many would likely be minor.

    If your survey or usability test follow-up questionnaire contains lots of yes/no, true-false, and ranking items, expect the product redesign team to ask, "Well, what was the cause? What exactly were they thinking when they answered this? How should the product design respond to this? Give us something we can use to redesign the interface instead of shallow information that we can't use. Tell us about a lot of minor problems that users have with using the interface, and go into great detail about the major errors and problems. Tell us whether you confirmed the problems by discussing your observations with the test subjects after the tests. Suggest specific changes to the interface to alleviate the problems."

    Usability test subjects are a precious resource, and you should use them wisely. Part of the problem I see is that the researcher often does not observe carefully enough, does not record observations, does not probe deeply during a follow-up with each subject, or does not adequately analyze observations. Don't fall into those pitfalls.

    Typically during instrumentation development, many sources are consulted, including previous studies. The researcher should have a defensible rationale for choices made during the creation of instruments. Usability should be kept in mind, so the instruments should be as short as possible, very clear, and easy to use.

    Validity & Reliability

    However, this is not sufficient. An instrument should do what the researcher claims it will do, and it should do this every time. These two characteristics are validity and reliability. It is typically insufficient for a researcher to merely claim that their newly developed instrument can do something without data to back up that claim. Thus, the draft instrument might be subjected to a number of tests. These can include something as simple as a review by a content expert, or as complex as a statistical analysis of items after a data sample has been collected to compare different items as effective measures. (For example, if Item 3 and Item 8 both gauge "technophobia," I would expect them to have similar responses for any individual, and if they do not, there is likely a problem.) But keep in mind that an instrument is not valid in some abstract sense - it is only said to be valid within the context of its particular use. So a questionnaire in Greek that is valid in Greece would not be valid in Mexico.
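
    For instance, here is a minimal sketch of that kind of item comparison (the two items and the Likert-scale responses are hypothetical):

# A minimal sketch: checking whether two questionnaire items intended to
# measure the same construct ("technophobia") agree with each other.
# The 1-5 Likert responses below are made up for illustration.

from statistics import correlation  # Python 3.10+

item3 = [4, 5, 2, 4, 1, 3, 5, 2]  # one respondent per position
item8 = [5, 4, 2, 4, 2, 3, 5, 1]  # same respondents, same construct

r = correlation(item3, item8)  # Pearson's r
print(f"Item 3 vs. Item 8: r = {r:.2f}")
# A strong positive r supports treating the items as parallel measures;
# a weak or negative r suggests at least one item is not working as intended.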

    Pilot Testing

    The advice to all is: "Conduct a pilot test." Many problems can be spotted by conducting a pilot test or pilot survey. Commonly, the researcher will find that their interpretation of instrument items is not always the same as the interpretation of a test subject. Here the researcher should not become defensive or try to explain what was meant, but instead should rewrite the item so that it is clear and unambiguous.


    Usability Test Script

    Following a script improves testing consistency and serves as a check that specific points are covered. It can be combined with a format that allows the examiner to record observations.

    Please visit at least one of the following (any 1 is required):

    1. Kansas State University. (n.d.). Usability test script. Manhattan, KS: Author. Retrieved from http://www.k-state.edu/univpub/webtutorial/parent.pdf
    2. Krug, S. (2010). Usability test script from Rocket surgery made easy. New Riders Press. Retrieved from http://www.sensible.com/Downloads/test-script.pdf
    3. Boag, P. (February 29, 2008). What goes into a user testing script. Boagworks & Boagworld. Retrieved from https://boagworld.com/usability/what-goes-into-a-user-testing-script/
    4. US Department of Energy. Usability test script for facilitators (Adapted with permission from Anthro-Tech Inc.) Retrieved from https://www1.eere.energy.gov/communicationstandards
      /docs/ux_test_script.docx

    In these and other web resources, such as the suggestions on writing a usability test script by Cheryl Frost (Frost, C. (n.d.). How to write a usability test script. eHow. Retrieved from http://www.ehow.com/how_4968896_write-usability-test-script.html) you may find some contradictory advice. Be careful. Sometimes there are examples available that should not be followed. For instance, look at the usability test script at http://www.pages.drexel.edu/~sga72/docs/eZMall-%20Usability%20Testing%20prototypes.pdf and find the list of usability tasks. Notice how the user is told to "Select search." Would the user have known to do this without being told? We don't know, and neither does the usability researcher because the tasks for users are too specific, written in terms of user actions rather than user goals. We should not specify, "Select menu," but instead develop tasks that ask users to accomplish a typical objective a user would have with a product, such as, "place an order for a box of chocolates," or "download the pictures you just took into the computer."


    Tips on the Script and on Testing

  • Plan to test one participant at a time.

  • Include an introduction or orientation section of the script, wherein you tell the subject what the purpose of the test is.
     
  • Be sure to emphasize that the person is not being tested; the product is.

  • Encourage the subject to "think out loud" during the test so you will be able to somewhat follow their cognitive path.

  • Test the script on an individual who is not one of your usability test subjects. Revise the script as needed.
     
  • In some tests, there would be preliminary documents for participants to read and sign.

  • In some tests, there would be a pretest questionnaire for participants to fill out.

  • Do not record unnecessary or inappropriate information about the test subject. Protect their privacy. Do not mention the subject by name in any report.

  • Do not give them step-by-step instructions on using the product. Although participants may need instructions on what they are to do, try not to include procedural steps on using the technology in your script; that is precisely what you are trying to measure (i.e., how people use it is a dependent variable, not an independent variable.)
  • Do not give them step-by-step tasks, such as "Step 1, turn on the power." Instead, write tasks in terms of what the typical user might want to accomplish, such as, "Now take a picture without having the camera's flash go off."
     
  • If the user gets stuck at one point, you may wish to put the script aside for a moment and help them proceed, but only if that seems to be the last resort. Be sure to record the cause of the problem and the solutions attempted.

  • Keep your script and your usability test short, with a clearly defined beginning, set of tasks, and end.

  • Be sure your script is accompanied by a place for the examiner to record observations.

  • Get the required information from your test subjects. They are precious resources, so make sure you use them wisely.
     
  • Do not limit your observations to what the participants say and write. Typically, participants should be observed. For a discussion of this, check out the following (optional):
    www.useit.com/alertbox/20010805.html 
     
  • In some tests, there is a post-test questionnaire for participants to fill out.

  • Try not to seek yes/no answers. Instead, ask rich questions that require rich answers, and follow up on those questions as needed.
     
  • Conduct a debriefing follow-up with each participant immediately after the test session. During the debriefing, ask about issues that were not evident. Ask questions as needed to make sure your observations and notes are accurate. Ask probing questions that unearth the reasons behind hesitations, mistakes, questions, references to a manual, etc. Find out what the person was thinking and expecting. Ask the subject about possible product improvements to alleviate some problem they might have noticed.
     
  • After the test, answer any questions the participant has and thank the participant for taking the test.

  • Feel free to abandon any of these guidelines if your best judgment leads you away. (Gee, even this one?) You might be asked to explain your reasoning, but you should exercise your own discretion in designing your test.

    Most Common Problem:
    Not Gathering Enough Meaningful Data, as previously mentioned

    It is common for someone performing their first usability test to fail to gather sufficient meaningful data. They might rely too heavily on a questionnaire, for example, and miss some critical observations. They aim to answer yes/no, true/false, or simple ratings questions without uncovering the nature and causes of usability problems in detail. Test subjects are precious resources, and it is critical to gather the information they can provide.


    Selecting Usability Test Subjects

    Be sure you report on how you selected and recruited test subjects, and put them into different categories (if that's what you did.) If you are a teacher, it is not a good idea to select test subjects from among your students. For one thing, they are likely not a diverse group (in age and experiences), and for another, the role of a teacher can be at odds with the role of a usability researcher.


    Categorize Test Participants Appropriately

    Your experimental design may specify the nature of test subjects to be selected. For example, you might be interested in testing how PC users and Macintosh users respond to a certain computer software task. In such cases, you may need to categorize potential test subjects, but only where it is appropriate.


    Include Users of Diverse Abilities
    You may wish to have a group of subjects that is heterogeneous in ability and experience with the technology. That is, if you are testing a camera, you may wish to include an experienced professional photographer as well as a person who doesn't normally use a camera in your sample of test subjects.


    Select End Users
    The test should be aimed at the end user of a product or service. This may or may not be the individual who purchases the product. Potential categories of anticipated future end users may also be included.


    Select an Appropriate Number of Subjects
    In a rigorous test, the number of subjects would be determined, in part, by the statistical confidence required. In our class assignment, you may wish to limit the number of test subjects using your best judgment. If you decide to use a single subject, you are making a big mistake. But if you think you should test 50 to 100 participants, think again, and read the advice from Jakob Nielsen at (optional reading):

    Nielsen, J. (2000). Why you only need to test with 5 users. Fremont, CA: Author. Retrieved from
    www.useit.com/alertbox/20000319.html

    Rebuttal: In "Relaxing the homogeneity assumption in usability testing," David A. Caulton makes a case for increasing the number of usability test subjects beyond the few (5?) suggested by Nielsen, especially if we don't assume that all users tend to have the same probability of encountering all usability problems. (optional reading)

    Caulton, D. (2001). Relaxing the homogeneity assumption in usability testing. Behaviour & Information Technology, 20(1), 1-7. doi:10.1080/01449290010020648. Available to BSU personnel through http://search.ebscohost.com.proxy.bsu.edu/login.aspx?direct=true&db=aph&AN=4485852&site=ehost-live

    However, I would recommend that students completing usability tests in the span of a few weeks should not feel they must use a large number of subjects, especially if the purpose is to learn how to conduct a usability test.
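
    For intuition about why a handful of subjects can go so far, Nielsen's argument rests on a simple problem-discovery model: if each user independently encounters a given problem with probability p, then n users are expected to uncover the proportion 1 - (1 - p)^n of the problems. A quick sketch (using the average p = 0.31 that Nielsen cites; the code itself is only an illustration of the model):

# The problem-discovery model behind the "5 users" advice. It assumes
# every user hits a given problem with the same probability p
# (the homogeneity assumption that Caulton questions).

def proportion_found(p: float, n: int) -> float:
    """Expected share of usability problems found with n test subjects."""
    return 1 - (1 - p) ** n

for n in range(1, 11):
    print(f"{n} users: ~{proportion_found(0.31, n):.0%} of problems found")
# With p = 0.31, five users already uncover roughly 85% of the problems,
# which is why several small tests can beat one large one.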


    Select a Specific Non-Random Sample
    With much experimentation, it is important that test subjects be selected randomly from a population. However, many usability tests are performed on target non-random subpopulations with specific characteristics. Yet, it is possible to have too little subject variability for your purposes as well. For example, if you wanted to know how pre-schoolers interacted with a new toy, and you happen to have triplets at home, it would be too narrow to limit your sample to this sibling group.


    Be Nice. Be Ethical.
    Please approach the potential test subjects in such a way that they know you are testing product usability, not them. Put them at ease. Tell them their name and relationship to you will not be used in any report. Let them know they can decide to quit at any time, and at the end of the test they can decide to have their data excluded from your test. When working with minors, be sure to consult their parents or guardians and to offer them the same information, rights, and protection.

    Endnote

    A usability test assigned as a class assignment is typically not intended to produce a publishable journal report of a professional product test. If that were the case, the researcher would have to request approval from their Institutional Review Board (IRB), which oversees the protection of human subjects. Research on animals must also be submitted for prior approval.

    If you intend to publish the results of your research in order to inform the field with generalizable knowledge, then your research study comes under the heading of "human subjects research" and typically must be submitted to review by bodies that oversee human subjects research. For more information about human subjects research please visit:

    http://techweb.bsu.edu/jcflowers1/rlo/humansubjects.htm

    It is recommended that students who choose usability topics for testing do not select any usability test that is physically invasive without prior approval from the Institutional Review Board.


    "Planning a Usability Test"
    All information is subject to change without notification.
    © Jim Flowers
    Ball State University
    Conducting a Usability Test
    Objectives:

    By the end of this lesson, you should be able to:

    1. Ensure that pre-test conditions have been met for a usability test.

    2. Conduct an effective usability test.


    Conducting a Usability Test

    Before the Test

    Here are some tips on what should be done before a usability test begins. You may find it useful to add personalized items to this checklist:

    Familiarize yourself with the product.

    Prepare a script.

    Prepare for data collection.

    Determine the "rules" or test protocol.

    Take the test yourself.

    Pilot test your usability test, and revise.

    Prepare the environment.

    Check all equipment, hardware, software, etc.


     
    A Typical Usability Test Procedure

    Please consider two good sources of information on conducting a usability test. The first is the following procedural list from Rubin (1994):

  • "1. Scan your customized checklist.

  • 2. Prepare yourself mentally.

  • 3. Greet the participant.

  • 4. Have the participant fill out and sign any preliminary documents.

  • reading a script5. Read the orientation script and set the stage.
  •  
  • 6. Have the participant fill out any pretest questionnaires.

  • 7. Move to the testing area and prepare to test.

  • 8. Establish protocol for observers in the room.

  • 9. Provide any prerequisite training if your test plan includes it.

  • 10. Either distribute or read the written task scenario(s) to the participant.

  • 11. Record the start time, observe the participant, and collect all critical data.

  • 12. Have the participant complete all posttest questionnaires.

  • 13. Debrief the participant.

  • 14. Thank the participant, provide any remuneration, and show the participant out.

  • 15. Organize data collection and observation sheets." (p.237)
  • Restricted Information for Ball State Personnel

    A practical guide to usabiity testingThe second source of information is from:

    Dumas, J., & Redish, J. (1993). A practical guide to usability testing. Norwood, NJ: Ablex Publishing.

    A note before you read:

    Do not be discouraged by the timeline presented for a "typical test day" in this reading. Your testing will probably require less time.

    Pay particular attention to Table 19-1.

    Please stop now and read the electronic reserve of Chapter 19: Conducting the Test at the following location (required reading, password protected 537 Kb PDF file opens in a new window):
    http://www.bsu.edu/libraries/protected/ereserves/
    FlowersJ/TEDU510/Dumas1993-Ch19Conducting.pdf

    After the Test

    Hopefully you have made a lot of observations. If the test subject hesitated a bit, looked puzzled, or did something wrong, you should have noted it. But what did that mean? What was the person trying to do, what action were they attempting, and what result did they expect it to have? This is where you should have a debriefing session with the test subject, and after thanking them and recording their initial thoughts, reactions, and suggestions, you could delve more deeply into your observations. Just what was going on in each instance? What was the cause? Does a possible solution present itself? In all, your debriefing session should greatly enrich and validate your observations.

    Endnote

    Experimental procedures sometimes do not proceed as planned. Please be sure to accurately and objectively document all relevant observations, deviations of procedure, and other factors that you believe could have influenced results. Do not be swayed by any bias or predisposition you might have had as to what the results might be; instead, remain clinically objective.



    "Conducting a Usability Test"
    All information is subject to change without notification.
    © Jim Flowers
    Ball State University
    Reporting the Results of a Usability Test
    Objectives:

    By the end of this lesson, you should be able to:

    1. Analyze the results of a usability test.

    2. Report the results of a usability test.

    3. Compare methods for reporting on usability tests.


    Analyzing Results

    Jennifer Fleming wrote a short article containing general suggestions for usability test analysis; it can be found at (optional reading):

    Fleming, J. (1998). User testing: How to find out what users want. Anchor Productions. Retrieved from http://sqa.fyicenter.com/art/User_Testing.html

    However, the analysis methods you choose should be appropriate to your test. It is recommended that you view multiple usability test reports and evaluate their methods of analysis as well as their content and format.


    Writing a Usability Test Report

    Check out the Usability Report Tips here (optional reading):

    Wilson, C. (1997). Testing techniques: Usability report tips. Usability Interface, August 1997. Retrieved from
    www.stcsig.org/usability/newsletter/9708-usability-reports.html

    Many resources on usability can be found at the Information & Design website out of Melbourne, Australia:
    http://www.infodesign.com.au/usabilityresources
    There is a separate page on usability report writing at
    http://www.infodesign.com.au/usabilityresources/writingusabilityreports

    But one of the best sets of recommendations for reports prepared to inform ongoing design is found in the following (strongly recommended):

    Theofanos, M., & Quesenbery, W. (2005). Towards the design of effective formative test reports. Journal of Usability Studies, 1(1), 28-46. Retrieved from http://uxpajournal.org/wp-content/uploads/pdf/formative.pdf

    However, your usability test report should be an original work, and may well deviate from the above suggestions, as you see fit. For one thing, if you are preparing a usability test report that is an academic requirement, it might be wise to follow academic guidelines regarding the citation of academic literature.

    As you write the report, think about who your client is, and write it for them. If you are doing a usability study on a Hewlett-Packard Personal Digital Assistant, for example, you would probably be contracted by management, designers, or engineers at Hewlett-Packard. What would they most like to learn from your test? I'd wager they'd be very interested in finding out specifically where and how their PDA is less than user-friendly, and possibly where they might concentrate their efforts as they design the next model for improved usability (based on your test.)


    What Not to Include

    You are advised not to include any of the following in your report:

    1. The names of participants.

    2. Descriptions of participants from which a reader could deduce identities.

    3. Test hypotheses.

    4. Statistical analysis that violates the assumptions of such analysis, or generalized conclusions that are misleading due to sampling or other factors.

    5. Unsubstantiated opinion.

    6. Biases.

    7. Entire listings of all of the raw data and observations. (Only supply those necessary, and provide summaries as needed.)

    8. First person, second person, colloquialisms, contractions, partial sentences, errors in punctuation, etc., except as they appear in direct quotations and scripts. 


    What to Include

    You are advised to include the following in addition to the other information in your report:

    1. A descriptive title, your name (hyperlinked to your Email address) and the date.

    2. An introduction that explains the technology and the purpose of your study. What is the product? What is the model and who manufactures it? What manuals and product information are available from the manufacturer? How are you characterizing "usability" in the context of your study? Why is a usability test justified in this instance? What type of usability test was this? (Did you check reviews on Amazon.com to see if key usability problems were reported by users, if applicable?) If there are related articles, Websites, product descriptions, product comparisons, or other information that would help the reader here, then discuss it and include citations.

    3. If possible, graphics of the product being tested. While an overall graphic may help the introduction, close-ups may best illustrate the points you will later make about usability issues. If there is a menu system, then graphics showing the logic of the menu might help. Include a caption with each that includes a figure number, title, and source citation. Do not copy copyrighted graphics to your Webspace.

    4. A detailed description of the methodology, including a description of any instruments used (including how they were developed and tested), of the environment, of the participants and their selection, and of your procedures. If a script and other instruments were used, link to them or place them in an appendix. Note how these instruments were developed or tested. Note what you did. How was data collected? What did you do during the testing? Refer to the literature on instruments and methods. (Hopefully, you read some published documents that helped you to create those instruments, so cite your sources as you discuss instrument development.)

    5. A summary of your analysis of the quantitative and qualitative observations. Do not limit your observations to what participants tell you or write; typically much is learned by watching what they do and then making sense of it. Often, data emerges during a follow-up interview with each test subject, and that also needs to be analyzed.

    6. Your reasoned conclusions based on your analysis of data, leading to specific recommendations that might impact product re-design, product support, consumer education, etc. Specifically, where are there usability problems? What is the nature of each of these problems? What are the root causes of the problems? What solutions would you suggest to remedy them? Be sure to distinguish your recommendations from those of your subjects.

    7. References throughout your report to reputable authors who could shed light on the product, similar products, usability of the product, your methods, or your conclusions. Link these to your reference list.

    Big Tip:

    Make believe you work for the manufacturer (Sony, Ford, Gateway, etc.) and that they have hired you for $10,000 to furnish them with an online report of the usability issues with their product. They need many specifics that will help their design team develop the next model of this product so that it overcomes a number of usability problems in the current model. They need information they can use in this redesign. What are the usability issues with the product, and why are they encountered? Be specific. The person charging you with this suspects your research will turn up dozens of small issues, and maybe three to five big issues.


    Formatting the Report

    You are required to post your report in HTML format on publicly accessible Webspace on the Internet. Those who have not previously posted web pages should not have anxiety over this requirement. A later learning module will provide instruction on creating web documents.

    Because your report will be on the World Wide Web, please consider the following advantages:

    1. Your report will be immediately published.

    2. Your report will be globally accessible.

    3. You can get feedback on your report from others regardless of their location.

    4. Appropriate and informational graphics, sound, and even video can be used if you have the facilities.

    5. The report may be non-linear.

    6. Hyperlinked resources may be used to give the user a fuller experience. 



    There is a proposed "Common Industry Format for Usability Test Reports" described at (optional reading):
    zing.ncsl.nist.gov/iusr/documents/cifv1.1b.htm

    Unfortunately, that format may be too rigid for many typical student-written usability test reports. However, it may provide a good starting point from which you may develop your own report format.


    Examples of Usability Test Reports

    Please visit at least one of the following sites for an example of presenting usability testing results (any one is required):

    You may also wish to revisit the usability test reports (cited in other lessons):


    Endnote

    Usability testing is one aspect of a usability engineering program. Many references on usability testing have been provided, not to overwhelm students but to provide resources that may help answer a variety of questions. It is also recommended that students search the Internet for additional information and examples.

    Although specific formats and instructions for usability test procedures are presented in the text of this module and in the hyperlinks, students are advised to use their own judgment in the preparation, execution, and reporting of their usability test; however, it might be necessary for the usability test report author to discuss the logic and justification of such decisions.



    "Reporting the Results of a Usability Test"
    All information is subject to change without notification.
     © Jim Flowers
    Ball State University
    Other Use-Related Research

    Objective: 

    By the end of this lesson, you should be able to:

    1. List two types of research related to the use of technology other than usability tests.

    Other Use-Related Research

    Some might think that the only type of research on how people use technology is usability testing. This is incorrect.


    For example, a researcher in California mounted a camera in a car. Drivers were then observed, and their lane-changing behavior was noted. The results were fascinating.

    It seems that drivers in one age/gender group exhibited behavior that was not based on fact, but on a misperception of fact. Here is what happened.

    Drivers in this group responded that they wanted to change lanes (to get into a faster lane) when in fact the lane into which they wanted to move was a slower lane. But they thought it was faster.

    When you pass cars stopped in traffic, you can zip by a dozen cars in just a few seconds. However, when you are stopped in traffic and cars are zipping by, the spacing of the cars means that it takes much longer for a dozen cars to pass you than it took for you to pass a dozen.

    Drivers in this group used an internal chronometer to incorrectly determine that more cars were passing them than they passed. They based their estimate of the count of the cars on their sense of the passage of time.
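
    A rough back-of-the-envelope sketch shows how this misperception arises (the speeds and spacings below are assumptions for illustration, not figures from the study):

# Why equal car counts take unequal time in stop-and-go traffic.
# All numbers here are assumed for illustration.

v = 15.0           # speed of the moving lane, m/s (about 34 mph)
queue_gap = 8.0    # spacing of stopped cars, m (nearly bumper to bumper)
moving_gap = 30.0  # spacing of moving cars, m (about a 2-second headway)
n = 12             # cars counted in each situation

time_passing = n * queue_gap / v   # you move; they are stopped
time_passed = n * moving_gap / v   # you are stopped; they move

print(f"Passing {n} stopped cars takes about {time_passing:.0f} s")
print(f"Being passed by {n} moving cars takes about {time_passed:.0f} s")
# The counts are identical, but being passed takes roughly four times
# as long here, so a driver judging by elapsed time overcounts the
# cars overtaking them.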


    Trends in use are another topic of frequent research. However, some might incorrectly infer use from sales data. Furthermore, trends may be suspect if the nature of the data or the recording mechanism has changed.

    As noted earlier, Consumer Reports and a host of other magazines and services offer product comparisons that may or may not be based on function, price, usability, and other criteria. Product comparisons from a particular manufacturer may be suspect because of the obvious bias of a manufacturer toward their product.

    Some studies look at specific technique and attempt to identify improvements. When I first studied classical guitar technique, my teacher had me study the position of my hand at rest and note how it moved in each of the ways required to pluck the strings. Similarly, process engineers or efficiency experts may look at technique improvement. But not all improvements are increases in efficiency - some may improve safeguards.

    Some of those professional teachers in our class have made a study of how to help others learn to use technology. This is a vast area, and there are conflicting theories.

    But technology has presented us with new challenges: Should surgeons be trained using virtual trainers? Is online education appropriate for hands-on skill development?



    "Other Use-Related Research"
    All information is subject to change without notification.
    © Jim Flowers
    Ball State University
    User Surveys

    Objectives:

    By the end of this lesson, you should be able to:

    1. Distinguish user surveys from other forms of research.

    2. List five different purposes of user survey research.

    3. Discuss various approaches to user surveys.

    4. Locate information on survey research, links to using the Qualtrics survey software at Ball State, and examples of reports on survey research.

    5. Devise the methods, and devise and test the survey instrument, making revisions and seeking approval as needed.

    6. Create and assess the appropriateness of an item for inclusion on a survey.

    7. Identify pitfalls in data recording, analysis and reporting.

    8. Create survey items that solicit rich, actionable information, and discard survey items that are shallow or irrelevant when appropriate.

    9. Propose reasoned methods for using the results of a user survey.

    10. Make and defend decisions integral to user survey research.


    User Surveys

    User surveys are not usability tests, nor are they market surveys. A user survey looks at a sample of the population of actual users of a product, system, or service (hereafter, I'll just use "product" for these three). Because these are actual users, the product is already beyond the development, manufacturing, marketing, and distribution stages.

    But just how much experience should people have with a product when they participate in a user survey? As with usability testing, it might help to have subjects with varying degrees of experience. Of particular interest may be those who have newly experienced a product, before they learn to become comfortable with it.


    Purposes of User Surveys

    Why would a company consider performing a user survey? There are many reasons, including the following:

    1. To discover information about use to:
  • improve products & services
  • aid in forecasts
  • justify products, features, jobs, etc.

    2. To obtain information about users to:
  • customize products and services
  • identify current and potential markets
  • identify consumer preferences and habits
  • sell, or to use the "list of users" in indirect ways

    3. To register products with users

    4. To control access to support services

    5. To advertise

    6. To aid in directing future advertising

    7. To sell additional products or services

    8. To improve corporate image

    9. Other

    Sometimes a company may want consumers to think a survey has a certain purpose, such as registration for technical support, when it really has a different purpose, such as creating a database for future marketing.


    Variety of User Surveys

    There are many differences among user surveys. They vary according to:

    • medium (e.g., mail, phone)

    • length

    • rigor

    • quantitative or qualitative approach

    Surveys also differ regarding the approach taken to the survey sample. Some surveys target a random sample of users; others target a non-random sample. Non-random sampling is typically done:
    • where the respondents are self-selected

    • with "populations of convenience"

    • to look at specific types of users.

    It is also possible for a survey to look at an entire population, rather than a sample (or "proper subset") of that population. Surveys that are intended to produce quantitative results within a given confidence interval may make use of statistical tools to determine the minimal sample size.
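
    As one illustration of such a tool, a textbook formula for estimating a population proportion within a margin of error e at a given confidence level is n = z²·p(1-p)/e². A minimal sketch (this formula assumes simple random sampling, which, as noted below, many user surveys do not achieve):

# A common sample-size calculation for estimating a proportion.
# Assumes simple random sampling; p = 0.5 is the conservative worst case.

import math

def sample_size(margin: float, z: float = 1.96, p: float = 0.5) -> int:
    """n = z^2 * p * (1 - p) / margin^2, rounded up."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size(0.05))  # +/- 5% at 95% confidence -> 385 respondents
print(sample_size(0.10))  # +/- 10% at 95% confidence -> 97 respondents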

    Two important issues related to the survey sample are confidentiality and informed consent. Human subjects have rights, and survey developers should be mindful of those rights. Moreover, research on human subjects conducted through universities and other institutions is often subject to prior approval by a committee that oversees protection of human subjects.


    Information on Survey Research

    User surveys are a type of survey research. Much information has been written on this topic.


    References on Survey Research

    A good overview is presented by Karen W. Bauer in the following:
    Bauer, K. (2005, November). Strategies for survey research and techniques for survey design. Paper presented at the meeting of the Australasian Association for Institutional Research, Melbourne, Australia. Retrieved from http://www.powershow.com/view1/4f68c-
    ZWUxY/Strategies_for_Survey_Research_and_Techniques_for_
    Survey_Design_powerpoint_ppt_presentation

    William M. Trochim at Cornell University has written a good introduction to survey research in his Research Methods Knowledge Base that students can access at no charge at:
    www.socialresearchmethods.net/kb/

    One of the most difficult parts of survey research is the development of instrumentation. For most surveys, this means writing a questionnaire. A tutorial on survey design can be found here (optional):
    www.statpac.com/surveys/index.htm

    There are a number of online survey services and tools, such as the offering from Surveyshare.com:
    www.surveyshare.com/


    Qualtrics™ Survey Software

    Students, faculty, and staff at Ball State University have access to the Qualtrics survey software. Browse to:

    http://www.bsu.qualtrics.com/

    Log in with your BSU username and password. Note the "Help and Tutorials" button that brings you to the helpful information at:

    http://www.qualtrics.com/university/researchsuite/

    The "Learn Qualtrics in 5 Steps" tutorial typically takes no more than three hours.


    Examples of Survey Research

    You can find examples of research reports from user surveys at the following locations (optional visit):

    http://www.pewinternet.org/~/media//Files/Reports/
    2012/PIP_mobile_phone_problems.pdf

    http://ntl.bts.gov/lib/45000/45100/45105/
    MDOT_Research_Report_RC1570_387400_7.pdf

    www.lib.berkeley.edu/Staff/wag/pathfinder_user_survey_report.html

    www-static.cc.gatech.edu/gvu/user_surveys/survey-1998-10/

    This last link above, now quite dated, contains a wealth of information from a 1998 survey of World Wide Web users.

    User survey results are reported elsewhere on the Internet, in print, and at presentations. Typically, graphics work best when they are integrated with text that refers to and explains them.

    Surveys often lead to meaningful changes. Visit the following report of the user survey conducted by the National Energy Research Scientific Computing Center, and scroll to the heading called, "Survey Results Lead to Changes at NERSC" on the page numbered 89 (which is the 96th pdf page). (Required visit)

    https://www.nersc.gov/assets/Annual-Reports/annrep0809.pdf

    (A more recent survey is mentioned in the more recent annual report at https://www.nersc.gov/assets/Uploads/2014NERSCAnnualReport.pdf, but the 2008-09 annual report is the required visit here because of its mention of the changes based on survey results.)

    Those interested in user surveys related to health and wellness may wish to view the following (optional):

    www.cms.hhs.gov/HealthCareFinancingReview/Downloads/98fallpg29.pdf


    Planning

    The following recommendations may help you plan your survey research.

    1. Is it classified as research? Determine if what you are setting out to do is "research" that intends to inform "the field" and the general population, or if it is an exercise by you in a class where the goal is to learn about this method. If it is research, then you would likely need to submit a protocol for human subjects research to your institution's Institutional Review Board (see http://techweb.bsu.edu/jcflowers1/rlo/humansubjects.htm). If it is instead done to learn about surveying, then it is not research and need not be submitted to the IRB, though you still need to respect the rights of subjects.

    2. Identify the client for the survey's analyzed information, the decisions or types of decisions to be made based on that information, and the possible options for those decisions. If you are doing this as a class assignment, do not identify the teacher of that class as the client. Then think about the type of information that is most needed to make those decisions. But realize that surveys do not typically provide anything more than reports from respondents, and sometimes respondents don't know the accurate information or they lie or withhold it.

    3. Draft out the methods. A big part of this is the method for selecting and recruiting subjects and for how the survey responses will be acquired, stored, and analyzed, though methods for analysis might need to wait until you know what the survey items (i.e., survey questions) are. (Teachers typically want to survey their own students, which can cause many problems concerning validity and coercion.) You should also plan the timeline for announcing, opening, reminding, closing, and analyzing. Plan the way you will store information, how you will code it, and what precautions will be taken to protect human research subjects.

    4. Draft the survey instrument (i.e., questionnaire), keeping in mind recommendations from later in this lesson. In addition to the survey items/questions, draft out any introductions or instructions.

    5. Share the methods and draft instrument with others (if you are in a class, then share them with the teacher and other students) and solicit feedback.

    6. After the survey instrument has been reviewed by others and revised by you, do a "dry run." For the dry run, make up some survey data, as if you've already collected the data. Some of this should be in line with what you expected, and some should not. If at all possible, try to misinterpret each survey item the way people often do, and put that into your data set. Then, perform an analysis on this bogus data and write out a few meaningful conclusions for the client from it. I'll wager that in doing this you'll think, "Gee, I didn't use information from any of these four items, but I now really wish I'd asked them a follow-up question about..." If so, then rewrite your survey instrument as needed. (A minimal sketch of such a dry run appears after this list.)

    7. Pilot test the instrument. Then comes pilot testing, where you would ask someone who will not be in the survey sample to take the survey and let you know if there are items that are not well understood. It is okay to have more than one pilot tester. There would then be revisions after pilot testing.

    8. Seek approval of an administrator or board if it is needed prior to contacting potential survey respondents. You will likely share with them both the survey methods and timeline and the latest version of the tested survey instrument. Like you, they should be concerned with protecting the rights of research subjects. Do not contact potential respondents prior to receiving approval. If the instrument or methods have changed since you received approval, make sure you have provided the administrator or board with the updated instrument or methods and received their approval on those updates before proceeding.

    9. If an administrator or board requests or requires a change, be sure to submit to them all revisions for their subsequent approval, and be sure to get that approval prior to proceeding.
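
    As promised above, here is a minimal sketch of a dry run (the survey items, file name, and responses are hypothetical, and one response deliberately misreads its question, the way a real respondent might):

# Fabricate a small dry-run dataset, including a deliberate
# misinterpretation, then attempt the planned analysis on it.

import csv
import random

random.seed(1)  # so the made-up data is reproducible
rows = []
for respondent_id in range(1, 11):
    rows.append({
        "id": respondent_id,
        "months_owned": random.randint(0, 36),
        "satisfaction_1to5": random.randint(1, 5),
        "confusing_control": random.choice([
            "touchscreen menu", "cruise control stalk", "none",
            "I bought it in 2019",  # misreads the question, on purpose
        ]),
    })

with open("dry_run.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
# Now run the planned analysis on dry_run.csv and draft conclusions for
# the client. Any item whose data goes unused, or cannot be interpreted,
# is a candidate for rewriting.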


    Survey Question Considerations

    There may be a tendency for new researchers to ask questions that are of little use. Such questions can severely limit the effectiveness of a survey. If, after buying a Subaru station wagon, I got a survey from Subaru that asked me my income, my sexual preference, or my religion, I would throw it in the trash. Such prying questions would do more harm than good.

    But there are other types of poor questions. Some questions are poor because of a "So what?" factor. For example, why would Subaru want to know what other type of car I might have? (There might be a remote application of this information, but it makes the survey longer and therefore decreases the expected return rate.) Each question is precious, and none should be trivial.


    Problems with Interpretation
    Another very common type of error has to do with the interpretation of a question: For how long have you used a Motorola Cell-Phone?
    A. Less than two weeks.
    B. More than two weeks, but less than one year.
    C. One year or more.

    Well, I have owned one for four weeks, but I haven't used it yet; what do I answer? What if I have used it for two years, but all of my minutes together add up to less than two weeks of use? Before you discount these as unrealistic interpretations, remember that the job of the questionnaire writer is to write unambiguous questions that are not readily subject to multiple interpretations.


    Problems with Answer Limitations

    Another typical problem relates to the limitations imposed by the choices offered. For example:

    Which is your favorite cola? 
    A. Coca-Cola
    B. Pepsi-Cola

    (Gee, it would have been nice if those weren't the only two choices. Image from www.bevmo.com/115images/65413.jpg)

     


    Problems with Bias
    Bias can also emerge as choices are shown: How satisfied are you with the TI-89 Graphing Calculator?
    A. Mostly satisfied
    B. Very satisfied
    C. Extremely satisfied.

    Problems with Irrelevant or Shallow Items

    There should be a close connection between the purpose of a survey and the nature of survey items. Too often, a novice survey writer creates questions that can be answered by either a yes/no response or by a rating. These are shallow questions, and while they are sometimes appropriate, they can be of less value than the writer initially suspected. The issue here has to do with the relevance of the items to the survey goals. Do these questions support the purpose of the research?

    Let's say that the Prius redesign team at Toyota sends out a survey to current Prius owners for the single purpose of finding information that will help improve the usability of the car's human interface in future designs. Consider the following possible survey items:

    Example of a Poor Survey

    Prius Owner Survey

    Please respond to each of the following and submit your answers to help us improve your Prius experience.

    1. What is your age?
    2. What is your income?
    3. How long have you owned your Prius?
    4. Would you consider buying another Prius?
    5. What convinced you to buy a Prius?
    6. How would you rate your satisfaction with your Prius (1 = low; 5 = high)?
    7. Have any problems occurred with using the controls?
    8. How often have you had problems in using the controls?
    9. Are some controls more difficult to understand than they should be (yes/no)?
    10. Have you had to consult the user's manual?
    11. Do you think the user interface of the Prius should be improved?
    12. Thank you for your participation. Please click the Submit button to send in your answers.

    Not one of the items in the above example provides actionable information for the product redesign team. Now consider the following items:

    Example of Survey Items Designed to Solicit Actionable Information
    1. When you first started driving your Prius, there might have been some lights, icons, settings or controls that were confusing, maybe making you refer to the user manual. Please identify what you found to be confusing and why.
    2. If you have difficulties seeing any of the controls or feedback information, list it here and tell us why there is a difficulty, such as a poor icon choice, font size too small, not enough color contrast, or too far away.
    3. How could we change the seat and arm rests so that you would be more comfortable?
    4. To what extent do the controls and readouts distract you while driving (1 = not at all; 5 = far too much)? If you answered 2 or higher, which controls or readouts seem the most distracting?

    Each of these items not only seeks rich information, it also seeks information that is actionable by the product redesign team. If that team were the client for this survey, they would likely appreciate even a small sample of responses to these deep issues more than a larger sample of responses to the shallow or irrelevant items in the poor survey example.

    If you are designing a survey that addresses usability, you might think about what constitutes actionable information by the product redesign team.


    Suggestions for Writing Surveys

    Here are some general suggestions for writing survey instruments.

    • Keep the purpose in mind. Who is the client and what type of decision will the client make based on the nature of analyzed survey data?

    • Make the instrument user-friendly.

    • Word items clearly and concisely so that they are not open to multiple interpretations.

    • Make each question count.
    • If you can get the same information from one question instead of three, use that one. If you are not going to use information from a survey item, consider deleting it.
    • Make the survey as short as possible.

    • Use clean graphic and layout principles.

    • Avoid yes/no and simple ratings items if the goal is to gain richer information. Choose open-ended responses that will best solicit needed information.
       
    • Do not attempt to gain information other than "reports from respondents" by surveying. If you wanted to find out, for example, how many miles people drive each day, you would take odometer readings rather than ask them. With rare exceptions, surveys give you only "reports from respondents."

    For more information on writing surveys, with special attention to question construction, you may wish to visit the following (optional reading):

    faculty.css.edu/dswenson/web/question.htm


    Survey Methodology

    A variety of methodologies could be used to conduct a user survey. 

    (Will there be a single- or a double-mailing survey, or will it be a phone or online survey? Will there be telephone follow-ups? Will there be onsite follow-up visits? Will the survey time frame be open-ended?)

    The particular methodology should be determined prior to the development of the survey instrument. Once determined, the method should be rigorously followed. All responses should be objectively and accurately recorded. Other information, such as busy signals, should be documented. A closing date should be established for the survey. Finally, the data should be kept uncontaminated and secure.
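
    For example, a minimal sketch of such record keeping (in Python; the respondent IDs, dates, and outcomes are invented for illustration) might look like this:

        import datetime

        CLOSING_DATE = datetime.date(2010, 4, 30)   # established before the survey begins
        contact_log = []                            # one entry per contact attempt

        def log_attempt(respondent_id, date, outcome):
            """Record every attempt objectively: completions, refusals, busy signals."""
            if date > CLOSING_DATE:
                raise ValueError("Survey closed; do not record further responses.")
            contact_log.append({"id": respondent_id, "date": date, "outcome": outcome})

        log_attempt("R017", datetime.date(2010, 3, 2), "busy signal")
        log_attempt("R017", datetime.date(2010, 3, 3), "completed")
        print(contact_log)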


    Data Recording

    Before a survey is used, the survey writer should use mock data to test the data encoding and recording system. While specialized software exists for this, smaller research projects often record data into a database or spreadsheet file developed just for that dataset. Ideally, the data would be easy to record, minimize problems associated with human error, and include accuracy checks. Sometimes additional data, such as the date received, is added to the information from a questionnaire. And for those with a thesis or dissertation riding on the outcome, it might be good to produce several backups of the dataset and keep them in different secure locations until the end of the research project.
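
    Here is a minimal sketch (in Python; the field names and the valid range are invented) of testing a recording scheme with mock data before any real responses arrive:

        import csv

        FIELDS = ["respondent_id", "date_received", "q6_satisfaction"]
        VALID_SATISFACTION = {"1", "2", "3", "4", "5"}   # 1 = low ... 5 = high

        mock_rows = [
            {"respondent_id": "001", "date_received": "2010-03-01", "q6_satisfaction": "4"},
            {"respondent_id": "002", "date_received": "2010-03-02", "q6_satisfaction": "7"},  # deliberate error
        ]

        with open("mock_survey.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            writer.writeheader()
            for row in mock_rows:
                # Accuracy check: catch out-of-range codes before they contaminate the dataset.
                if row["q6_satisfaction"] in VALID_SATISFACTION:
                    writer.writerow(row)
                else:
                    print("Rejected (failed accuracy check):", row)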

    Typically, during the coding stage of survey research, individual names are replaced with numeric codes. However, user surveys are most commonly performed by private companies, for whom retaining the names of the users would probably be very important.
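
    A sketch of that coding stage might look like the following (the names and answers are invented); note that the name-to-code key would be kept in a separate, secured location, apart from the coded dataset:

        # Replace names with numeric codes; store the key separately and securely.
        responses = [("Pat Doe", "confusing icons"), ("Lee Roe", "font too small")]

        key = {}      # name -> numeric code; stays with the researcher only
        coded = []    # the dataset that analysts actually work with
        for name, answer in responses:
            code = key.setdefault(name, len(key) + 1)
            coded.append((code, answer))

        print(coded)  # [(1, 'confusing icons'), (2, 'font too small')]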

    Computer-scanned sheets can make data entry quicker, cheaper and more accurate. Web-based surveys offer these advantages plus immediate access to data without the need for scanning.


    Data Analysis & Reporting Results

    Survey data analysis should follow pre-established procedures, and may include quantitative and qualitative analysis. Statistical methods, such as correlation, should be used, but only where appropriate. Because so many user surveys collect data from self-selected samples, statistical analysis could easily be misleading. 
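
    To make this concrete, here is a minimal sketch of one such statistic, a correlation between two survey items, using Python's statistics module (3.10 or later); the data values are invented, and with a self-selected sample the coefficient describes only these respondents, not any larger population:

        from statistics import correlation  # requires Python 3.10+

        years_owned  = [1, 2, 3, 4, 5, 6]
        satisfaction = [4, 4, 3, 3, 2, 2]   # 1 = low ... 5 = high

        r = correlation(years_owned, satisfaction)  # Pearson's r
        print(f"r = {r:.2f}")  # a strong negative association among these respondents only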

    Two of the biggest problems with data analysis occur because the researcher over-generalizes or loses objectivity. Because many user surveys are performed within a company, it may be especially difficult for an employee to tell the boss bad news, especially if it points to a flaw that has gone unnoticed for years.

    When reporting results, keep in mind the purpose of the survey. Try to maintain objectivity (don't exaggerate). And make the report attractive and professional, using graphics, tables, and even videos, as appropriate. Of course, data should be presented in such a way as to safeguard the rights of human subjects.


    Story Time

    Following are two true stories regarding data analysis. The first is about survey research, although not user surveying, and it asks you to make a decision. The second raises the notion of research ethics and objectivity.


    Story 1: A Data Analysis Question for You

    A while back, I sent out a survey to all 3203 members of the International Technology Education Association (ITEA) to determine perceived needs for online learning. These people are members of a professional organization geared to technology teachers (i.e., what used to be Industrial Arts), although some members are not technology teachers. My initial analysis looked at the 838 usable questionnaires that were returned by the cutoff date for my preliminary report. When I asked, "Would you like to try teaching online?"

    • 393 answered "Yes,"

    • 371 answered "No," and 

    • 74 did not respond.

    Which of the following conclusions are valid based on this information?

    1. 51% of technology educators would like to try teaching online. [Because 393 / (393 + 371) = 51%] 

    2. 47% of technology educators would like to try teaching online. [Because 393 / 838 = 47%] 

    3. 51% of ITEA members would like to try teaching online.

    4. 47% of ITEA members would like to try teaching online.

    5. 12% of technology educators would like to try teaching online. [Because 393 / 3203 = 12%] 

    6. 12% of ITEA members would like to try teaching online.

    7. 51% of ITEA members surveyed would like to try teaching online.

    8. At least 393 ITEA members would like teaching online.

    Well, did you pick one, two or three of the above as being valid? If you didn't, then stop now and go back and select the valid conclusions. If you still think that none is valid, then stop reading now and think up a valid conclusion.

    This was a mailed survey, so the respondents chose to send it back. What made them choose this, and what made the "non-respondents" choose not to send it back? Could it be that these 838 people are not typical of the population of 3203 ITEA members, or of the population of (10,000?) technology teachers? Could it be that they are more positively disposed to online education, and that is one reason they returned the survey?

    Actually, none of the above numbered conclusions is valid. Even number eight is invalid, because the survey only measured perceptions, and it could be that if some of these people actually taught online, they might not like it.

    However, a valid conclusion would be the following:
    "393 ITEA members reported that they would like to try teaching online." The use of percentages in this case would only be misleading. And if I were to use typical statistical tests to determine levels of significance, the conclusions would be erroneous because this was not a random sample, but a self-selected one.

    (By the way, that survey is reported here.)


    Story 2: That One Rat

    This is a true story, not about a survey, but about how one doctoral degree candidate at a large university decided to deal with a data analysis problem. It is a lesson in ethics.

    The researcher predicted that as the current in an electrified grid was increased, the number of male rats refusing to cross that grid to get to a female rat would drop off according to a normal distribution. Sure enough, with just a little current flowing, most male rats decided to cross the grid. As the current increased, many started to refuse. And at a high current, there were just a few left - a nearly perfect bell curve typical of a normal distribution. The researcher could almost see his diploma on the wall at this point.

    But there was this one male rat that kept on crossing the grid as the current was increased. It was ruining everything. The curve was no longer a perfect bell curve, and the researcher feared that his dissertation results would not be accepted.

    His solution was to crank up the amps all the way. His report contained a perfect bell curve, with an asterisk to a note: "One subject failed to complete the experiment due to mortality." He got his degree.

    You might first be drawn to the questions of research ethics and the ethical treatment of animals related to this story. Or you might be thinking of statistical procedures that account for outliers. But also note that it is ironic that this one rat, while atypical, could have taught the researcher quite a bit; yet the researcher chose to ignore this data and misrepresent the experiment to conform to his own and his committee's preconceptions about his hypotheses. He sold out.


    Using the Results of a User Survey

    How can the results of a user survey be used? Because user surveys differ in their purpose, the ways to use the results differ as well.  Typically, user survey results are used in the following:

    • new product development

    • product redesign

    • process redesign

    • advertising and marketing strategy changes

    • publicity and advertising

    • forecasting

    Questions for Analysis

    The following questions are meant to stimulate thought and discussion related to user surveys.
     


    1. To Survey or Not To Survey

    We are all bothered by solicitations over the phone, by mail, or by email. Companies that ask us to complete surveys often make us feel like a commodity, de-humanizing us with their meddlesome inquiries. However, a company that does not have reliable information about its products' users may not be able to produce and sell products that are well suited to their users' perceived needs and wants. If you were hired as a consultant on user research, what advice would you give a company on this problem?


    2. Conversion Model in a Survey?

    Is it possible to use the conversion model of consumer behavior as a model for designing a user survey? If so, what questions might Coca Cola put on that instrument?

    Recall the Conversion Model of Consumer Behavior:

    Users:
    • Entrenched
    • Average
    • Shallow
    • Convertible

    Nonusers:
    • Available
    • Ambivalent
    • Weakly unavailable
    • Strongly unavailable

    3. Internet Surveys of Users

    More and more surveys are conducted over the Internet. What are the advantages and disadvantages of this delivery system compared to other means?


    4. Selling Information

    A company just invested quite a bit of money to conduct a voluntary user survey. To recoup some of the expenses, it is considering selling its list of respondents (with name, phone number, address, email address, income, etc.) to a distributor of marketing lists. Is this ethical?


    5. Critiquing User Surveys

    As a study in user surveys, you might visit one with the purpose of performing a critique rather than participating. It is not hard to find user surveys online. Try searching for the following text string:

    take our user survey

    or

    online user survey.

    You might find hundreds of thousands of examples.


    Endnote

    User surveys are useful, but they cannot replace usability testing or product engineering. The data from surveys, especially self-selected surveys, may not be as valid as data obtained through more rigorous experimental procedures.

    Sometimes, when survey results are being analyzed, the researcher realizes that the wrong group of people was surveyed. Some companies might gain more information, for example, by surveying non-users rather than users of their products, if the purpose is to determine why the product doesn't appeal to some consumers. A few years ago I surveyed all female members of the International Technology Education Association in an attempt to determine perceived barriers to women in that field. During the analysis of the results, I realized that while this was an appropriate survey sample, I really wanted to also find out why women either left this male-dominated field or did not enter it in the first place - but that would mean surveying precisely those who were not ITEA members.

    Also, there may be a tendency to blur the line between opinion and fact. (Had there been a survey in Columbus' time, even one using the more rigorous Delphi approach, a researcher might well have concluded that the Earth was flat.) But sometimes opinions are even more vital than facts. George Ade, a humorist of the past, once said of consumers, "Give them what they think they want." ;-)



    "User Surveys"
    All information is subject to change without notification.
    © J. Flowers
    Ball State University
    Instructions for Users

    Objectives:

    By the end of this lesson, you should be able to:

    1. Laugh at ridiculous instructions. ;-)

    2. Discuss issues important to consider prior to developing user instructions.

    3. Distinguish among dangers, warnings, cautions, and notes.

    4. Write and analyze appropriate procedural steps in instructions.

    5. Identify different strategies for timing instruction writing.

    6. Discuss suggestions for the development of user instructions.

    7. Apply Gagne's events of instruction and factors that most influence learning in the development of an instructional guide for users.

    8. Discuss examples of instructions for users.

    9. Identify problems associated with the use of user instructions.


    A Little Levity

    The following humorous examples of instructions for users appeared in an Ann Landers column.

    “On a hair dryer:

    Do not use while sleeping.
    On a bar of soap:
    Use like regular soap.
    On a frozen dinner:
    Serving suggestion - defrost.
    On a hotel-provided shower cap:
    Fits one head.
    On a package of bread pudding:
    Product will be hot after heating.
    On children’s cough medicine:
    Do not drive car or operate machinery.
    On a sleep aid:
    Warning - might cause drowsiness.
    On a … kitchen knife:
    Warning - keep out of children.
    On a string of … Christmas lights:
    For indoor or outdoor use only.
    On a bag of peanuts:
    Warning - contains nuts.
    On an airline packet of nuts:
    Open packet, eat nuts.”

    Sure, these might seem humorous, but chances are when you were reading them you were also thinking about what the manufacturer's intentions were in including these messages. Some may have been included to protect the company from legal or civil action.


    Before Writing Instructions

    Before writing instructions for users, it might help to consider a few questions:

    • What is the purpose of the instructions?

    • What are the assumptions about the user?

    • Which media are best to use?

    • Where will the instructions be located?

    • What type of instructions is needed?

    • Should a non-reader (of your language) be able to use the instructions?

    • How do the principles of universal design apply to writing instructions?

    Types of Instructions

    Non-Procedural Instructions

    Most people think of instructions as including only procedural steps. While these are critical, there are other important parts of instructions.

    A number of qualified English professors teach courses that include units on writing instructions. Sharla Shine is one such professor, and in materials for a course on technical writing (previously available online at www.terra.edu/academics/distance/eng190/notes1.asp) she wrote that instructions can contain cautions. Or is the proper term "warnings"? Actually, she distinguished among four confusing terms as follows:

    DANGER

    “A Danger
    is used if serious injury or death could occur if the instructions aren't followed.”
    Sharla Shine

    WARNING

    “A Warning
    alerts readers that they could be moderately injured or the equipment damaged if they don't follow the instructions carefully.”
    Sharla Shine

    CAUTION

    “A Caution
    alerts readers that they may have a poor result if they don't follow the instructions carefully.”
    Sharla Shine

    NOTE

    “A Note
    gives readers additional helpful information.”
    Sharla Shine

    Procedures

    Most instructions are procedural in nature. Sometimes, instructions also contain conceptual aids.  It is possible to embed conceptual aids within procedures. Consider the following situation:

    There is a machine in my school called a rotational molder. Students place plastic powder into a hollow, closed aluminum mold, put the mold into the machine, and turn the machine on. It heats the mold until the plastic flows, and spins the mold around. As the mold cools, it continues spinning so that plastic solidifies against the surface of the mold, and a hollow plastic product is produced.

    As their instructor, I am responsible for providing for their learning with this machine. One of the important steps is for them to turn off the timer that activates the heating elements. But how do I word the actual instructions for this step? Here are a few alternatives:


    Step 6. Set the timer for 40 minutes, and wait for it to go off.

    This is just incorrect: the temperature may not have reached 415 degrees.

    Step 6. Set the timer for 50 minutes

    Step 7. When the temperature reaches 415 degrees, turn the timer to "off".

    This is a basic procedural step, with no explanation of "why". (When should this type of step be used?)

    Step 6: Set timer: 50 minutes

    Step 7: At 415 degrees, turn timer off.

    This is identical to the previous example, except it does not use the wordier sentence format.

    Step 6. Set the timer for 50 minutes.

    Step 7. When the temperature reaches 415, the plastic has melted and you should turn the timer off.

    This example explains why 415 is important.

    Step 6. Set the timer for 50 minutes.

    Step 7. When the temperature reaches the proper softening point for the material, set the timer to "off" to turn off the heating elements.
    Note: the suggested softening point for LDPE is 415 degrees.

    This example is more generic and clearly notes that 415 is not the temperature for all plastics.

    Step 6. Set the timer to 50 minutes, but turn it off when the proper melting temperature is reached (415 degrees for LDPE).

    This is a simpler version of the previous example.

    Step 6. Set the timer to a long enough time to heat the material to a proper temperature.

    Step 7. When you think you have heated the material enough for it to flow within the mold, note the temperature and set the timer to "off" to turn off the heater.

    Note: use trial-and-error to establish the proper temperature.

    This example is very wordy, and does not clearly delineate the procedure for operating the machine. It is not concise.

    However, it describes the procedure to figure out what the settings should be.


    Which of these versions appeals to you? Which should I use if my students are to become industrial technologists? Which should I use with students who are becoming technology education teachers? If your evaluations differ from mine, how do they differ? What wording would you suggest?


    When to Write a User Manual

    Usually, user manuals are created after a product has been designed. Harold Thimbleby describes an approach that involves creating a manual as the product is being designed. This allows for product modification based on the learning needs of the intended users. His report can be found at (optional reading):

    www.acm.org/sigs/sigchi/chi96/proceedings/shortpap/Thimbleby/th_txt.htm


    Tips on Writing Good Instructions

    Good instructions for one user may not be so good for another user. Still, there are general suggestions regarding the writing of good instructions, such as those from Girill for teaching technical writing in high school science classes (optional reading):

    Girill, T. (2006). Building science-relevant literacy with technical writing in high school - A tutorial. IEEE Transactions on Professional Communication, 49(4), 346-353. Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4016268
    alt source: https://e-reports-ext.llnl.gov/pdf/334544.pdf

    Don't be surprised by the seemingly dated information in the next source; although it is from 1993, there are some wonderful guidelines for writing user instructions (recommended visit):

    Backinger, C., & Kingsley, P. (1993). Write it right: Recommendations for developing user instruction manuals for medical devices used in home health care. Washington, DC: US Dept. of Health and Human Services. Retrieved from http://www.fda.gov/downloads/MedicalDevices/
    DeviceRegulationandGuidance/GuidanceDocuments/
    ucm070771.pdf

    Professional illustration can greatly improve an instruction manual. Some manuals are web-based, delivered on CD-ROM or DVD, or streamed from a website, and may include audio and video. But the manual designer must be careful not to let the medium interfere with the message.


    Format

    Many questions should be considered when determining the format of instructions:

    • Which format is better for procedures, a list or a paragraph?

    • Should grammatically correct language be used?

    • Should international symbols be used?

    • Can jargon be used?

    • Exactly where and how should illustrations be used?

    • Should the illustrations stand alone, without the need for text?

    • What feedback can the instructions provide the user?

    • What media should be used?

    • What should the hierarchy and navigation of the instructions be? That is, where should there be sub-tasks or groups of tasks, and how do users move seamlessly among them, possibly in a non-linear fashion?

    The designer of instruction manuals should answer these using experience, knowledge about the target users, knowledge about the product, psychology, artistic design, and clear communication skills. The answers will vary.


    Language

    The text of instructions should be immediately accessible to the target audience. If technical terminology is used, there should be provisions to define it. Glossaries, inset definitions, and illustrated parts lists may help explain this terminology.

    It is strongly suggested that the choice of wording avoids:

    • ambiguity

    • culturally-based terms

    • unexplained jargon

    • racism, sexism, and other forms of bias

    Visuals

    Professional graphic illustrations can make the difference between a user-friendly manual and an unusable one. Special graphic techniques can be used to improve communication, such as pictorial views, exploded views, sectional views, phantom lines and arrows to show motion, color-coding, and labels.

    Photographs should be used where appropriate, but may contain extra information not related to the instructions (and they may not reproduce well.) All visuals should be clearly identified and properly sequenced, and should be referred to in the text of the instructions. They should all seem to fit together stylistically. Visuals should only be used when they inform, and should be designed so they do not distract the users from their objectives.
     


    Location of Instructions

    While you might expect to find instructions in an instruction manual, you might be surprised to find them on the inside cover of a calculator case, or on your car's visor. Manufacturers have learned that too often users will not read instruction manuals. Where do people need instructions? Just how much of the instructions do they need there?


    Instruction or Information

    Some people classify web sites as either informational or instructional. According to this classification, well over 95% of the pages on the Internet are informational, providing us with either accurate or inaccurate information or data.

    However, some sites are designed to be instructional in nature. They tend to have some, if not all, of the features you might expect in typical instruction. Robert Gagne listed nine different "events of instruction," and these can be seen in many good classrooms and in instructional web sites:

    1. gaining attention
    2. informing learner of the objective
    3. stimulating recall of prerequisites
    4. presenting the stimulus material
    5. providing learning guidance
    6. eliciting the performance
    7. providing feedback
    8. assessing the performance
    9. enhancing retention and transfer
    (For more information on Gagne's theory, see sites similar to the following, and the scholarly articles on which they are based; optional reading): http://www.instructionaldesign.org/theories/conditions-learning.html

    Web sites that provide feedback, or that assess performance, would be more instructional than those that just contain information. This is not to imply, however, that interactivity equals instruction.

    But why am I mentioning this? Well, too often instruction manuals, user guides, and other instructional aids for users tend to be more informational than instructional. Sure, there might be a numbered list of procedural steps, but wouldn't it be nice if a user's manual could do more? Look again at the nine events of instruction listed above. How could you incorporate these into a new design for an instructional manual?
     



    As a teacher educator, I have often referred to the research of Barak Rosenshine with students in my "methods" classes. Rosenshine found that the most important factors that influence student learning were the following:
    1. clarity

    2. task orientation

    3. student opportunity

    4. variety

    5. teacher enthusiasm.

    It seems to me now that these same factors can lead to good instruction manuals:
    1. Are the instructions clear, concise, and unambiguous?

    2. Is it clear what is to be done, and why?

    3. Are users given the opportunity to actually perform tasks and then refer back to the manual?

    4. Is information presented in a variety of formats and media?

    5. Is the manual written in a compelling style?

    Okay, maybe this last one was a bit of a stretch, but you get my point, right?
     

    Nearly Done?

    Before instructions are finalized, they should be scrutinized by legal experts to make sure they provide adequate legal protection.

    Finally, instructions should be pilot tested with users at a variety of levels of competence. (Actually, the process of developing instructions usually involves iterative testing.)


    Examples of Instructions for Users

    Technical manuals deal with complex material, but too many do little to make sense of that complexity through the judicious use of graphics, feedback, and examples. Some technology has a complex human interface, and so a good user's guide or set of instructions must be more detailed, as Texas Instruments has attempted to do for its TI-89 graphing calculator (optional visit):

    https://education.ti.com/download/en/US/
    FA1DC891957E4700B46A67255850C592/983E
    A8A4BA2A4AE9B2AF5EEEE922E3C1/TI-89_Guidebook_EN.pdf

    The TI-89 has many features that the "casual user" may never use. But what assumptions did TI make about users? Is that guide an easy, quick-start guide, or is it one that will answer an advanced user's question?

    Consider the Apple Watch User Guide at
    http://manuals.info.apple.com/en_US/apple_watch_user_guide.pdf

    What assumptions is Apple making about the user? Would those assumptions be different from their approach to the iPhone, which has been around a lot longer than the watch? Notice how the guide begins with an "A quick look" section. Notice how the information is arranged by function, and compare that to the TI-89 guidebook.

    I was frustrated when I purchased a walk-behind gasoline-powered lawn mower at Sears a few years ago. I couldn't find the manual choke control. The salesperson told me that they eliminated these on all the models they carry because consumers did not know how to use them properly. As a result, they have forced the knowledgeable consumer into the same population as consumers ignorant of chokes, disempowering them. Couldn't the company have instead tried a solution related to education and instructions for users? Can you think of other examples of this disempowerment? Sometimes the issue can be addressed by instructions, but other times it is a bigger problem:

    • WordPerfect users who migrated to Word would sometimes complain that there was no Reveal Codes command in Word.
    • MS-DOS users who migrated to Windows would sometimes complain that they had to go to the old-fashioned C: prompt to be really efficient because there was no easy way in Windows to, say, change all the extension names of files in a single directory at once.
    • The AltaVista search engine used to include a "Near" operator, until that feature was disabled.

    In addition to user guides, such as the one below, we should look at briefer forms sometimes called "quick start guides."

    Canon PowerShot A495 User Guide
    http://gdlp01.c-wss.com/gds/1/0300003041/02/PSA495_PSA490_CUG_EN.pdf

    In doing this, let's ask, "Are the instructions linear?" Sometimes they are, and sometimes they are not. Look at the following quickstart guide for Second Life:
    http://secondlife.com/support/quickstart/basic


    The Use of Instructions

    It may frustrate technical writers, product designers, and teachers, but many users do not use manuals at all. Sometimes it would be a needless waste of time to read a manual. Other times, people like to learn by doing, and may consult instructions only where there is a problem they are otherwise unable to solve. (My father says, "when all else fails, read the instructions.")



    "Instructions for Users"
    All information is subject to change without notification.
    © Jim Flowers
    Ball State University

    Usability Research: Student Reports

    from Students in (I)TEDU 510, Technology: Use and Assessment (online) &
    TEDU 206, Using and Assessing Technology
    at
    Ball State University

    All information contained in these reports is the responsibility of the authors.
    Clicking a link will cause a document to open in a new frame or window unless pop-ups are blocked.
    Direct course questions to the instructor, Jim Flowers, at jcflowers1@bsu.edu
    Links to reports will become inactive as students graduate or remove their reports.
    Reports submitted by Email or containing proprietary information are not listed.

     
    Online Graduate Students' Reports 

    TEDU 510, Fall 2015:

    TEDU 510, Spring 2014:

    TEDU 510, Fall 2013:

    TEDU 510, Spring 2013:

    TEDU 510, Fall 2012:

    TEDU 510, Spring 2011:

    TEDU 510, Spring 2010:

    TEDU 510, Spring 2009:

    * Indicates that the student gave J. Flowers permission to share the report with conference attendees

    TEDU 510, Spring 2008:

    TEDU 510, Fall 2006:

    TEDU 510, Spring 2006:

    TEDU 510, Fall 2005:

    TEDU 510, Spring 2005:

    TEDU 510, Spring 2004:

    TEDU 510, Fall 2003:

    TEDU 510, Spring 2003

    TEDU 510, Fall 2002

    TEDU 510, Fall 2001

    TEDU 510, Fall 2000

    • Anna Bick: Voice Mail Usability Test 
      http://www.bsu.edu/web/afbick/webusabilitytest.html
    • Gail Borsenberger: Usability Test of GE Model 7- 4606WHA Alarm Clock
      ftp://publish.bsu.edu/web/gailmarie/usabilityreport.html
    • Jenelle Carter: TestWell's Off-line and On-line Versions
      http://www.jkcpeanut2.homestead.com/usabilitytest1b.html
    • Nick Derado: The Usability of PowerPoint for High School Teachers
      http://www.msdlt.k12.in.us/lawrencecentral/SocialStudies/USHistory/Useability%20Survey.html
    • Cara DeSmidt: Usability Test for Publication Manual of the American Psychological Association, Fourth Edition by
      http://www.bsu.edu/web/cgrahamdesmi/apapage.html
    • Bob Dunn: How Heavy Is Your Backpack?
      http://www.stjsd.org/teched/survey.htm
    • Takeshi Fujii: Exercise Ball Usability Test
      http://www.bsu.edu/classes/fujii/usabilitytest.html
    • Scott Goode: Usability Test: Instruction For Using The Cyberware M15 Laser Scanner
      http://www.bsu.edu/web/smgoode/usabilityresults.htm
    • Shelley Haines: Usability Report on TestWell (Health Risk Appraisal)
      http://www.bsu.edu/web/sdhaines/u.html
    • June Holt: Usability Test: msn Hotmail Sign-In Procedure
      http://junesarea.homestead.com/usability.html
    • Miake Koch: Usability Report of Ball State University, Fisher Institute for Wellness & Gerontology, Total Lifestyle Center Home Pages
      ftp://publish.bsu.edu/web/00mjkoch/Usability%20Report.htm
    • Kevin Kocher: Marion High School ID Card System / Camera Inter-Phase - Sony Mavica MVC-FD7 Digital Still And Inter-Phase Between Polaroid ID Card Maker Version 3.03D Camera
      http://www.bsu.edu/web/KCKOCHER/
    • Karyl McGeath: Usability Test: LUX TX1500 Model Electronic Thermostat
      http://www.bsu.edu/web/kemcgeath/utest.htm
    • Matthew Moniz: Usability Test on the Instruction Handout for Using the Stratasys 1650 Fused Deposition Modeling Machine (FDM)
      http://www.bsu.edu/web/mmoniz/usabilitytest.html
    • Jeff Orcutt: Usability Test: Sony Portable Telephone
      ftp://publish.bsu.edu/web/00jdorcutt/usabilitytest.htm
    • Burt Pena: Emerson EV598 VCR Usability Test Plan
      http://www.TEDU510usability.homestead.com/1.html
    • Abdiel Sosa: Usability Test: Sharp MD/CD Model MD-X5
      http://www.usabilitytest.homestead.com/index.html
    • Sean Stallings: Usability of the Precor EFX Elliptical Rider
      http://seanstallings.homestead.com/secondpage.html
    • Jonathan D. Taylor: Usability Test Report: Skil Warrior 3/8" VSR Cordless Drill
      http://publish.bsu.edu/jdtaylor2/usability.html
    • Shannon Van Hyfte: Hearing Protection Device (HPD) Usability Test
      http://www.bsu.edu/web/smoconnor2/hpdusabilitytest.htm
    • Holly Van Order: Personal Wellness Profile (PWP)
      http://www.bsu.edu/web/hkvanorder/usability_project.htm
    • Philip Welch: Usability Research Activity - Lego Package Instructions: Are They Necessary and Do They Work?
      http://pjwelch.homestead.com/usability.html
    • Sarah Welch: Usability Study Report: NOKIA 5120 Cellular Phone
      http://www.sarahlou23.homestead.com/usability.html
    • Dana Whelan: MVX-480 Cellular Telephone Usability Report
      ftp://publish.bsu.edu/web/DLWHELAN/MVX-480-2.html

    Undergraduate Student Reports

    TEDU 206, Spring 2008

    TEDU 206, Spring 2004

    TEDU 206, Spring 2003