Elaine Heinzman

Content Strategist and Information Architect

User Experience in the Face of Trauma

I’m a very lucky person. I haven’t experienced anything that would qualify as a major traumatic event, and my life isn’t generally a series of inconveniences. Plenty of other people don’t have that kind of good fortune. And since I’m in the business of user experience (UX), I want to use this blog post to explore something I learned about at a recent UXCamp event that I attended: the less frequently considered usability strategy called trauma-informed UX.

Trauma-informed UX most immediately affects people during or after a traumatic experience, as well as during a relapse. These are users who come to an organization because they need help dealing with trauma, including:

  • Survivors.
  • Patients living with a serious disease or injury.
  • The loved ones of survivors and patients.

The main secondary audiences include:

  • The greater communities that these survivors and patients will return to.
  • Medical, law-enforcement, legal, and social-services workers serving survivor and patient populations.
  • Donors and financial entities that provide support to these workers.

Trauma-informed UX also should consider those who’ve previously experienced a traumatic encounter with an organization that was supposed to help them. A straightforward example would be a crime survivor who’s had a negative interaction with their local police department or emergency room. A less-obvious example that The Marshall Project recently wrote about: juveniles once held in California detention facilities.

In an online survey, California’s state and community corrections board asked formerly incarcerated children and their families how the state could improve juvenile detention. In addition to “the childishly predictable [comments] — I didn’t get the bunk I wanted; they punished us all as a group,” survey respondents provided thoughtful and detailed recommendations including “more vegetables, more dental care…, [and] an easier system for sending academic transcripts from school to jail and back.”

I love that corrections officials asked for feedback from their users so the state could better serve these families and their communities. Individual interviews are my preferred UX research tool, though in this case they would have been too expensive and time-consuming.

Regardless of the tool you use to get user feedback, with a trauma-informed UX process, there are additional and more delicate considerations that you must address:

  • Are you dealing with a user population that needs to worry about physical or digital surveillance?
  • Can you streamline the experience to give traumatized users more control of the time they spend dealing with your organization?
  • Is a website, an app, or an SMS-based experience the best way to serve users who are concerned about surveillance and time?
  • What legal requirements must your organization meet? This can include patient confidentiality or client anonymity.

While you’re doing user research for a project that will serve users affected by trauma, or getting user feedback after the project launches, focus on speaking to those who have already healed; they’ll be more open to sharing their experiences because they’re not currently living through the trauma.

What other nuanced usability considerations have you come across?

Summer Intern

The Psychology of Web Response Times

Hi! I’m David Reich, and I’ve been interning at Matrix Group for the summer. I’ll be writing a post summarizing my experience here before I leave, but I’ve still got a bit more time, so today I’m posting some slightly more technical content.

One of the most interesting things I did for my internship was research. A few times, I was assigned to study and summarize certain aspects of web development.  This meant I got to learn about both my research topics and, just as usefully, how to write professionally.

Recently, I did some research on the design and psychology of response times in application development. Not just web app development, either – the principles that apply to Matrix Group’s products apply just as well to other kinds of interaction between humans and systems, like games, telephones, or even conversations.

Temporal Cognition: How Long Is Too Long?

When interacting with a well-designed piece of software, the user enters into a ‘conversational’ mode with it. Users receive useful responses just as quickly as in a verbal interaction, and feel just as powerful as when manipulating physical tools. The application moves at the same speed they do, and never interrupts their thought processes, so they can reach a productive state of ‘flow’.

That’s for a well-designed application. What, then, is the quantitative difference between software that works with the user and software that breaks their concentration? In 1993, Jakob Nielsen described three boundaries that separate the two. Robert Miller, 25 years before, went into greater detail and reached a similar conclusion. In the context of user-interface design, that research might seem ancient, but in psychology it isn’t. People today have the same temporal cognition that they did forty years ago. Miller’s research isn’t outdated; it’s categorical.

First Boundary

Nielsen’s first boundary lies at 0.1 seconds.

A tenth of a second: this is the time it should take for a character to appear on the screen after being typed, for a checkbox to become selected, or for a short table to sort itself at the user’s request. When it takes less than a tenth of a second for the user’s command to be executed, the user feels like they’re in direct control of the software – as direct as flipping a light switch or turning a doorknob. People won’t even register waiting times of less than 0.1 seconds. If an application takes half a second to run a JavaScript function, though, users will perceive the computer taking control.
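To make that concrete, here’s a minimal sketch of my own (not from Nielsen’s or Miller’s work) showing how you could time a click handler against the 0.1-second budget in the browser; sortTable is a hypothetical stand-in for whatever work the interaction actually does.

```typescript
// A minimal sketch: time a click handler against the 0.1-second budget
// using the browser's performance.now(). `sortTable` is a hypothetical
// placeholder for whatever the click actually triggers.
declare function sortTable(table: HTMLTableElement): void; // hypothetical helper

function handleSortClick(table: HTMLTableElement): void {
  const start = performance.now();

  sortTable(table); // do the actual work

  const elapsed = performance.now() - start;
  if (elapsed > 100) {
    // Past the first boundary: the user starts to feel the computer take over.
    console.warn(`Sort took ${elapsed.toFixed(1)} ms; consider optimizing or deferring it.`);
  }
}
```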

Second Boundary

The second boundary is at 1 second according to Nielsen, but at 2 seconds according to Miller. In either case, it’s the point at which the user is in danger of losing their productive sense of flow. When the user is forced to wait for up to 2 seconds, they’ll notice the delay, but it probably won’t feel unreasonable or break their flow. For delays between 0.1 and 2 seconds, a progress indicator is unnecessary, and might even be distracting.

Third Boundary

Nielsen and Miller also disagree over the timing of the third boundary. Nielsen puts it at 10 seconds, and Miller at 15. This third boundary is the point when the user loses focus on the application and shifts their attention to something else. It should be avoided whenever possible. Times between the second and third boundaries – between 2 and 10 seconds – should have some sort of progress indicator. A spinning cursor is appropriate for times in the lower end of that range. For times above 10 seconds, assume that the user’s focus on their task has been lost, and that they’ll need to reorient themselves when they return to the application. A progress bar that either estimates the percentage of completed processing or provides some feedback about the current task is vital. Waiting times of longer than 10 seconds should only occur when the user has just completed some task, and they should be allowed to return to the application at their own convenience.
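Here’s one way those thresholds could translate into code. This is a rough sketch of my own, not an official pattern: a slow operation gets no indicator at first, a spinner once the wait becomes noticeable, and a more informative message once it crosses the ten-second boundary. The showSpinner, showProgressMessage, and hideIndicators helpers are hypothetical placeholders.

```typescript
// A sketch of boundary-aware feedback for a slow operation.
// `showSpinner`, `showProgressMessage`, and `hideIndicators` are hypothetical
// UI helpers, not part of any real library.
declare function showSpinner(): void;
declare function showProgressMessage(message: string): void;
declare function hideIndicators(): void;

async function runWithFeedback<T>(work: Promise<T>): Promise<T> {
  // Under about a second: show nothing; an indicator would only distract.
  const spinnerTimer = setTimeout(showSpinner, 1000);

  // Past about ten seconds: the user's attention has probably wandered,
  // so give them something more informative to come back to.
  const progressTimer = setTimeout(
    () => showProgressMessage('Still working on it. Feel free to come back in a minute.'),
    10000
  );

  try {
    return await work;
  } finally {
    clearTimeout(spinnerTimer);
    clearTimeout(progressTimer);
    hideIndicators();
  }
}
```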

Key Landmarks

Those three boundaries – 0.1, 1, and 10 seconds – are the key landmarks of responsiveness for web applications. I would attribute Nielsen and Miller’s disagreement over precise numbers to the vagueness and context-dependency of the entire question. Nielsen’s numbers, powers of ten, are prettier and easier to remember, but Miller’s may be more psychologically accurate.

A lot of this sounds very academic and theoretical, but it can be meaningful for the success of a web-based business: according to WebPerformanceToday.com, 57% of consumers say that they’re likely to abandon a page if it takes more than three seconds to load.
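If you’re curious where your own pages land relative to that three-second mark, the browser’s standard Navigation Timing API will tell you. This quick snippet is just my own illustration, not something from the cited survey.

```typescript
// Check how long the current page took to load, using the standard
// Navigation Timing API, and compare it against the 3-second mark.
window.addEventListener('load', () => {
  // Wait one tick so loadEventEnd (and therefore `duration`) has been recorded.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
    if (!nav) return;

    const seconds = nav.duration / 1000; // navigation start to load event end
    console.log(`Page loaded in ${seconds.toFixed(2)} s`);
    if (seconds > 3) {
      console.warn('Over 3 seconds: many visitors may not wait this long.');
    }
  }, 0);
});
```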


Do you agree with these studies? How long do you wait for a task/web page to complete or load?

Kelly Browning

Director of Strategy

Comparative Usability Testing DIY Style

When there’s more than one viable design option to consider, comparative usability testing can help you evaluate competing alternatives. This technique is especially useful at the early stages of a design project, because it allows you to explore options, rather than getting locked into a single approach prematurely.

Everything is multiplied with comparative usability testing: The effort to design and create prototypes, the time to recruit and schedule participants, the work to facilitate the tests, and the amount of information to interpret and communicate. Some big user experience firms can do comparative usability testing with tons of screens, but most of us don’t have the resources to carry out the process at that level.

Fortunately, comparative usability testing can be effective even when done DIY-style.  Applied on an appropriate scale, it can be the perfect technique to help you nail a few key features.

Keep it Simple

When working with limited resources, the key to success is to limit the scope of your design and testing.

  • One feature or screen at a time. If this is your first time doing this, consider testing just one feature or screen at a time so you don’t get overwhelmed. After a while, you’ll get a sense of how much your team can handle.
  • Something with a big impact. You might have time to do this only once or twice in the early stage of your project, so make it count. It only makes sense to test an important feature that will make a significant difference to your users.
  • Don’t muddy the waters. For instance, if you’re testing three shopping cart layouts, keep the product details the same so you can focus on what you’re testing: The layout.
  • Just a few variations. Again, to keep things simple, you’ll probably want to test just a few design variations. I’d suggest 3 at most for practical DIY-style comparative usability testing.

Be Consistent

You’ll want to design scenarios and tasks as you normally would in a usability test, with this important guideline in mind:

  • Use the same tasks for every design variation. The tasks for each design option should be identical or as close as possible. If “enter your birthday” causes problems in design A, you will need to know if it’s also problematic in design B.

Avoid Bias

Interpretation is a bit more complicated with comparative usability testing, and it becomes that much more critical to keep your test results pure, so to speak. To this end:

  • Mix the order of the design variations. If design A is always first, couldn’t that affect how people tend to react to design B? Switch the order to level the playing field, as in the sketch below.
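If you’re scheduling sessions by hand, a simple rotation is enough to keep the order balanced. Here’s a tiny illustrative sketch of my own (the rotatedOrder function is made up for this example) that gives each participant a different starting design.

```typescript
// A tiny sketch: rotate the presentation order of design variations across
// participants so no design is always seen first.
function rotatedOrder<T>(designs: T[], participantIndex: number): T[] {
  const shift = participantIndex % designs.length;
  return [...designs.slice(shift), ...designs.slice(0, shift)];
}

// Example: three participants, three shopping cart layouts.
const layouts = ['Layout A', 'Layout B', 'Layout C'];
for (let p = 0; p < layouts.length; p++) {
  console.log(`Participant ${p + 1}: ${rotatedOrder(layouts, p).join(' -> ')}`);
}
// Participant 1: Layout A -> Layout B -> Layout C
// Participant 2: Layout B -> Layout C -> Layout A
// Participant 3: Layout C -> Layout A -> Layout B
```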

And the following best practices deserve special emphasis:

  • Be objective, or at least appear that way. Do you have a horse in this race? If so, avoid comments or non-verbal cues that would influence the participant. Better yet, recuse yourself and have somebody else facilitate the tests.
  • Have another person observe the tests. This is a best practice for all usability testing but especially when you are comparing multiple alternatives. Those competing options may have some egos or politics around them, so it’s important to build objectivity into your process by having at least one other person observe the test.

Have Realistic Expectations

After a few tests, one of two things can happen:

  • A clear winner emerges. This is less likely the more complicated the features and the testing. It’s not a realistic goal for most tests.
  • There’s no clear winner. This is the most likely outcome. You may see strengths and weaknesses in multiple design variations. This is an opportunity to think critically about what happened and why, and to consider next steps.

Remember: It’s Worth the Effort

Comparative usability testing can help you discover the advantages of multiple design variations. It can be invaluable at the earlier stages of your project, a way to ensure that you are pursuing the strongest design possible, rather than wasting time “perfecting” something that will never be as good as a superior alternative. Even on a shoestring budget, comparing multiple options up front can help you perfect the features that can make or break your product.

As of now, there isn’t much written about DIY comparative usability testing (also called “competitive” usability testing). However, there are a few more formal research studies and other resources out there that are relevant and may be interesting to you.


What kinds of DIY methods have you used/discovered when on a limited design budget?

Sarah Jedrey

Marketing Coordinator / Video Editor

Event Recap: Refresh DC Speakers Offer A Good Lesson in UX and Web Design (Part 1)

Gene Crawford (@genecrawford) speaking for Refresh DC

On Sept. 20, Sarah Mills and I went to the Refresh DC event held at Personal’s office in DC. There was pizza (thanks, American University SOC!). There were laughs (thanks, Gene [@genecrawford] and Giovanni [@giodif]!). There was a heck of a lot of brainpower in one cozy space (thanks, well, everyone!).

We came back richer by one custom Tarot deck, one web design book, and a lot of UX and conceptual design knowledge. The cards and book were rewards for being curious and speaking in public – let me tell you, those cards were a real incentive! – but getting to network with people in our industry and hear other professionals’ opinions on pertinent topics was even more valuable.

Being as new as I am, I don’t often get chances like this to network and learn. Much of what was covered was information I’d found while researching other things here at Matrix Group; hearing it reiterated by real live people, and hearing questions from fellow audience members, is really helping that information stick.

Gene Crawford, editor of unmatchedstyle and organizer of ConvergeSE (good grief, look at that parallax T. Rex!), spoke mainly about conversion points. I, being the n00b I am, learned that a conversion point is where a company asks users to provide something like personal information or money – account registration, transactions, and the like.
