Elaine Heinzman

Content Strategist and Information Architect

Designing for Users with Autism

Designing and using websites is getting more challenging for a lot of us. In addition to more older Americans accessing the internet only via smartphones, more young people than ever before are living with diagnosed cognitive disabilities such as autism spectrum disorder (ASD), which the Centers for Disease Control and Prevention says affects 1 in 42 boys and 1 in 189 girls.

Researcher Cheryl Cohen shared those numbers in a UXDC Conference session I attended back in April on web accessibility for teens and adults with autism. Cohen gave an overview of the cognitive traits that can affect users with autism, along with recommendations for improving websites and apps to better meet their needs. It was very eye-opening!

What should we know about autistic users, and how can we design websites and apps to give them the best user experience? Here are the considerations and solutions that Cohen shared:

  • Contextual misunderstanding: Whether presented in words or in imagery, idioms and metaphors can be confusing to some people with autism.
    • Use more intuitive, less symbolic icons. Include descriptive text, which helps improve SEO, too.
    • When you’re writing for your website, keep the language simple. This might include shorter sentences or a conversational tone.
  • Visual processing: When looking at a lot of information on one screen, some people with autism become confused or distracted, so they simply focus on one specific item and ignore the rest of the page.
    • More white space, more visuals. Too much stuff crammed onto a screen distracts users and can add unnecessary steps to an otherwise simple task.
    • Fewer words, more bulleted lists. Large blocks of text make it difficult to find and focus on what is most important on a page.
    • Does your website feature rapid animation viewable only with Flash Player? Get rid of it. Fast-moving visuals are hard to look at and process.
  • Auditory processing: From voices to machines to their environment, some people with autism focus equally on multiple sound sources.
    • Sound quality matters. If your audio content or videos feature muddy or distorted sound, someone with autism will have a harder time discerning voices.
    • Captions improve comprehension. Mentally matching the sound they’re hearing with the images they’re seeing can be more difficult for a person with autism. Add captions to your videos and images as often as possible; one way to add a captions track to a video is sketched after this list.
  • Different way of mentally organizing items: Inconsistencies can make it challenging for a person with autism to use web interfaces, especially if that person has trouble getting past mistakes or exceptions within a website.  
    • Watch how you design forms. In Cohen’s research, she found that teens with autism had a hard time filling out web-based forms. The biggest culprit? Inconsistent spacing between labels and input boxes.
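As promised in the captions point above, here’s a minimal sketch, in TypeScript running in the browser, of attaching a captions track to an HTML5 video. The video id, the caption file name, and the language are illustrative assumptions, not anything specific to Cohen’s research.

```typescript
// Minimal sketch: attach a WebVTT captions track to an existing HTML5 video.
// "#intro-video" and "captions.en.vtt" are hypothetical placeholders.
const video = document.querySelector<HTMLVideoElement>("#intro-video");

if (video) {
  const track = document.createElement("track");
  track.kind = "captions";          // mark this track as captions, not subtitles
  track.src = "captions.en.vtt";    // timed caption text in WebVTT format
  track.srclang = "en";
  track.label = "English captions";
  track.default = true;             // display the captions without extra clicks
  video.appendChild(track);
}
```

Most video platforms also let you upload a caption file directly, which accomplishes the same thing without any code.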

The teens she interviewed and observed will, perhaps, grow up to become members of our clients’ organizations — but at the very least, they will be, or already are, consumers and users of other online content and resources. Improving accessibility for these users improves the digital experience for all users, so why not always design with these user needs in mind?

To learn more about designing for those with cognitive challenges, check out these resources from the good folks at Web Accessibility in Mind (WebAIM).

Are you considering these factors when designing websites or apps? What other specific user accessibility considerations have you come across that improve the UX for all users?

Elaine Heinzman

Content Strategist and Information Architect

Site Search Best Practice: Make the Search Box Bigger

Search drives almost everything online. While lots of us bookmark pages or click on links that take us from one website to another, typing keywords into a search engine and hitting ‘Return’ is how most of us, most of the time, find what we’re looking for.

When using search on a specific website (versus a search engine), we want an input box that allows us to see most, if not all, of the words we type in for our search. Yet you’ve probably had the experience of typing search terms into a too-small input box. Maybe the box is too short, so the text shows up looking tiny. Or just as frustrating, your query runs too long and scrolls out of sight.

User-experience gurus Nielsen Norman Group has the data to prove that these too-small search boxes aren’t just your imagination: “The average search box is 18-characters wide, [and] 27% of queries were too long to fit into it.”

Better to design a search-input box — or really, any kind of box where the user types in text — to be too wide than too narrow. Err on the taller side as well, so that there’s some white space around the words.
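If your search box comes from a template you can’t easily restyle, a small script can widen it. Here’s a minimal sketch in TypeScript; the “#site-search” selector and the 30-character target are illustrative assumptions, loosely inspired by the Nielsen Norman Group finding quoted above.

```typescript
// Minimal sketch: make a search input wide (and tall) enough that typical
// queries stay visible. "#site-search" is a hypothetical element id.
const searchBox = document.querySelector<HTMLInputElement>("#site-search");

if (searchBox) {
  searchBox.size = 30;                   // show roughly 30 average-width characters
  searchBox.style.padding = "0.5em";     // white space above and below the words
  searchBox.style.fontSize = "1rem";     // keep the typed text from looking tiny
}
```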

Don’t box in your users; give them the space they need to quickly review and revise their query before they submit it.

Elaine Heinzman

Content Strategist and Information Architect

User Experience in the Face of Trauma

I’m a very lucky person. I haven’t experienced anything that would qualify as a major traumatic event, and my life isn’t generally a series of inconveniences. Plenty of other people don’t have that kind of good fortune. And since I’m in the business of user experience (UX), I want to use this blog post to explore something I learned about at a recent UXCamp event that I attended: the less frequently considered usability strategy called trauma-informed UX.

Trauma-informed UX most immediately affects people during or after a traumatic experience, as well as during a relapse. These are users who come to an organization because they need help dealing with trauma, including:

  • Survivors.
  • Patients living with a serious disease or injury.
  • The loved ones of survivors and patients.

The main secondary audiences include:

  • The greater communities that these survivors and patients will return to.
  • Medical, law-enforcement, legal and social-services workers serving survivor and patient populations.
  • Donors and financial entities that provide support to these workers.

Trauma-informed UX also should consider those who’ve previously experienced a traumatic encounter with an organization that was supposed to help them. A straightforward example would be a crime survivor who’s had a negative interaction with their local police department or emergency room. A less-obvious example that The Marshall Project recently wrote about: juveniles once held in California detention facilities.

In an online survey, California’s state and community corrections board asked formerly incarcerated children and their families how the state could improve juvenile detention. In addition to “the childishly predictable [comments] — I didn’t get the bunk I wanted; they punished us all as a group,” survey respondents provided thoughtful and detailed recommendations including “more vegetables, more dental care…, [and] an easier system for sending academic transcripts from school to jail and back.”

I love that corrections officials asked their users for feedback so the state could better serve these families and their communities. Individual interviews are my preferred UX research tool, though in this case, interviews would have been too expensive and time-consuming.

Regardless of the tool you use to get user feedback, with a trauma-informed UX process, there are additional and more delicate considerations that you must address:

  • Are you dealing with a user population that needs to worry about physical or digital surveillance?
  • Can you streamline the experience to give traumatized users more control of the time they spend dealing with your organization?
  • Is a website, an app, or an SMS-based experience the best way to serve users who are concerned about surveillance and time?
  • What legal requirements must your organization meet? This can include patient confidentiality or client anonymity.

While you’re doing user research for a project that will serve users affected by trauma, or gathering user feedback after the project launches, focus on speaking with those who have already healed; they’ll be more open to sharing their experiences because they’re not currently living through the trauma.

What other nuanced usability considerations have you come across?

David Reich

Summer Intern

The Psychology of Web Response Times

Hi! I’m David Reich, and I’ve been interning at Matrix Group for the summer. I’ll be writing a post summarizing my experience here before I leave, but I’ve still got a bit more time, so today I’m posting some slightly more technical content.

One of the most interesting things I did for my internship was research. A few times, I was assigned to study and summarize certain aspects of web development.  This meant I got to learn about both my research topics and, just as usefully, how to write professionally.

Recently, I did some research on the design and psychology of response times in application development. Not just web app development, either – the principles that apply to Matrix Group’s products are just as applicable to other kinds of interaction between humans and systems, like games, telephones, or even conversations.

Temporal Cognition: How Long Is Too Long?

When interacting with a well-designed piece of software, the user enters into a ‘conversational’ mode with it. Users receive useful responses just as quickly as in a verbal interaction, and feel just as powerful as when manipulating physical tools. The application moves at the same speed they do and never interrupts their thought processes, so they can reach a productive state of ‘flow’.

That’s for a well-designed application. What, then, is the quantitative difference between software that works with the user and software that breaks their concentration? In 1993, Jakob Nielsen described three boundaries that separate the two. George Miller, 25 years before, went into greater detail with a similar conclusion. In the context of user-interface design, that research might seem ancient, but in psychology it isn’t. People today have the same temporal cognition that they did forty years ago. Miller’s research isn’t outdated; it’s categorical.

First Boundary

Nielsen’s first boundary lies at 0.1 seconds.

A tenth of a second: this is the time it should take for a character to appear on the screen after being typed, for a checkbox to become selected, or for a short table to sort itself at the user’s request. When the user’s command is executed in less than a tenth of a second, the user feels like they’re in direct control of the software – as direct as flipping a light switch or turning a doorknob. People won’t even register waiting times of less than 0.1 seconds. If an application takes half a second to run a JavaScript function, though, users will perceive the computer as taking control.

Second Boundary

The second boundary is at 1 second according to Nielsen, but 2 seconds by Miller. In either case, it’s the point at which the user is in danger of losing their productive sense of flow. When the user is forced to wait for less than 2 seconds, they’ll notice a delay, but it probably won’t feel unnecessary or distracting. For delays between 0.1 and 2 seconds, a progress indicator is unnecessary, and might even be distracting.

Third Boundary

Nielsen and Miller also disagree over the timing of the third boundary. Nielsen puts it at 10 seconds, and Miller at 15. This third boundary is the point when the user loses focus on the application and shifts their attention to something else. It should be avoided whenever possible. Times between the second and third boundaries – between 2 and 10 seconds – should have some sort of progress indicator. A spinning cursor is appropriate for times in the lower end of that range. For times above 10 seconds, assume that the user’s focus on their task has been lost, and that they’ll need to reorient themselves when they return to the application. A progress bar that either estimates the percentage of completed processing or provides some feedback about the current task is vital. Waiting times of longer than 10 seconds should be incurred only when the user has just completed a task, and the user should be allowed to return at their own convenience.
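To make those boundaries concrete, here’s a minimal TypeScript sketch of choosing a feedback pattern based on how long you expect an operation to take. The showSpinner/showProgressBar helpers and the exportReport example are hypothetical stand-ins for whatever your interface already provides, and the thresholds follow Nielsen’s numbers.

```typescript
// Minimal sketch: pick user feedback based on the expected duration of a task,
// using Nielsen's 0.1 / 1 / 10 second boundaries. The four helpers below are
// hypothetical placeholders for your own UI indicators.
const showSpinner = () => console.log("spinner on");
const hideSpinner = () => console.log("spinner off");
const showProgressBar = () => console.log("progress bar on");
const hideProgressBar = () => console.log("progress bar off");

async function runWithFeedback<T>(
  task: () => Promise<T>,
  expectedSeconds: number
): Promise<T> {
  if (expectedSeconds < 1) {
    // Below the second boundary: the delay is either imperceptible (< 0.1 s)
    // or noticeable but tolerable, so an indicator would only add noise.
    return task();
  }

  if (expectedSeconds < 10) {
    // Between the second and third boundaries: show a simple busy indicator.
    showSpinner();
    try {
      return await task();
    } finally {
      hideSpinner();
    }
  }

  // Past the third boundary: assume the user will look away, so show a
  // progress bar that reports how far along the work is.
  showProgressBar();
  try {
    return await task();
  } finally {
    hideProgressBar();
  }
}

// Example usage (exportReport is hypothetical):
// runWithFeedback(() => exportReport(), 12);
```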

Key Landmarks

Those three boundaries – 0.1, 1, and 10 seconds – are the key landmarks of responsiveness for web applications. I would attribute Nielsen and Miller’s disagreement over precise numbers to the vagueness and context-dependency of the entire question. Nielsen’s numbers, powers of ten, are prettier and easier to remember, but Miller’s may be more psychologically accurate.

A lot of this sounds very academic and theoretical, but it can be meaningful for the success of a web-based business: according to WebPerformanceToday.com, 57% of consumers say they’re likely to abandon a page that takes more than three seconds to load.


Do you agree with these studies? How long do you wait for a task/web page to complete or load?

Kelly Browning

Director of Strategy

Comparative Usability Testing DIY Style

When there’s more than one viable design option to consider, comparative usability testing can help you evaluate competing alternatives. This technique is especially useful at the early stages of a design project, because it allows you to explore options rather than getting locked into a single approach prematurely.

Everything is multiplied with comparative usability testing: The effort to design and create prototypes, the time to recruit and schedule participants, the work to facilitate the tests, and the amount of information to interpret and communicate. Some big user experience firms can do comparative usability testing with tons of screens, but most of us don’t have the resources to carry out the process at that level.

Fortunately, comparative usability testing can be effective even when done DIY-style.  Applied on an appropriate scale, it can be the perfect technique to help you nail a few key features.

Keep it Simple

When working with limited resources, the key to success is to limit the scope of your design and testing.

  • One feature or screen at a time. If this is your first time doing this, consider testing just one feature or screen at a time so you don’t get overwhelmed. After a while, you’ll get a sense of how much your team can handle.
  • Something with a big impact. You might have time to do this only once or twice in the early stage of your project, so make it count. It only makes sense to test an important feature that will make a significant difference to your users.
  • Don’t muddy the waters. For instance, if you’re testing three shopping cart layouts, keep the product details the same so you can focus on what you’re testing: The layout.
  • Just a few variations. Again, to keep things simple, you’ll probably want to test just a few design variations. I’d suggest 3 at most for practical DIY-style comparative usability testing.

Be Consistent

You’ll want to design scenarios and tasks as you normally would in a usability test, with this important guideline in mind:

  • Use the same tasks for every design variation. The tasks for each design option should be identical or as close as possible. If “enter your birthday” causes problems in design A, you will need to know if it’s also problematic in design B.

Avoid Bias

Interpretation is a bit more complicated with comparative usability testing, and it becomes that much more critical to keep your test results pure, so to speak. To this end:

  • Mix the order of the design variations. If design A is always first, couldn’t that affect how people tend to react to design B? Switch the order to level the playing field; one simple way to rotate the order is sketched below.
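Here’s a minimal counterbalancing sketch in TypeScript: each new participant sees the variations in a rotated order, so no single design always goes first. The design names are placeholders.

```typescript
// Minimal sketch: counterbalance the presentation order across participants
// by rotating the list, so no design variation is always shown first.
const designs = ["Design A", "Design B", "Design C"]; // placeholder names

function orderForParticipant(participantIndex: number, options: string[]): string[] {
  const shift = participantIndex % options.length;
  // Rotate the list by `shift` positions: A-B-C, then B-C-A, then C-A-B, repeating.
  return [...options.slice(shift), ...options.slice(0, shift)];
}

// Participant 1 sees A, B, C; participant 2 sees B, C, A; participant 3 sees C, A, B.
for (let p = 0; p < 6; p++) {
  console.log(`Participant ${p + 1}: ${orderForParticipant(p, designs).join(" -> ")}`);
}
```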

And the following best practices deserve special emphasis:

  • Be objective, or at least appear that way. Do you have a horse in this race? If so, avoid comments or non-verbal cues that would influence the participant. Better yet, recuse yourself and have somebody else facilitate the tests.
  • Have another person observe the tests. This is a best practice for all usability testing but especially when you are comparing multiple alternatives. Those competing options may have some egos or politics around them, so it’s important to build objectivity into your process by having at least one other person observe the test.

Have Realistic Expectations

After a few tests, one of two things can happen:

  • A clear winner emerges. This is less likely the more complicated the features and the testing. It’s not a realistic goal for most tests.
  • There’s no clear winner. This is the most likely outcome. You may see strengths and weaknesses in multiple design variations. This is an opportunity to think critically about what happened and why, and to consider next steps.

Remember:  It’s Worth the Effort

Comparative usability testing can help you discover the advantages of multiple design variations. It can be invaluable at the earlier stages of your project, a way to ensure that you are pursuing the strongest design possible, rather than wasting time “perfecting” something that will never be as good as a superior alternative. Even on a shoestring budget, comparing multiple options up front can help you perfect the features that can make or break your product.

As of now, there isn’t much out there about DIY comparative usability testing (also called “competitive” usability testing). However, there are a few more formal research studies and other resources that are relevant and may be of interest to you.


What kinds of DIY methods have you used/discovered when on a limited design budget?