Elaine Heinzman

Content Strategist and Information Architect

Creating Savvy Surveys for Better Member Feedback

Here in the D.C. metro area, midsummer means high tourist season, and the middle of convention-planning season. Several of our association clients are ramping up for their annual conventions in August, September, or October, and that planning includes the convention post-mortem: association staff return home and look to their members, sponsors, and vendors to see what went right, and what went wrong, at this year’s meeting.

Associations Now recently shared ideas for questions that are often missing from post-convention surveys. As someone who’s attended conferences on user experience, content strategy, and journalism, I’d love it if the folks behind the conventions would prompt us attendees with more specific questions about why we even went in the first place. Associations Now says the reason we go is primarily “to make connections and get practical ideas that [we] can implement once [we’re] back in the office,” but is that always the case?

Here are some other suggested questions that can help to give your organization better insight:

  • What were your top goals heading into the conference? Encourage attendees to get specific about why they registered in the first place or what they wanted to achieve, like sitting in on a certain workshop, learning about a new tool, or getting warm introductions to potential clients.
  • How well were you able to meet those goals? Do attendees express frustration about missing multiple sessions because they were programmed within the same time slot? Did they have a hard time getting into a really crowded happy hour event? The answers here can give you insight into how you might adjust the schedule for next time.
  • What sessions/events did you find the most useful for your goals? For making connections with people? These questions drill down into what worked and what didn’t. If a once-popular event drew little traffic this year, it’s time to rethink repeating it next year.
  • What were the most meaningful conversations that you had? What were the most meaningful connections that you made? These questions can be a way to get the pulse on what people were most interested in or concerned about. Maybe it’s an industry-wide issue, or maybe it was a subject specific to the convention.

To make it easier for people to answer these questions, create text boxes that allow a greater number of characters (for the more long-winded respondents) and that use a larger, sans-serif font.
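As a rough illustration, here’s a minimal sketch in TypeScript of what a roomier comment box might look like; the question name, form id, and exact sizes are assumptions, not anything from a particular survey tool.

```typescript
// Hypothetical survey comment box: plenty of room to type, in a readable
// sans-serif face. Adjust names and sizes to fit your own form.
const comments = document.createElement("textarea");
comments.name = "conference-goals";   // hypothetical question name
comments.rows = 8;                    // tall enough for long-winded answers
comments.cols = 60;                   // wide enough to see what you wrote
comments.maxLength = 2000;            // allow a generous number of characters
comments.style.fontFamily = "Arial, Helvetica, sans-serif";
comments.style.fontSize = "1.125rem"; // larger type is easier to review

document.querySelector("#post-conference-survey")?.appendChild(comments);
```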

And it’s always nice to offer an incentive for folks to complete the survey, like a discount code for a webinar or a chance to win a free or discounted conference registration for next time.

What other questions do you think should always be asked post-conference?

Elaine Heinzman

Content Strategist and Information Architect

Designing for Users with Autism

Designing usable websites is getting more challenging for a lot of us. More older Americans are accessing the internet via smartphones only, and more young people than before are living with diagnosed cognitive disabilities like autism spectrum disorder (ASD), which the Centers for Disease Control and Prevention says affects 1 in 42 boys and 1 in 189 girls.

Researcher Cheryl Cohen recently shared those numbers in a UXDC Conference session about web accessibility for teens and adults with autism that I was able to attend back in April. Cohen gave an overview of the cognitive traits that can affect users with autism and some recommendations for improving websites and apps to better meet their needs. This was very eye-opening to me!

What should we know about autistic users, and how can we design websites and apps to give them the best user experience? Here are the considerations and solutions that Cohen shared:

  • Contextual misunderstanding: Whether presented in words or in imagery, idioms and metaphors can be confusing to some people with autism.
    • Use more intuitive, less symbolic icons. Include descriptive text, which helps improve SEO, too.
    • When you’re writing for your website, keep the language simple. This might include shorter sentences or a conversational tone.
  • Visual processing: When looking at a lot of information all on one screen, some people with autism become confused or distracted, so they simply focus on one specific item and ignore the rest of the page.
    • More white space, more visuals. Too much stuff crammed onto a screen distracts users and can add unnecessary steps to an otherwise simple task.
    • Fewer words, more bulleted lists. Large blocks of text make it difficult to find and focus on what is most important on a page.
    • Does your website feature rapid animation that’s only viewable with Flash Player? Get rid of it: fast-moving visuals are hard to look at and process.
  • Auditory processing: From voices to machines to their environment, some people with autism focus equally on multiple sound sources.
    • Sound quality matters. If your audio content or videos feature muddy or distorted sound, someone with autism will have a harder time discerning voices.
    • Captions improve comprehension. Mentally matching the sound they’re hearing with the images they’re seeing can be more difficult for a person with autism. Add captions to your videos and images as often as possible; there’s a quick code sketch after this list.
  • Different way of mentally organizing items: Inconsistencies can make it challenging for a person with autism to use web interfaces, especially if that person has trouble getting past mistakes or exceptions within a website.  
    • Watch how you design forms. In Cohen’s research, she found that teens with autism had a hard time filling out web-based forms. The biggest culprit? Inconsistent spacing between labels and input boxes.
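To make that last point about forms concrete, here’s a minimal sketch, written in TypeScript to keep the example self-contained, that applies identical spacing to every label/input pair. The .form-row class name is an assumption, and in practice these rules would usually live in a stylesheet rather than a script.

```typescript
// Hypothetical form rows: every label/input pair gets the same layout, so no
// field looks like an exception the user has to puzzle over.
document.querySelectorAll<HTMLElement>(".form-row").forEach((row) => {
  row.style.display = "grid";
  row.style.gridTemplateColumns = "12rem 1fr"; // label column, input column
  row.style.columnGap = "0.75rem";             // same gap on every row
  row.style.marginBottom = "1rem";             // same vertical rhythm, too
});
```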
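And for the captions recommendation above, here’s a similarly hedged sketch of adding a captions track to an HTML5 video; the element id and the WebVTT file path are hypothetical.

```typescript
// Hypothetical video element with an English captions track, so viewers can
// read along instead of relying on the audio alone.
const video = document.querySelector<HTMLVideoElement>("#session-recap");

if (video) {
  const track = document.createElement("track");
  track.kind = "captions";                // captions, not just subtitles
  track.src = "/media/session-recap.vtt"; // hypothetical WebVTT caption file
  track.srclang = "en";
  track.label = "English";
  track.default = true;                   // show captions without extra clicks
  video.appendChild(track);
}
```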

The teens she interviewed and observed will, perhaps, grow up to become members of our clients’ organizations — but at the very least, they will be, or already are, consumers and users of other online content and resources. Improving accessibility for these users improves the digital experience for all users, so why not always design with these user needs in mind?

To learn more about designing for people with cognitive challenges, check out these resources from the good folks at WebAIM (Web Accessibility in Mind).

Are you considering these factors when designing websites or apps? What other specific user accessibility considerations have you come across that improve the UX for all users?

Elaine Heinzman

Content Strategist and Information Architect

Site Search Best Practice: Make the Search Box Bigger

Search drives almost everything online. While lots of us bookmark pages or click on links that take us from one website to another, typing keywords into a search engine and hitting ‘Return’ is how most of us, most of the time, try to find what we’re looking for.

When using search on a specific website (versus a search engine), we want an input box that allows us to see most, if not all, of the words we type in for our search. Yet you’ve probably had the experience of typing search terms into a too-small input box. Maybe the box is too short, so the text shows up looking tiny. Or just as frustrating, your query runs too long and scrolls out of sight.

User-experience gurus Nielsen Norman Group have the data to prove that these small search boxes are not just your imagination: “The average search box is 18-characters wide, [and] 27% of queries were too long to fit into it.”

Better to design a search-input box (or really, any kind of box where the user types in text) to be too wide than too narrow. Make it a bit taller, too, so that there’s some white space around the words.
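Here’s a minimal sketch in TypeScript of sizing such a box; the element id and the exact numbers are assumptions, and the same result could just as easily come from a stylesheet.

```typescript
// Hypothetical site-search box: room for roughly 30 visible characters, in a
// readable size, with a little breathing space around the query.
const searchBox = document.querySelector<HTMLInputElement>("#site-search");

if (searchBox) {
  searchBox.size = 30;                        // wider than the 18-character average
  searchBox.style.fontSize = "1.125rem";      // easy to review at a glance
  searchBox.style.padding = "0.5rem 0.75rem"; // white space around the words
}
```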

Don’t box in your users; give them the space they need to quickly review and revise their query before they submit it.

Elaine Heinzman

Content Strategist and Information Architect

User Experience in the Face of Trauma

I’m a very lucky person. I haven’t experienced anything that would qualify as a major traumatic event, and my life isn’t generally a series of inconveniences. Plenty of other people don’t have that kind of good fortune. And since I’m in the business of user experience (UX), I want to use this blog post to explore something I learned about at a recent UXCamp event that I attended: the less frequently considered usability strategy called trauma-informed UX.

Trauma-informed UX most immediately serves people during or after a traumatic experience, and also during a relapse. These are users who come to an organization because they need help dealing with trauma, including:

  • Survivors.
  • Patients living with a serious disease or injury.
  • The loved ones of survivors and patients.

The main secondary audiences include:

  • The greater communities that these survivors and patients will return to.
  • Medical, law-enforcement, legal, and social-services workers serving survivor and patient populations.
  • Donors and financial entities that provide support to these workers.

Trauma-informed UX also should consider those who’ve previously experienced a traumatic encounter with an organization that was supposed to help them. A straightforward example would be a crime survivor who’s had a negative interaction with their local police department or emergency room. A less-obvious example that The Marshall Project recently wrote about: juveniles once held in California detention facilities.

In an online survey, California’s state and community corrections board asked formerly incarcerated children and their families how the state could improve juvenile detention. In addition to “the childishly predictable [comments] — I didn’t get the bunk I wanted; they punished us all as a group,” survey respondents provided thoughtful and detailed recommendations including “more vegetables, more dental care…, [and] an easier system for sending academic transcripts from school to jail and back.”

I love that corrections officials asked for feedback from their users so the state could better serve these families and their communities. Individual interviews are my preferred UX research tool, though in this case, it would have been too expensive and time-consuming to do interviews.

Regardless of the tool you use to get user feedback, with a trauma-informed UX process, there are additional and more delicate considerations that you must address:

  • Are you dealing with a user population that needs to worry about physical or digital surveillance?
  • Can you streamline the experience to give traumatized users more control of the time they spend dealing with your organization?
  • Is a website, an app, or an SMS-based experience the best way to serve users who are concerned about surveillance and time?
  • What legal requirements must your organization meet? This can include patient confidentiality or client anonymity.

While you’re doing user research for a project that will serve users affected by trauma, or getting user feedback after the project launch, focus on speaking with those who have already healed; they’ll be more open to sharing their experiences because they’re not currently living through the trauma.

What other nuanced usability considerations have you come across?

David Reich

Summer Intern

The Psychology of Web Response Times

Hi! I’m David Reich, and I’ve been interning at Matrix Group for the summer. I’ll be writing a post summarizing my experience here before I leave, but I’ve still got a bit more time, so today I’m posting some slightly more technical content.

One of the most interesting things I did for my internship was research. A few times, I was assigned to study and summarize certain aspects of web development.  This meant I got to learn about both my research topics and, just as usefully, how to write professionally.

Recently, I did some research about the design and psychology of response times in application development. Not just web-app development, either: the principles that apply to Matrix Group’s products are just as applicable to other types of interaction between humans and systems, like games, telephones, or even conversations.

Temporal Cognition: How Long Is Too Long?

When interacting with a well-designed piece of software, the user enters into a ‘conversational’ mode with it. Users receive useful responses just as quickly as in a verbal interaction, and feel just as powerful as when manipulating physical tools. The application moves at the same speed they do, and never interrupts their thought processes, so they can reach a productive state of ‘flow’.

That’s for a well-designed application. What, then, is the quantitative difference between software that works with the user and software that breaks their concentration? In 1993, Jakob Nielsen described three boundaries that separate the two. Robert Miller, 25 years before, went into greater detail and reached a similar conclusion. In the context of user-interface design, that research might seem ancient, but in psychology it isn’t. People today have the same temporal cognition that they did forty years ago. Miller’s research isn’t outdated; it’s categorical.

First Boundary

Nielsen’s first boundary lies at 0.1 seconds.

A tenth of a second: this is the time it should take for a character to appear on the screen after being typed, for a checkbox to become selected, or for a short table to sort itself at the user’s request. When it takes less than a tenth of a second for the user’s command to be executed, the user feels like they’re in direct control of the software, as direct as flipping a light switch or turning a doorknob. People won’t even register waiting times of less than 0.1 seconds. If an application takes half a second to run a JavaScript function, though, users will perceive the computer as taking control.

Second Boundary

The second boundary is at 1 second according to Nielsen, but 2 seconds by Miller. In either case, it’s the point at which the user is in danger of losing their productive sense of flow. When the user is forced to wait up to 2 seconds, they’ll notice the delay, but it probably won’t feel unreasonable. For delays between 0.1 and 2 seconds, a progress indicator is unnecessary, and might even be distracting.

Third Boundary

Nielsen and Miller also disagree over the time of the third boundary. Nielsen puts it at 10 seconds, and Miller at 15. This third boundary is the point when the user loses focus on the application and shifts their attention to something else. It should be avoided whenever possible. Waits between the second and third boundaries, roughly 2 to 10 seconds, should have some sort of progress indicator. A spinning cursor is appropriate for times in the lower end of that range. For times above 10 seconds, assume that the user’s focus on their task has been lost, and that they’ll need to reorient themselves when they return to the application. A progress bar that either estimates the percentage of completed processing or provides some feedback about the current task is vital. Waits of longer than 10 seconds should only occur at natural breaking points, such as right after the user has completed a task, and the user should be allowed to come back at their own convenience.
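As a rough sketch of how those boundaries might translate into code, here’s a TypeScript example that only shows feedback once it’s actually needed. The showSpinner, showProgressBar, and hideIndicators helpers are hypothetical stand-ins for real UI code, and the thresholds follow Nielsen’s numbers.

```typescript
// Hypothetical UI helpers; swap in your real spinner and progress bar.
function showSpinner(): void { console.log("spinner on"); }
function showProgressBar(): void { console.log("progress bar on"); }
function hideIndicators(): void { console.log("indicators off"); }

// Run a task with feedback keyed to the boundaries: nothing below ~1 second,
// a spinner from ~1 to ~10 seconds, and a progress bar once the wait passes ~10.
async function runWithFeedback<T>(task: () => Promise<T>): Promise<T> {
  const spinnerTimer = setTimeout(showSpinner, 1_000);       // second boundary
  const progressTimer = setTimeout(showProgressBar, 10_000); // third boundary
  try {
    return await task();
  } finally {
    clearTimeout(spinnerTimer);
    clearTimeout(progressTimer);
    hideIndicators();
  }
}

// Usage (hypothetical endpoint):
// runWithFeedback(() => fetch("/reports/annual").then((r) => r.json()));
```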

Key Landmarks

Those three boundaries – 0.1, 1, and 10 seconds – are the key landmarks of responsiveness for web applications. I would attribute Nielsen and Miller’s disagreement over precise numbers to the vagueness and context-dependency of the entire question. Nielsen’s numbers, powers of ten, are prettier and easier to remember, but Miller’s may be more psychologically accurate.

A lot of this sounds very academic and theoretical, but it could be meaningful for the success of a web-based business: according to WebPerformanceToday.com, 57% of consumers say that they’re likely to abandon a page if it takes more than three seconds to load.

Do you agree with these studies? How long do you wait for a task/web page to complete or load?