Elaine Heinzman

Content Strategist and Information Architect

Creating Savvy Surveys for Better Member Feedback

Here in the D.C. metro area, midsummer means high tourist season and the middle of convention-planning season. Several of our association clients are ramping up for their annual conventions in August, September, or October, and that planning includes the convention post-mortem: Association staff return home and look to their members, sponsors, and vendors to see what went right (and wrong) at this year’s meeting.

Associations Now recently shared ideas for questions that are often missing from post-convention surveys. As someone who’s attended conferences on user experience, content strategy, and journalism, I’d love it if the folks behind the conventions would prompt us attendees with more specific questions about why we even went in the first place. Associations Now says the reason we go is primarily “to make connections and get practical ideas that [we] can implement once [we’re] back in the office,” but is that always the case?

Here are some other suggested questions that can help to give your organization better insight:

  • What were your top goals heading into the conference? Encourage attendees to get specific about why they registered in the first place or what they wanted to achieve, like sitting in on a certain workshop, learning about a new tool, or getting warm introductions to potential clients.
  • How well were you able to meet those goals? Do attendees express frustration about missing multiple sessions because they were programmed within the same time slot? Did they have a hard time getting into a really crowded happy hour event? The answers here can give you insight into how you might adjust the schedule for next time.
  • What sessions/events did you find the most useful for your goals? For making connections with people? These questions drill down into what worked and what didn’t. If a once-popular event drew little traffic this year, it’s time to rethink repeating it next year.
  • What were the most meaningful conversations that you had? What were the most meaningful connections that you made? These questions can be a way to take the pulse of what people were most interested in or concerned about. Maybe it’s an industry-wide issue, or maybe it’s a subject specific to this particular convention.

To make it easier for people to answer these questions, make sure the answer boxes allow a generous number of characters (for the more long-winded respondents) and use a larger, sans serif font.

And it’s always nice to offer an incentive for folks to complete the survey, like a discount code for a webinar or a chance to win free or discounted conference registration for next time.

What other questions do you think should always be asked post-conference?

Rich Frangiamore

Systems Admin

Testing Tools: Free IE tools from Microsoft

Great news for developers and testers everywhere. Recently, on Microsoft’s Modern.IE site (which houses tools and resources for IE devs), the company released free, pre-built, fully functional virtual machines (VMs) specifically built for testing Internet Explorer.

You can download several different VMs, covering versions of Windows from Vista to 8 and versions of IE from 7 to 10. These are fully functional installations of Windows (not emulators) and are optimized for speed. One caveat: each instance is time-limited within a single session.

There are versions prepared for Windows, OS X, and Linux, via VMware Player, VMware Fusion, and VirtualBox, respectively.

What testing resources do you love?

Kelly Browning

Director of Strategy

Comparative Usability Testing DIY Style

When there’s more than one viable design option to consider, comparative usability testing can help you evaluate competing alternatives. This technique is especially useful at the early stages of a design project, because it allows you to explore options rather than getting locked into a single approach prematurely.

Everything is multiplied with comparative usability testing: The effort to design and create prototypes, the time to recruit and schedule participants, the work to facilitate the tests, and the amount of information to interpret and communicate. Some big user experience firms can do comparative usability testing with tons of screens, but most of us don’t have the resources to carry out the process at that level.

Fortunately, comparative usability testing can be effective even when done DIY-style. Applied on an appropriate scale, it can be the perfect technique to help you nail a few key features.

Keep it Simple

When working with limited resources, the key to success is to limit the scope of your design and testing.

  • One feature or screen at a time. If this is your first attempt, consider testing just one feature or screen so you don’t get overwhelmed. After a while, you’ll get a sense of how much your team can handle.
  • Something with a big impact. You might have time to do this only once or twice in the early stage of your project, so make it count. It only makes sense to test an important feature that will make a significant difference to your users.
  • Don’t muddy the waters. For instance, if you’re testing three shopping cart layouts, keep the product details the same so you can focus on what you’re testing: the layout.
  • Just a few variations. Again, to keep things simple, you’ll probably want to test just a few design variations. I’d suggest 3 at most for practical DIY-style comparative usability testing.

Be Consistent

You’ll want to design scenarios and tasks as you normally would in a usability test, with this important guideline in mind:

  • Use the same tasks for every design variation. The tasks for each design option should be identical or as close as possible. If “enter your birthday” causes problems in design A, you will need to know if it’s also problematic in design B.

Avoid Bias

Interpretation is a bit more complicated with comparative usability testing, and it becomes that much more critical to keep your test results pure, so to speak. To this end:

  • Mix the order of the design variations. If design A is always first, couldn’t that affect how people tend to react to design B? Switch up the order from participant to participant to level the playing field (see the sketch below for one simple way to rotate the order).
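
Here’s a minimal TypeScript sketch of one way to counterbalance the order. The design names, participant IDs, and round-robin assignment are purely illustrative, not a prescription:

    // Hypothetical sketch: counterbalance the presentation order of design
    // variations so no single design always appears first.
    const designs = ["Design A", "Design B", "Design C"];

    // Build every possible ordering of the designs (3! = 6 for three variations).
    function permutations<T>(items: T[]): T[][] {
      if (items.length <= 1) return [items];
      return items.flatMap((item, i) =>
        permutations([...items.slice(0, i), ...items.slice(i + 1)]).map(
          (rest) => [item, ...rest]
        )
      );
    }

    const orderings = permutations(designs);

    // Assign orderings round-robin so participants see different sequences.
    const participants = ["P1", "P2", "P3", "P4", "P5", "P6"];
    participants.forEach((participant, i) => {
      console.log(`${participant}: ${orderings[i % orderings.length].join(" -> ")}`);
    });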

And the following best practices deserve special emphasis:

  • Be objective or appear that way. Do you have a horse in this race? If so, avoid comments or non-verbal cues that would influence the participant. Better yet, recuse yourself and have somebody else facilitate the tests.
  • Have another person observe the tests. This is a best practice for all usability testing but especially when you are comparing multiple alternatives. Those competing options may have some egos or politics around them, so it’s important to build objectivity into your process by having at least one other person observe the test.

Have Realistic Expectations

After a few tests, one of two things can happen:

  • A clear winner emerges. This becomes less likely as the features and the testing get more complicated, so it’s not a realistic goal for most tests.
  • There’s no clear winner. This is the most likely outcome. You may see strengths and weaknesses in multiple design variations. This is an opportunity to think critically about what happened and why, and to consider next steps.

Remember: It’s Worth the Effort

Comparative usability testing can help you discover the advantages of multiple design variations. It can be invaluable at the earlier stages of your project, a way to ensure that you are pursuing the strongest design possible, rather than wasting time “perfecting” something that will never be as good as a superior alternative. Even on a shoestring budget, comparing multiple options up front can help you perfect the features that can make or break your product.

As of now, there isn’t much out there about DIY comparative usability testing (also called “competitive” usability testing). However, a few more formal research studies and other relevant resources may be of interest to you.

What kinds of DIY methods have you used/discovered when on a limited design budget?

Wesley Harris

Software Tester

Troubleshooting Google Analytics Tracking Code: There’s a Chrome Extension for That

ga_debug.js is a pretty rad tool for troubleshooting the Google Analytics tracking code in your development or testing environment. But Google slaps this big fat caveat onto the documentation:

Important: You should not modify your production site to use this version of the JavaScript. The ga_debug.js script is larger than the ga.js tracking code and it is not typically cached. So, using it across your production site will slow down your site for all of your users. Again, this is only for your own testing purposes.
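
If you do want ga_debug.js in a development or staging environment, one option is to swap it in conditionally. Here’s a minimal TypeScript sketch, assuming the classic asynchronous _gaq snippet; the hostname check and the UA-XXXXX-Y account ID are placeholders, so adapt them to your own setup:

    // Hypothetical sketch: load ga_debug.js outside production, ga.js in production.
    // The _gaq command queue is the classic (ga.js-era) Google Analytics API.
    const _gaq: any[] = ((window as any)._gaq = (window as any)._gaq || []);
    _gaq.push(["_setAccount", "UA-XXXXX-Y"]); // placeholder account ID
    _gaq.push(["_trackPageview"]);

    (function () {
      const onProduction = window.location.hostname === "www.example.org"; // placeholder check
      const ga = document.createElement("script");
      ga.async = true;
      ga.src =
        (document.location.protocol === "https:" ? "https://ssl" : "http://www") +
        ".google-analytics.com/" +
        (onProduction ? "ga.js" : "u/ga_debug.js"); // debug build logs each hit to the console
      const first = document.getElementsByTagName("script")[0];
      first.parentNode?.insertBefore(ga, first);
    })();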

A key aspect of our quality practice at Matrix Group is some final testing in our production environment (non-destructive, duh). Developers and SysOps will SWEAR that a deployment was flawless, but I don’t believe it until I see it (sorry fellas, I’m a skeptic).

Browsing the site and waiting for analytics data to show up is a suboptimal solution to this dilemma. The data won’t appear immediately, and at Matrix, we don’t track traffic from our IP range anyway.

So how do we verify that Google Analytics tracking is working properly on the live site? Enter the Google Analytics Tracking Code Debugger (*trumpets*).

This extension for Google Chrome (we’ll call it GA Debug) enables ga_debug.js without having to serve it from your production environment.

Installation is simple. Using Google Chrome, navigate to the extension’s entry in the Chrome Web Store and click Add To Chrome.

Now you’ve got the GA Debug button in your Chrome toolbar. Congratulations.

Click it to enable ga_debug.js. Open up the web developer tools (on my Mac it’s View > Developer > Developer Tools), and click the Console tab to get the JavaScript console.

Let’s see what they’re tracking over at I Can Has Cheezburger:

[Screenshot: ga_debug.js output decoded in the Chrome JavaScript console]

The extension has very helpfully parsed out and decoded the otherwise nigh-inscrutable parameters to the GET request to __utm.gif, which is the actual tracking beacon. Looks like they are tracking a custom variable PageType, which in this case has the value Index. Hopefully that’s what they were expecting.

Grrr, but now we have another problem. For one of our clients, we do some fine-grained testing of page layouts, which in practice means tracking clicks on items in sidebars and other content blocks as events. If those clicks open a new page (ours do), you’ll need to be Mr. Speedy to read the message on the console before the new page clobbers it.
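
For context, that kind of click tracking looks roughly like the following TypeScript sketch. It’s a hypothetical example assuming the classic _gaq API; the selector, event category, and delay are made up for illustration, not our client’s actual code:

    // Hypothetical sketch: record sidebar clicks as GA events, then navigate.
    const _gaq: any[] = ((window as any)._gaq = (window as any)._gaq || []);

    document.querySelectorAll<HTMLAnchorElement>(".sidebar a").forEach((link) => {
      link.addEventListener("click", (event) => {
        event.preventDefault();
        _gaq.push(["_trackEvent", "Sidebar", "click", link.href]);
        // Give the tracking beacon a moment to fire before the new page loads
        // (and, without "Preserve log", clobbers the console output).
        setTimeout(() => { window.location.href = link.href; }, 100);
      });
    });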

The solution is to persist the console log on navigation, which helpfully is an option in the developer tools settings. On my Mac, I found these settings by clicking the little gear in the bottom-right corner of the developer tools. In the Console section, tick the box for “Preserve log upon navigation.”

If you aren’t using Google Chrome, WHY AREN’T YOU USING GOOGLE CHROME??? OK, fine: it looks like there’s a Firefox extension that accomplishes something similar.