Ever wonder how software ends up working the way it does (or doesn’t), and what its developers were thinking when they created it? Not satisfied with how your software operates and want to do something about it? You might be interested in learning about, and participating in, usability studies.

Start With the End Goal in Mind

Judging by some of the common industry buzzwords surrounding the software development process, such as usability, intuitiveness, and learning curve, software should end up being:

  • Usable: The user needs to be able to use the software to complete a given task.
  • Intuitive: The user should be able to grasp how the software operates almost immediately, instinctively rather than through conscious reasoning.
  • Easy to learn: The software should build on the user’s existing experience and skills, leading to a short learning curve and quick adoption of its methodology and operation.

Usability Studies

Now that we have an understanding of how to frame the software development cycle, let’s look at a process that helps ensure the usability, intuitiveness, and learnability of the finished software product: the usability study.

A usability study, which Abila conducts on an ongoing basis, is typically a one-on-one session between a usability analyst and a volunteer test participant. The analyst walks the volunteer through a series of software tasks to see how well the software works for the participant. It is important to remember that this isn’t a test of the volunteer’s ability to use software, but rather a test of the software’s usability, intuitiveness, and learnability.

The usability analyst records his or her observations and any other feedback provided by the volunteer. Normally, a usability study comprises eight to 10 sessions on a specific piece of software functionality, with a different volunteer for each session. After all the sessions are complete, the usability analyst compiles the collected data and provides the software developer and the entire development team with specific recommendations and examples of how the software can be improved.

This feedback can include:

  • Usability issues: Observations on challenges the volunteer faced while using the software to complete a task.
  • Runtime issues: Reports of instances where the software did not operate as expected or designed.
  • Enhancement requests: Lists of improvements cited by the volunteer that would make the software easier or more productive to use.

Once all the data is compiled and communicated, the goal becomes making the software easier to use for users of any skill level.

Last year, the Abila Usability Team conducted 44 usability studies (144 percent growth over FY 2014), with 412 volunteer test participants (161 percent growth over FY 2014).

AUDC 2016

The Abila User and Developer Conference (AUDC 2016), slated for March 3-5, is an ideal opportunity for us to gather feedback from clients. During the annual event, much of the activity will focus on the interaction between our software users, our software, and our developers. A number of usability studies will be conducted as part of our ongoing effort to improve Abila software usability, and we’ve even planned a special reception for those who participate in our studies. Take a look at our comprehensive Session Guide for details.