Friday 17 June 2011

Aims and objectives: BHO usability case study

Goal
Identify strategies to mitigate information overload across three generic types of site facility: category listings, product listings, and search (form and results). This will enable the project to show an improved return on investment to all of its funding sources by leading to greater and more informed use of the site as a whole, strengthening its case for sustainability.

The choice of goal maximises the impact of the project by increasing return on investment for BHO (essential for its own funding arrangements), and by giving insight to other resource owners, since these functions are generic and implemented widely across the field (part of the Institute of Historical Research's broader remit to encourage innovation in research). Our primary outcome is a measurable improvement in click-through ratios for each function; our secondary outcome is a set of recommendations for building the identification of usability issues into the ongoing managerial process behind British History Online (i.e. adopting lessons learned).

Success measures
Produce evidence of improved quantitative ratings and qualitative feedback on revised designs in each of the areas under review, and reflect on the specific conditions under which the tools and techniques used generate the most value. Success will be measured by evaluating the difference in successful click rates, and by qualitative measures such as annotation tests and the System Usability Scale (SUS).
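
To make the arithmetic behind these measures concrete, here is a minimal Python sketch of the two quantitative calculations: standard SUS scoring (ten 1-5 Likert responses reduced to a 0-100 score) and the difference in successful click rates between the current design and a revision. The function names and figures here are illustrative assumptions, not project data.

```python
def sus_score(responses):
    """Standard SUS scoring for ten 1-5 Likert responses.

    Odd-numbered items are positively worded and contribute
    (response - 1); even-numbered items are negatively worded and
    contribute (5 - response). The sum is scaled by 2.5 to give a
    score between 0 and 100.
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i is 0-based
    return total * 2.5


def click_success_rate(successful_clicks, attempts):
    """Proportion of click-test attempts that hit the intended target."""
    return successful_clicks / attempts


# Hypothetical figures: one participant's SUS responses, and click
# tests on the current search results page vs the revised design.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))               # 85.0
print(click_success_rate(46, 80) - click_success_rate(31, 80))  # 0.1875
```
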

All the usability components outlined in the Approach section will be used to baseline performance during the initial analysis phase. After prototyping, testing will move to remote methods, but these will include both quantitative and qualitative strands, with the intention of comparing the two sets of results.

The measures are clear enough to be understood by different roles within the organisation, i.e. they can be used to justify change to business managers as much as to indicate development areas to the information architect or developer. Doing the research within the project means that the ambition of the proposed changes is realistically linked to the resources the project has at its disposal, leading to recommendations that are practicable to implement.

The project could have looked at a set of websites rather than just one; however, where user outcomes cannot be compared, it becomes impossible to judge where resources should be assigned to maximum effect (are two medieval historians better than one early modernist?).

The project could also have focussed wholly on canvassing either qualitative or quantitative feedback and extended the depth of consultation. However, that would be to assume that what people say and what people do are materially equivalent, which is not necessarily true.

Approach
The following techniques will be used throughout the project: interviews, remote testing (e.g. click, annotation, and labelling tests of system designs), user groups, and the SUS. The table below shows which techniques capture what people say and which capture what people do, in each phase.

                      What people say              What people do

Initial analysis      • Individual interviews      • Click tests
                      • User group
                      • SUS

Post-prototyping      • Annotation tests           • Click tests
                      • SUS                        • A/B tests

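Since the post-prototyping A/B tests only tell us something if the difference in click rates is larger than chance variation, a significance check is worth building in. The following is a minimal sketch of one standard approach, a two-proportion z-test; the counts are hypothetical, and the project may well settle on a different method.

```python
import math


def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test for comparing click rates.

    Returns (z, p_value). Relies on the normal approximation, so
    each variant should have a reasonable number of both successes
    and failures (a common rule of thumb is at least 10 of each).
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via the error function; p-value is two-tailed.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value


# Hypothetical counts: 150/400 successful clicks on the prototype
# category listing vs 120/400 on the current page.
z, p = two_proportion_z_test(150, 400, 120, 400)
print(z, p)  # p < 0.05 would suggest the improvement is not noise
```
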
The initial analysis phase will result in a number of identified usability issues, which will be presented as report cards. The report card device is easily understood, and it lends itself not just to reuse in other projects but also to serving as a starting point for discussion of usability issues. This may be critical to the widespread recognition of usability as a core component of academic information service provision.

Thursday 16 June 2011

Reflections on practice following first interview

I finished my first user interview yesterday, and a few things came up that I thought other practitioners might benefit from.

Reviewing the current state of the site went well: a lengthy and detailed discussion, focussed on page outcomes, with only a skeletal set of prompts needed from me. I started with the home page, which was outside the scope of the project, and found that the subsequent discussions of the in-scope pages went faster because the interviewee was already aware of the topics I was interested in and approached them straightaway.

We have a set of proposed changes to pages that we came up with here - I went through these, asking the interviewee just to stop me if they felt strongly about something. Make sure that you really separate yourself from your feelings about the project (not easy if you've been working on it for a long time) - avoid the temptation to say 'you agree with this idea, don't you?', because any assent here would be based on pressure rather than a real recognised need.

My point here is that the project will need to prove that it didn't try to gain a mandate for something which wasn't volunteered by the interviewee. This evidence could take the form of expanded quotes from the interview, approved by the interviewee; a record of whether a planned improvement was already on file before the usability consultation took place; or a ranking system. Whilst this is quite a fine point, it signals whether the investigation is truly objective in practice.

And, if you can, record the interview and play it back whilst you are writing up the notes, as it's easier to reflect on the sense and meaning of the points made. You can also rate your own performance and think about other ways to pose questions, making interviews a more powerful part of your toolkit.

Monday 6 June 2011

Programme Meeting for usability

Last Wednesday, we had a JISC programme meeting which all of the projects attended.

There were some really high-profile projects there, but I was surprised that so few 'researchers' attended - it was dominated by software professionals (I should probably include myself in that description!). Therein lies the first observation, about the current perception of usability: that it is seen as an engineering discipline. This situation really isn't healthy - usability affects the success of the whole project, initially and throughout its life.

Now, I consider myself (correctly or otherwise) a proficient user of systems, meaning I can work my way around most things and am perfectly content to read help files to that end. However, during the presentations, I got the distinct feeling that I'd be pretty much lost if I had to take a usability test on these other websites and products. So my second observation is to be really careful about who we invite to take tests. We need intelligence, not statistics - and that means being able to segment the feedback along research classifications. Are we, as engineers, the right people to be left doing this?

This programme is looking at taking usability and proving its worth to a set of people (researchers) who may not even know it exists as a field. At the moment, they don't even speak the language, and monitoring how that changes will be an interesting secondary benefit of undertaking this research.