In article <ubabj.28682$_m.21896@xxxxxxxxxxxxxxxxxxxxxx>,
Linonut <linonut@xxxxxxxxxxxxx> wrote:
> > Here's a description of the process:
> >
> > <http://www.microsoft.com/usability/studies.mspx>
> >
> > What's invalid about that?
>
> 1. The entity doing the testing is highly interested in the outcome.
>
> 2. The tested entity must sign a non-disclosure agreement.
>
> 3. The test engineer includes subjective information in his
> observations.
>
> 4. The payment is a company product, as opposed to the small sum of
> money that is customary payment in lab studies.
>
> 5. The criterion for the assessment is not explained (perhaps it is
> in the follow-on links)
>
> 6. There are no hypotheses stated (to be capable of refutation).
>
> 7. The statistical analysis (data plot layouts and analytical
> methods) is not explained.
>
> While this may be a somewhat valid method of providing some guidance
> in product development, it is hardly a study. Microsoft may well find
> itself fooled by its own biased observations.
It looks like you are hung up on the word "study". That's just what
this kind of testing is usually called, although I've also seen the
term "usability test" used. It's not a study like what, say, drug
companies do when testing a new drug.
And yes, it is for product development--usually, for developing the
user interface to a product. There is no hypothesis to refute--you are
just trying to see whether people have trouble with your proposed
product. Typically, you give people lists of tasks you want them to
perform and just watch them (and listen to them, if you've asked them
to talk out loud about what they are thinking as they try to figure it
out).
So, if you were, say, amazon.com, testing some changes to your web site,
you might give them tasks like:
Find a hardback edition of "Moby Dick", and purchase it, using
credit card 4512345678901234, with shipment to 123 Fake Street,
Seattle, WA, 98102.
and then you watch to see if anything gives people trouble. You might
notice, for example, that people have to poke around a bit to find the
book search among all the other categories, that they get confused
about how the site handles shipping addresses, or that they can't
figure out how to find different editions of the book.
It's not something that you are doing so you can write a journal paper.
What you get out of it are observations like "Gee...a lot of the people
found the paperback, but couldn't find the hardback...we need to find a
way to make it easier to find other editions".
You aren't going to need much statistical analysis. At most, you just
need simple summaries of how many completed the task, and where the
failures occurred. But mostly what you are getting is a chance for your
engineers to *see* people using the product.
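The sort of simple summary described above can be sketched in a few
lines of Python. This is just an illustration--the task names, outcomes,
and "stuck_at" labels are made up, not from any real session:

```python
from collections import Counter

# Hypothetical results from a usability session: one record per
# participant per task, noting whether they completed it and, if not,
# where they got stuck.
results = [
    {"task": "find hardback", "completed": False, "stuck_at": "edition list"},
    {"task": "find hardback", "completed": True,  "stuck_at": None},
    {"task": "checkout",      "completed": True,  "stuck_at": None},
    {"task": "checkout",      "completed": False, "stuck_at": "shipping address"},
    {"task": "find hardback", "completed": False, "stuck_at": "edition list"},
]

# How many attempted and completed each task.
totals = Counter(r["task"] for r in results)
done = Counter(r["task"] for r in results if r["completed"])
for task in totals:
    print(f"{task}: {done[task]}/{totals[task]} completed")

# Where the failures occurred, most common first.
failures = Counter(r["stuck_at"] for r in results if not r["completed"])
print(failures.most_common())
```

That's about as much "statistical analysis" as a test like this needs:
counts per task, plus a tally of the spots where people got stuck.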
--
--Tim Smith