June 1, 2009

A Bit More About Blind Listening Tests
Last month, when I
wrote about the benefits to audio reviewing of blind listening tests, I ended by saying
that although I believe strongly in such tests, they're not always easy or
practical to conduct -- which is why, in the past, we haven't conducted these tests
in the way I'd like. This article details some of the challenges involved in
conducting blind tests, and explains what we're currently considering and working on
to overcome them.
Environment and setup
Two of the biggest obstacles to conducting blind listening
tests are creating the proper listening environment and having a proper setup. The room
must be of sufficient size and quality that the component under test remains the focus and
isn't overshadowed by the anomalies of the room. As well, the component being
evaluated must be integrated into the system without adding variables to the
setup, or modifying the component itself, in ways that could obscure the results. Basically, you
want to be able to assess only the component under evaluation.
The testing of source components, cables, and other
electronics doesn't present that many obstacles -- it's not difficult to set up
a test that's fair to the component under test and produces results that hold up
under scrutiny. I've done blind tests of CD players, D/A converters, and cables in my
own room, using the products exactly as their designers intended them to be used, and with
little modification of my system. The results have been quite telling. For example, we
recently connected two DACs to one CD transport and one preamplifier, matched the volume
levels, and were able to switch between the DACs while a single CD played on the
transport, without knowing which DAC was which. We've tested some cables with a similar
setup.
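To make the mechanics concrete, here is a minimal sketch, in Python, of how the randomizing and scoring of such a blind A/B comparison might be run. Everything in it -- the labels, the trial count, the prompt -- is illustrative rather than a record of our actual procedure, which was done by hand with a helper operating the switch.

```python
import random

# A minimal sketch of a randomized blind A/B session. All names and the
# trial count are illustrative; this is not the procedure we actually used.
LABELS = ("A", "B")  # the two level-matched DACs
NUM_TRIALS = 10

def run_blind_ab(num_trials: int = NUM_TRIALS) -> int:
    """Randomize which DAC plays each trial and score the listener's guesses.

    A helper who never reveals the assignment performs the switching;
    the listener only ever sees the prompt below.
    """
    correct = 0
    for trial in range(1, num_trials + 1):
        actual = random.choice(LABELS)
        # In a real session, the helper would now switch to DAC `actual`.
        guess = input(f"Trial {trial}: which DAC is playing? (A/B) ").strip().upper()
        if guess == actual:
            correct += 1
    print(f"{correct}/{num_trials} correct identifications")
    # Pure guessing averages 50%; a listener who truly can't hear a
    # difference should score near num_trials / 2 over many trials.
    return correct

if __name__ == "__main__":
    run_blind_ab()
```

The essential point the sketch captures is that neither the listener nor the scorer's prompt reveals which component is playing; only the randomized assignment, hidden until the end, determines the score.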
Not all components are easy to test this way; certainly,
speakers aren't. Provided the room is appropriate, the biggest obstacle in testing
speakers is setting them up -- the sound a speaker makes is highly dependent on where
it's placed in the room. Furthermore, no two speakers can occupy the same space at
the same time; even if you were evaluating identical speakers, there's a good chance
the two samples would sound different solely by virtue of their placement in the room.
This is a difficult issue to overcome, and few solutions
have been ideal. Canada's National Research Council (NRC) deals with it by using
multiple trials in which the speakers' positions are changed; the speakers end up being
listened to from different points in the room. Listeners rate each speaker's sound in
each position, and the results are then usually averaged. It's not perfect, but it
works fairly well, giving a broader representation of how the test speakers perform. But
there are better solutions.
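For readers who want to see the averaging idea spelled out, here is a toy Python sketch with made-up numbers; the NRC's actual rating scales, trial counts, and statistical treatment are more involved than this.

```python
from statistics import mean

# Made-up ratings purely for illustration (speaker -> one rating per
# position tried); the NRC's actual scales and statistics are more involved.
ratings = {
    "Speaker X": [7.5, 8.0, 7.0, 7.8],
    "Speaker Y": [6.5, 8.5, 7.2, 6.9],
}

for speaker, scores in ratings.items():
    # Averaging across positions keeps any single placement from
    # dominating the overall verdict.
    print(f"{speaker}: mean rating {mean(scores):.2f} over {len(scores)} positions")
```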
The folks at Harman International, who are big believers in
blind tests, recognized this limitation of speaker testing years ago and took it into
consideration in the design of their current listening room. Harman built what they refer
to as a "speaker shuffler": a fully automated system that, with the flick of a
switch, moves one speaker out of a certain spot and the next speaker into that same spot.
Therefore, all speakers are listened to in the exact same position. Smart -- but also
expensive, and difficult to build. To my knowledge, Harman is the only company that has
such a thing, and, as far as I can tell, it's the best solution so far.
In Harman International's blind-listening room,
test speakers are listened to from behind a visually opaque but acoustically transparent
curtain, unseen by listeners; an automated "speaker shuffler" moves test
speakers into and out of position, so that each occupies precisely the same spot in
the room.
When it comes to setting up the SoundStage!
Network testing environment, we're carefully considering all of this to ensure that,
when we do blind tests, we have the best setup possible, that products are assessed
fairly, and that the results are valid. I plan to visit Harman's facility to evaluate
their speaker-testing system before we finalize the room here, and some other component
categories will present their own challenges. Issues such as these are why, up till now,
we at the SoundStage! Network haven't been able to implement blind testing as we
would have liked to -- it's not something you just set up and run with.
While assessing these challenges, we also recognize that
the typical testing done today by the "sighted reviewing community" doesn't
hold itself to the same standards or rigor. I've often read reviews of
speakers that were assessed in the wrong size of room (a small speaker in a very big room, or a
big speaker in a small one), or of components whose reviewers hooked them up in such a way
as to introduce more variables than the component itself did, making the results of
their testing even more suspect. I find it ironic that so many are happy to poke holes in
blind tests, no matter how carefully set up, but so few criticize sighted tests, which can
have so many flaws that they're laughable.
Proximity and convenience
Another problem has to do with the number of blind-testing
rooms we can afford to set up and the number of people who will do the listening,
particularly when our writers are so widely scattered across North America. We will be
fortunate if we can get one room set up well, but that will greatly limit the number of
people who can listen there.
The only thing I can hope for is that the idea of blind
testing will appeal to enough reviewers and readers that they'll be willing to take
the time to come and listen, and that we'll be able to afford to bring them here. I also
hope that blind testing will bring more credibility to our publications, and that the
resulting growth will let us set up more rooms. For now, the focus is on
creating one listening/testing room.
Resistance
The last obstacle to point out has to do with the
surprising level of resistance still met by even well-thought-out blind tests. After my
first article, in which I merely discussed the blind-testing process, went live on
May 1, there was a flurry of activity about it on the various audio forums. Some posters
were appreciative of the article, but many others weren't so kind. In fact, some of
the posts were quite hostile, and interactions among some participants got nasty. What
most interested me was how emotional some people get over this subject.
You'd swear I was writing about politics or religion, not the evaluation of audio
components.
Therefore, if we do more of these -- which I believe
will be a great thing for the reviewing community -- I suspect we'll see the same
sorts of responses: some will applaud our efforts, others will dismiss them, and still
others will react with anger. But while the latter two responses will be obstacles, there's
nothing we can do about them. Besides, our goal should not be to try to convert to our way
of thinking those who are steeped in their own. Rather, our goal is to present the best
information we can to our readers, let them do with it as they please, and let the cards
fall where they may. Blind testing will help us do that.
To be continued . . .
In these two articles, I've talked as openly as I can
about the subject of blind testing -- the first detailed why I believe in the methodology,
and this one has focused on some of the obstacles we have to overcome in order to do it in
a meaningful way. But at least for now, I think, I've said enough on the subject. My
next step is to work behind the scenes to make blind testing happen here at GoodSound!,
and perhaps elsewhere in the SoundStage! Network, at some time in the
future. The time frame for implementation will depend largely on how difficult some of
these obstacles are to overcome. Look for more updates on this topic, likely in the fall.
. . . Doug Schneider
editor@goodsound.com