|
Post by jamesgasson on Feb 8, 2017 20:46:12 GMT
Hi Ethan - I am a big fan of yours and I think the work you do in debunking audio mythology is incredibly inspiring, not least because it allows aspiring engineers to gain confidence in their own subjective perception rather than be intimidated by quackery and marketing bullshit. I would readily cite you as a significant influence on the principles of what I do for a living, and for this I thank you!
In particular I loved your ADC loopback tests, and, following a particularly frustrating conversation with a colleague about the importance (or not!) of expensive microphone preamps, I am very keen to put him - as well as music students at the university where I work - to the test. Of course, on mention of this he immediately discounted the validity of such listening tests with squinted eyes and incoherent rambling about why such a test "just wouldn't work". Nevertheless, I would very much like to run a series of perceptual experiments, inspired by your loopback tests, that test perceived differences in the following areas:
- 16 vs 24 bit
- 44.1 vs 48 vs 96 vs 192 kHz
- Cheap vs expensive microphone preamps
- Recording/rendering in Pro Tools vs Cubase
And so I guess the reason I'm telling you this is to see whether you have any opinions on how best to set up such experiments to keep them as scientifically valid as possible. When it comes to testing audio files of different resolutions, the idea of having test subjects download them at their leisure becomes void, since they can simply inspect the properties of the files. I'd love to know if you have any thoughts on any of this! Kind regards, James.
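That concern is easy to demonstrate: a WAV file's header states its sample rate in plain sight, so a curious participant can read it without listening to a single sample. A minimal stdlib-only Python sketch (the file contents here are just silence, for illustration):

```python
# Sketch of why self-administered download tests leak information:
# the WAV header carries the sample rate. Standard library only.
import io
import wave

def make_wav(sample_rate, seconds=1):
    """Build a silent 16-bit mono WAV in memory at the given rate."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)            # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(b"\x00\x00" * sample_rate * seconds)
    buf.seek(0)
    return buf

# Any participant can "cheat" on a sample-rate test like this:
with wave.open(make_wav(192000), "rb") as w:
    print(w.getframerate())          # prints 192000
```

Every common player and file manager exposes the same header field, which is why the files would need identical containers (or a supervised setting) for the test to stay blind.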
|
|
|
Post by arnyk on Feb 10, 2017 13:16:19 GMT
jamesgasson wrote: "I would very much like to run a series of perceptual experiments, inspired by your loopback tests ... I'd love to know if you had any thoughts about any of this!"
Good starting point: Testing Methods. Look at some of their threads and FAQs - these people know how to do this sort of thing right.
|
|
|
Post by Ethan Winer on Feb 13, 2017 17:15:32 GMT
jamesgasson wrote: "I guess the reason I'm telling you this is to see if you have any opinions on how to best set up such experiments to keep them as scientifically valid as possible."
First, thanks for your nice comments. Second, yes: people who hold beliefs that they know deep down are probably wrong always have an excuse for why their beliefs can't be tested.
The key to all of your test proposals is to use the same source. You can't record someone singing at 44.1 kHz and then singing a second time at 96 kHz. So to compare bit depths or sample rates you need to record the same source into two different systems using the same hardware and software. This is tricky, and expensive, so you could instead record at 192/24, then downgrade the files to 96/24, 96/16, 44.1/16 etc. and compare the files blind.
Comparing preamps is tricky but possible if you average a number of tests. See the "Page 90" addition to my Audio Expert book's web page: ethanwiner.com/book_errata.htm
Same for comparing DAW software. You need to split the microphones or other sources with "Y" splitters to identical systems with the same brand and model sound card. Or you can use re-amping to record the same source through a loudspeaker. All of this is described in my Audio Expert book.
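The bit-depth half of that downgrade can be sketched in a few lines of Python. This toy version simply rounds away the low 8 bits of each sample; real converters (sox, ffmpeg, any DAW) would also add dither before quantizing:

```python
# Minimal sketch of reducing signed 24-bit integer samples to signed
# 16-bit. Round, drop 8 bits of precision, clamp to the 16-bit range.
# Illustration only: a real converter would dither first.
def reduce_24_to_16(samples):
    """Map signed 24-bit integer samples to signed 16-bit integers."""
    out = []
    for s in samples:
        v = (s + 128) >> 8           # round to nearest 16-bit step
        out.append(max(-32768, min(32767, v)))
    return out

print(reduce_24_to_16([8388607, -8388608, 256]))   # [32767, -32768, 1]
```

The point of doing this offline, rather than recording twice, is exactly the one made above: both files come from the identical capture, so the only variable left is the word length.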
|
|
|
Post by jamesgasson on Feb 14, 2017 16:01:58 GMT
Thank you both for your input. Very much appreciated. I am in the process of creating a website that consolidates all the information and features a self-test questionnaire, whereby participants can play two audio files side by side and easily flip between them, followed by a multiple-choice question about their perception. This test will be repeated a number of times, and the results delivered to me for analysis.
The problem here is in having two files at different sample rates playing simultaneously. I'm not entirely sure how most machines handle this. Is it possible for a machine to natively play two different sample rates at once? Moreover, if I play a 192 kHz file in Windows through a generic PC sound chip, does it actually play at 192? Or does it get downsampled to the Windows default? I also worry that many audio interfaces display the sample rate at which they are currently playing back. It's for these reasons that I wonder whether the only way a sample rate comparison is truly possible is by physically setting up the experiment in a controlled environment and having participants come in to be tested. This method also presents certain challenges. Alternatively, I wonder whether it may be viable to downsample all recordings to 44.1, with the justification that all music ultimately ends up at this sample rate anyway, but this is slightly troubling, since it somewhat defeats the object of the test, which is to tell the difference between 44.1 and 192.
As for the recordings themselves, it seems to me that a good way to approach this is to create a huge bank of very high resolution recordings - everything from female vocals to full drum kits, from low volume to high volume - and play these back individually through a speaker. I will then place a microphone in the room and record several passes of the playback of all of these recordings, always with an identical signal path, flipping only the variable in question (sample rate, bit depth, etc.). Do you think this methodology would be sufficient, or do you think there would be those who would bemoan this approach for not being a "real" recording of an actual instrument, and therefore not a "real world" test? J
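For what it's worth, what an OS mixer does when a file's rate doesn't match the device rate is conceptually simple: it resamples everything to one rate. A toy linear-interpolation resampler shows the idea (real mixers and DAWs use proper band-limited filters, so this is only a sketch):

```python
# Toy sample rate converter using linear interpolation, to illustrate
# what happens when e.g. a 48 kHz stream is played on a 96 kHz device.
# Real resamplers apply band-limited (sinc-based) filtering instead.
def resample(samples, src_rate, dst_rate):
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate        # position in the source
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)
    return out

# A 4-sample ramp at 48 kHz becomes 8 samples at 96 kHz:
print(resample([0.0, 1.0, 2.0, 3.0], 48000, 96000))
# -> [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.0]
```

This is why playing the "192 kHz" file through a shared-mode OS mixer may not deliver 192 kHz to the converter at all: the mixer silently converts to whatever the device is set to.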
|
|
|
Post by arnyk on Feb 14, 2017 19:51:58 GMT
jamesgasson wrote: "Do you think this methodology would be sufficient, or do you think that there would be those who would bemoan this approach for not being a 'real' recording of an actual instrument, and therefore not a 'real world' test?"
One fairly common way to handle this comparison is to start out with a so-called high-resolution recording, downsample it, and then upsample it back to its original sample rate. The comparison is then of two recordings that appear to be identical in every obvious way, but in fact one of them has suffered the purported indignities of the so-called low-resolution format.
Please be aware that you are about 10,000 miles from doing any work that hasn't been done before, dozens of times, in many different ways, by better-equipped people who obtained pretty consistent results - no audible differences. There are a number of ABX comparators on the web that can facilitate your work, starting with the ABX plugin for the foobar2000 music player.
Here is a paper that summarizes a goodly number of previous attempts, and it turns out to be a freebie download even though most of the papers on the site cost money: www.aes.org/e-lib/browse.cfm?elib=18296 This paper has failed to convince many skeptics due to its procedural weaknesses. Here is an article that summarizes the current scientific consensus: people.xiph.org/~xiphmont/demo/neil-young.html
It may seem that it would be easier to redo the experiments yourself than to engage in proper scholarship and find and understand what has been done before, but in the end probably not so much. Well, when it's over you might have more appreciation for the phrase "Fool's Journey". ;-)
|
|
|
Post by jamesgasson on Feb 14, 2017 21:04:58 GMT
Great comments again, and thank you for pointing me towards academic literature on the topic. That's all very useful information.
Yes, I appreciate that these tests have been run before, somewhat conclusively, but nonetheless it's still an interesting and enlightening endeavour to experience such things first hand, not least because it provides a concrete, first-hand basis for future conversations on the topic. I'm sure it will also be a fun and useful exercise for many of the music students at the university where I work, as demonstrating by experiment often drives the point home far better than simply reading online articles, and so, in the interest of disseminating a scientific mindset and the propensity to question received wisdom, I'd say that this is one "Fool's Journey" that I'm only too happy to embark on. ;-)
|
|
|
Post by rock on Feb 15, 2017 0:59:24 GMT
I'll chime in here: in my very humble opinion, I don't think it's a waste of time, if that's what's implied by "fool's journey"... (although I believe the term more commonly used to describe that is "fool's errand", but never mind).
Many theories have been tested over and over again (relativity etc.), so if a demonstration of a known principle is used as a teaching aid, I'd say go for it!
Cheers, Rock
|
|
|
Post by arnyk on Feb 15, 2017 4:09:06 GMT
rock wrote: "Many theories have been tested over and over again (relativity etc.), so if a demonstration of a known principle is used as a teaching aid, I'd say go for it!"
Please give a short account of the tests like these that you have personally performed.
|
|
|
Post by rock on Feb 15, 2017 13:32:22 GMT
|
|
|
Post by arnyk on Feb 15, 2017 16:11:11 GMT
My point, which you have doubly illustrated by not responding conclusively, is this:
(1) You are not speaking from personal experience. You've fallen into the trap of trying to get other people to do something that you have not done yourself. And if there is any confusion, I have a long and public track record of performing all kinds of double-blind tests: on my own, working with other individuals, and with and in small groups. I've done this particular test many times in all of these ways, starting in the late 1990s. Starting in 2001 I had a web site where people could, among other things, download the software for running tests, files for tests, and detailed instructions. You can still find it on the Wayback Machine as www.pcabx.com.
(2) The problem with tests relating to sample rates and data formats in excess of the standard CD format (44.1 kHz sampling, 16-bit LPCM) is that they can reasonably be expected to have negative outcomes, which is to say no conclusive outcome at all. You get the same results as if you had made any of dozens of different mistakes, even though your tests may be just fine.
Here's the point. All of the other tests you have mentioned so far have positive and conclusive results. They make fine instructional tools because of their positive results. If you get negative or inconclusive results, it is a strong indication that you made one or more mistakes.
To repeat: all of the interesting tests related to high-resolution audio, if done right, will have negative and inconclusive results. They make horrible instructional tools because of their negative and meaningless results. If you get negative or inconclusive results, it is only a suggestion that you didn't make any mistakes.
A good example of what I'm saying: follow the relatively simple instructions on that site properly and you will obtain a positive and conclusive result. This is good instructional practice and will generate positive reinforcement, especially for people who are starting out with this kind of experiment. In contrast, the audio tests that people are talking about above will fail on all of these critical points. They will be frustrating, lead to self-doubt, not be very encouraging in general, and will not build up confidence in yourself or in science in general. I've personally seen many people fail at this kind of audio experiment and veer away from good scientific thought and practice in general.
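For what it's worth, whether a run of ABX trials counts as "positive and conclusive" is plain binomial arithmetic: under the guessing hypothesis, each trial is a fair coin flip. A stdlib-only sketch (the 13-of-16 figure is illustrative, not a result from any test mentioned here):

```python
# Probability of scoring k-or-more correct out of n ABX trials by pure
# guessing: a one-sided binomial test with p = 0.5 per trial.
from math import comb

def abx_p_value(correct, trials):
    """P(X >= correct) for X ~ Binomial(trials, 0.5)."""
    total = sum(comb(trials, k) for k in range(correct, trials + 1))
    return total / 2 ** trials

# 13 correct out of 16 happens by chance only about 1% of the time,
# so it is conventionally taken as a positive result:
print(round(abx_p_value(13, 16), 4))    # 0.0106
```

This is also why a null result is so hard to interpret: a score near chance is consistent both with "no audible difference" and with any number of procedural mistakes, exactly the asymmetry described above.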
|
|
|
Post by rock on Feb 15, 2017 19:29:05 GMT
Well alright then.
|
|