A sound wave has an intensity that is proportional to the square of the wave's pressure amplitude. This amplitude is a gauge amplitude indicating how much the air pressure goes above and below the local atmospheric pressure. The sound intensity can also be expressed in dB using as reference the pressure amplitude p_min of the weakest sound barely audible to humans. A microphone connected to sound-processing software converts the pressure of a sound wave into an analog voltage signal, which the software displays in terms of its time-varying intensity. The analog voltage signal is sampled (the higher the sampling rate the better) and converted to a digital signal of a certain bit depth (8, 16, 24 bits, etc.; the higher the bit depth the better). The software then represents the sound intensity I in dB values with 0 dB corresponding to the largest representable intensity (or pressure variation). Lower pressure values are assigned negative dB values. How can we set up the software/microphone so that the dB intensity values the software reports are the same dBs commonly provided by a noise meter, i.e. dB values referenced to the faintest audible sound? What kind of calibration is necessary? I know the microphone and software have gains, etc... What should I be tweaking?
If I understand your question, I believe you want to calibrate your own microphone. So first you need a basic SPL meter. Then play pink noise through a loudspeaker and set the volume to read some level on the SPL meter. Then put your own microphone in the exact same place as the SPL meter, and set your preamp to get the same dB reading on the preamp's meter. If your preamp doesn't have a meter, you'll need to send the preamp into something else with a VU meter. And then you need to note that device's volume control too. Or just use the SPL meter!
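To make the arithmetic behind this procedure concrete, here's a quick Python sketch. The idea is that the software's dB readings (referenced to digital full scale, dBFS) differ from the SPL meter's readings (referenced to the threshold of hearing) by a fixed offset, which the pink-noise step measures. The specific numbers (94 dB SPL, -26 dBFS) are made up for illustration:

```python
import numpy as np

def dbfs(samples):
    """RMS level of a block of samples, in dB relative to full scale (0 dBFS).
    Samples are assumed normalized to the range [-1, 1]."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(rms)

# Calibration step: play pink noise, note the SPL meter reading and the
# software's dBFS reading at the same moment (example values, not real data).
spl_meter_reading = 94.0   # dB SPL shown by the reference meter
software_reading = -26.0   # dBFS shown by the recording software
offset = spl_meter_reading - software_reading  # fixed calibration offset

def dbfs_to_spl(level_dbfs):
    """Convert any later software dBFS reading to dB SPL via the offset."""
    return level_dbfs + offset
```

Once `offset` is known (and as long as no gain in the chain is touched afterwards), every dBFS number the software shows maps directly to a dB SPL number.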
I'll add that higher sample rates aren't always better. If you intend to capture audio, then sample rates higher than needed for audio just waste disk space and bandwidth. Same for bit depth. I see people record using 32 bits, when the theoretical maximum dynamic range fits into about 20 bits. So I won't argue against 24 bits, but 32 is never needed for audio storage. (However, 32 bits is useful for mix summing and plug-ins etc.)
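The "about 20 bits" figure comes from the rule of thumb that each bit of a uniform quantizer buys roughly 6 dB of dynamic range. A minimal sketch of that arithmetic:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an n-bit uniform quantizer,
    i.e. 20*log10(2**n), about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

# 16 bits -> ~96 dB, 24 bits -> ~144 dB.
# The ~120 dB range of human hearing needs:
bits_needed = math.ceil(120 / (20 * math.log10(2)))  # 20 bits
```

So 24-bit capture already exceeds what the ear (and any real converter's noise floor) can use, which is the point being made above.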
Post by Michael Lawrence on Nov 5, 2018 20:45:10 GMT
I'll just add that microphones respond to pressure modulations, not intensity. (Acoustic) intensity is defined as the power carried by sound waves per unit area in a direction perpendicular to that area, measured in watts per square meter, and is something that really never comes up in day-to-day audio engineering, unlike SPL, which is referenced to pascals (newtons per square meter). These are related concepts but not the same thing. Software displaying SPL can technically be configured to use any mic as long as the mic's sensitivity is known. This will be included in the manufacturer's data for the mic. There's more than meets the eye to this issue, however. Free-field vs. random-incidence mics will behave very differently at high frequencies, so a single-number sensitivity rating isn't all we need to be accurate. There is a good explanation here:
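For the sensitivity route mentioned above, the conversion chain is voltage → pressure → dB SPL. A hedged sketch, assuming a sensitivity spec given in mV/Pa and the standard 20 µPa reference pressure:

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals, the nominal threshold of hearing

def voltage_to_spl(v_rms, sensitivity_mv_per_pa):
    """Convert a mic's RMS output voltage to dB SPL using its rated sensitivity.
    Ignores preamp gain, frequency response, and incidence effects."""
    pressure_pa = v_rms / (sensitivity_mv_per_pa * 1e-3)  # volts -> pascals
    return 20 * math.log10(pressure_pa / P_REF)

# Example: a mic rated 10 mV/Pa putting out 10 mV RMS is seeing 1 Pa,
# which is about 94 dB SPL (the usual calibrator level).
```

Note any preamp gain between mic and converter has to be accounted for separately, which is exactly why the single-number sensitivity isn't the whole story.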
Thank you! All clear. Just need to get an SPL meter with reasonable resolution, I guess, and calibrate.
In dB, the dynamic range of human hearing is about 120 dB: the difference between the softest sound we can perceive and the loudest. When you say that the maximum dynamic range fits into 20 bits, I think you mean that the smallest and the largest pressure values can be well represented using a 20-bit binary number. I thought that the 20 bits would be the bit depth, which is essentially the possible "resolution". For example, given the pressures p_min = 0 and p_max = 5, a higher bit depth can represent smaller pressure variations within that range. So the bit depth does not deal with the difference between the smallest and largest pressure, but with the smallest signal variation that can be represented.
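These two views (resolution vs. range) are actually the same thing for a uniform quantizer: fixing the range, more bits means a smaller step; fixing the step, more bits means a wider range. A small sketch using the 0-to-5 example from the post:

```python
def quant_step(p_min, p_max, bits):
    """Smallest pressure difference a uniform n-bit quantizer can represent
    over the fixed range [p_min, p_max]."""
    return (p_max - p_min) / (2 ** bits - 1)

# For the range 0..5 in the post:
#  8 bits -> step of 5/255   (about 0.0196)
# 16 bits -> step of 5/65535 (about 0.0000763)
```

The ratio of the full range to that step is the dynamic range, which is why "fits into 20 bits" and "20-bit resolution" describe the same budget.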
In general, a microphone converts the pressure wave p(t) into a voltage wave V(t). For fidelity (assuming no distortion and noise), the shape of the voltage signal V(t) output from the microphone should be exactly the same as the shape of the signal p(t), except for a scaling factor. That means there must be a linear relationship between the pressure and the voltage. Sound systems always have amplifiers to increase the amplitude of the voltage signal V(t), but they must maintain linearity, otherwise the voltage signal will look different and not faithfully represent the pressure signal. Is that correct? The human ear works differently (it is said to work nonlinearly, logarithmically), so pressure changes don't produce linear changes in the perceived loudness... but that is different...
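That's correct, and it's easy to illustrate numerically: a pure gain V(t) = k * p(t) leaves the normalized waveform unchanged, while a nonlinear stage (here, clipping, chosen just as an example of nonlinearity) changes the shape. A minimal sketch:

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
p = 0.5 * np.sin(2 * np.pi * 5 * t)       # pressure signal (arbitrary units)

v_linear = 3.0 * p                         # linear amplifier: pure scaling
v_clipped = np.clip(3.0 * p, -1.0, 1.0)    # nonlinear stage: peaks are clipped

# The shape is preserved iff the peak-normalized signals are identical.
same_shape = np.allclose(v_linear / np.max(np.abs(v_linear)),
                         p / np.max(np.abs(p)))
distorted = not np.allclose(v_clipped / np.max(np.abs(v_clipped)),
                            p / np.max(np.abs(p)))
```

Here `same_shape` is true and `distorted` is true: the linear gain is just a scale factor, while clipping adds content that wasn't in p(t).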