|
Post by skandinav on Jul 21, 2023 23:33:36 GMT
Here I will post my points of disagreement. This topic will be quite philosophical. I have been banned from all the mainstream forums (gearspace, kvraudio, audiosex) for expressing some radical ideas, because they were a major kick at the corporations and their habit of lying. I will number my statements so that those who disagree can refer to the numbers instead of quoting. In a way this is another attempt to question some scientific "facts" and to show their essential relativity, not absoluteness. Just as the black holes promised from the much-hyped Hadron Collider, already forgotten, never materialized, some scientific facts can be confidently questioned. Some info about myself: through self-education, and by overcoming forum trolls' attempts to steer me the wrong way, I have worked out my own vision of sound which disagrees with what is promoted to the masses, and especially with the American and British schools of audio engineering. Since the current observable trend is aimed at freezing progress, I hope these ideas will do the opposite. 1. "Songs should be mixed to meet the Fletcher-Munson curve." On the contrary, songs should be mixed to sound good at low to moderate levels. Why should people, especially young ones, ruin their hearing with loud levels? Are there any reasons beyond malicious intent? Low and high frequencies should be powerful enough at low and moderate levels. In that case they will correspond to the sound levels of an acoustic, non-amplified performance. Orienting against that curve will change your entire way of mixing, making it more human-friendly.
|
|
|
Post by skandinav on Jul 21, 2023 23:56:11 GMT
2. "Lowering mids instead of boosting highs gives better quality and does the same job." Imo, there is no difference. If you lower the mids, the track's volume drops too, so you will have to raise its level to get back to the initial volume, and thus you perform the same boost of the highs that you could have done directly (with a consequent lowering of the track's level to match the initial volume). Also, if you lower the mids instead of boosting the highs, the level of the highs remains the same relative to the other instruments, while what you usually want is to make this instrument more easily perceived among the other instruments. Thus, the best way is to combine lowering the mids with boosting the highs. Besides, loss of highs occurs in all mics, including condensers, so you have to return their energy, and boosting them seems the only way to do it. Also, if you lower the mids, noise in the high frequencies becomes less masked, which, noise-wise, is the same as boosting the highs. The only method to boost highs noiselessly is to use a tool like Arboretum Ionizer from 1997, where you do not boost anything below -60 dB, since the noise lies in that region. Also, I suspect that today's Chinese mics do not capture high frequencies at all, substituting noise for them instead. My opinion is based on video comparisons of the Rode NT1-A against cheap Chinese mics, with the Rode sounding much more airy. I suspect that old sphere mics like the M50 really did capture high harmonics, and perhaps all Neumann condensers still do. It is hard, if not impossible, to boost highs from Chinese mics because they seem to be absent there. I suspect that even an old dynamic Shure SM57 can give you more highs than Chinese condensers, but this is just a supposition based on listening to recordings from the 80s and 90s, when non-Chinese mics were generally used.
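The "cut plus makeup gain versus boost plus trim" point is easy to check numerically. Below is a minimal Python/numpy sketch (an idealized broadband-gain model, not any particular EQ plugin): cutting the mid band by 6 dB and adding 6 dB of makeup gain leaves exactly the same high-to-mid balance as boosting the high band by 6 dB.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
mid = np.sin(2 * np.pi * 500 * t)    # "mid" content at 500 Hz
high = np.sin(2 * np.pi * 5000 * t)  # "high" content at 5 kHz

# Path A: cut the mids by 6 dB, then add 6 dB of makeup gain.
cut = 10 ** (-6 / 20)
a = (cut * mid + high) / cut

# Path B: boost the highs by 6 dB (level trim left for later).
boost = 10 ** (6 / 20)
b = mid + boost * high

def band_ratio(y):
    """High-to-mid amplitude ratio; 1 Hz bins for a 1 s signal."""
    spec = np.abs(np.fft.rfft(y))
    return spec[5000] / spec[500]

print(band_ratio(a), band_ratio(b))  # the two ratios match
```

Where the two moves genuinely differ in practice is headroom and interaction with any later nonlinear processing, which is where the usual "cut rather than boost" advice comes from.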
|
|
|
Post by skandinav on Jul 22, 2023 0:43:35 GMT
3. "All DAWs are adequate for pro studio sound." I have tested 80% of DAWs, and it's not like that. Their resamplers and summing engines are different, as shown on infinitewave.ca. As for Reaper, for example, it has always seemed to me that the rendered song sounded worse than the playback of the tracks. The main difference between pro and mediocre sound is a certain hardness which most DAWs lack. What they produce sounds soft, like a photograph taken through an antialiasing filter versus a crisp image from Phase One digital backs, and the lost details do matter, whatever a null test would show. For that reason, individual mixers in Russia in the 00s mostly used Emagic Logic for summing, though separate tracks might be finished in Cubase or other software. With Emagic Logic 2002 the summed song sounds exactly the same as on playback, and the sound is not softened. Others use Digital Performer and claim it is as good as old Logic. Once I mixed the same clips in Reaper and another DAW (SAW Studio, from memory, or maybe Logic), then compared the waveforms: transients in Reaper were noticeably higher in some places. SAW Studio uses integer math, while Reaper by default uses floating point (though I don't remember which settings were set in Reaper then). Digital Performer's developers claim their product preserves phase coherence better than other DAWs. It may be so. Plugins usually process within 150-300 microseconds, and this time is reported as zero latency, though true zero latency is never possible. Those microseconds are possibly not compensated, which is why one guy, the inventor of Brauerizing or multibus compression, used hardware for mixing. So if the same clip is on a track and on a bus with some plugin on the bus, you probably get comb filtering that correlometers on the master track cannot recognize, since the difference is not between left and right but is imprinted at the input to the master track by the summing engine. It's like summing mono with a time-shifted copy of the same mono. For that reason, imo, Logic and DP give a hard, crisp sound.
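Whatever one thinks of particular DAWs, the size of summing differences is measurable. Here is a hedged Python/numpy sketch (a toy model, not a reproduction of any DAW's engine) that sums the same sixteen "tracks" once sequentially in 32-bit floats and once in 64-bit, then measures the null-test residual between the two mixes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tracks, n_samples = 16, 48000
tracks = rng.uniform(-0.05, 0.05, (n_tracks, n_samples)).astype(np.float32)

mix32 = np.zeros(n_samples, dtype=np.float32)
for tr in tracks:                 # accumulate in 32-bit, track by track
    mix32 = mix32 + tr

mix64 = tracks.astype(np.float64).sum(axis=0)   # 64-bit reference sum

residual = mix32.astype(np.float64) - mix64
peak = np.max(np.abs(mix64))
residual_db = 20 * np.log10(np.max(np.abs(residual)) / peak)
print(f"null-test residual: {residual_db:.1f} dB below mix peak")
```

On this toy material the residual sits well over 90 dB below the mix peak, which is why audible engine differences, where they exist, are usually attributed to resampling, plugin behavior, and pan/gain laws rather than to the summing arithmetic itself.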
|
|
|
Post by skandinav on Jul 22, 2023 1:31:32 GMT
4. "Speakers should be placed in an equilateral triangle, or at 45 degrees along the shorter wall of the room, and the worst placement is in the corners of the room."
I would say you should place them along the longest wall, and exactly in the corners, with a consequently much wider angle. In this case the stereo image is wider and closer to how you hear in headphones. It also helps against excessive panning of instruments. Also, mono music will sound wide and closer to stereo, since the room adds phase differences between the channels. Also, the bass is boosted and the walls act like an amplifier for your music, making it more surround-like. The scientific literature says you need at least 4 speakers for genuine reproduction of a reverberant sound field, so this placement effectively creates a sound field around you. The walls become a continuation of the speakers, but you have to build a good calibration curve (via something like a room-EQ VST) to get a similar frequency response throughout the room. Such speaker placement, however, requires the song to be mixed with minimal phase difference between channels, relying more on volume difference, if your room cannot be corrected well by the calibration curve; otherwise you get displacement of instruments in the stereo field toward the sides, with an empty center. The further the correlation is from +1, and the more instruments are panned to the sides, the worse the localization of instruments will be in this case. All modern mixing is based on the 60-degree formula, which is questionable, since nowadays people listen more on headphones...
|
|
|
Post by skandinav on Jul 24, 2023 14:31:59 GMT
5. "People hear phase shift only through comb filtering (a frequency-response change), not the phase shift itself." Not only through comb filtering. If you delay one of the channels by 0.1 to 0.8 ms, the sound gradually shifts all the way to one ear, which could be called binaural Haas or micro-Haas, and you do not hear a change in frequency response. What changes is the localization of the instrument in space. Our hearing is based on that, since the skull's width is about 17 cm, which corresponds to about 0.5 ms. These micro-shifts are responsible for totally bastardizing the positions of instruments in space, and if you take into account that VSTs process sound within 0.1-0.4 ms, and PDC fails to compensate for this, or the channels are somehow processed within different times, or you used less than 100% wet processing in the DAW's UI, then you get binaural Haas. All of the above means that people do hear phase shift, and must hear it. Phase shifters do the same work as ReaDelay with a 0.1-1.2 ms delay.
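The comb-filtering side of this is easy to verify numerically. A short Python/numpy sketch (illustrative, not a claim about any plugin): summing a signal with a copy delayed by 1 ms cancels 500 Hz (half a period) and doubles 1 kHz (a full period), since the first null of |1 + e^(-jwT)| falls at f = 1/(2T) for delay T.

```python
import numpy as np

sr = 48000
delay = int(0.001 * sr)              # 1 ms = 48 samples at 48 kHz
t = np.arange(sr) / sr

def comb(x, d):
    """Sum a signal with a copy of itself delayed by d samples."""
    y = np.zeros_like(x)
    y[d:] = x[:-d]
    return x + y

results = {}
for freq in (500, 1000):
    x = np.sin(2 * np.pi * freq * t)
    y = comb(x, delay)
    # skip the first d samples so the start-up transient is excluded
    results[freq] = np.sqrt(np.mean(y[delay:] ** 2))

print(results)  # 500 Hz nulls out; 1000 Hz doubles (rms ~1.414)
```

The separate claim, that a 0.1-0.8 ms interchannel delay shifts localization (the Haas/precedence effect) without an obvious timbre change, concerns binaural hearing rather than electrical summing; it only becomes comb filtering once the two channels are folded to mono.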
|
|
|
Post by skandinav on Aug 3, 2023 0:07:23 GMT
6. "You must always mix on monitors; headphones are out of the question." I would say the opposite, because to mix on speakers you need a well-treated room, which is costly and simply inconvenient to live in, plus on headphones you hear everything without crosstalk and see the stereo stage perfectly, artifacts aside. To implement this you must, of course, calibrate your headphones with a measurement mic. Calibrate with a cheap Chinese measurement mic, then calibrate the speakers for the room, and voila: headphone mixes sound exactly the same on speakers in an untreated room and translate well to any other speakers and rooms. If it sounds good on calibrated cans it will sound even better on any speakers, but not vice versa.
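The correction step in any such calibration is plain arithmetic: whatever curve the measurement shows, apply the inverse relative to a target. A minimal Python sketch with made-up band readings (the frequencies and dB values are hypothetical, not a real headphone measurement):

```python
# Illustrative only: invented band measurements, not a real headphone.
bands_hz = [60, 250, 1000, 4000, 12000]
measured_db = [-4.0, 1.5, 0.0, 3.0, -6.0]   # hypothetical mic readings
target_db = [0.0] * 5                       # flat target curve

# Correction = target minus measured, band by band.
correction_db = [t - m for t, m in zip(target_db, measured_db)]
for f, c in zip(bands_hz, correction_db):
    print(f"{f:>6} Hz: {c:+.1f} dB")
```

Real calibration software does the same thing at much finer resolution, typically with smoothing and a cap on the maximum boost so narrow dips are not over-corrected.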
|
|
|
Post by skandinav on Oct 7, 2023 13:18:44 GMT
The forum seems dead, as does the audio industry, which is no longer required for brainwashing. No possibility to earn anything from music....
|
|
|
Post by Hexspa on Oct 16, 2023 13:05:36 GMT
"The forum seems dead... No possibility to earn anything on music...." Not totally dead. Your reference level has no way of "meeting" the "Fletcher-Munson curve". The fact is that our hearing is not linear in its frequency-to-amplitude response. While most experienced mixers agree that the bulk of mixing is best done at moderate levels, you have to turn the audio up to hear the frequency extremes. Nobody is trying to "meet Fletcher-Munson" so much as there is a framework, and you have to tell beginners something. It's a complex issue that you can't expect a beginner to understand. Even for someone experienced, it takes concerted effort to appreciate the nuance of this topic. You might be referring to the axiom "better to cut than boost when EQing". Technically, and in isolation, there is no difference between creating a curve via negativa or via positiva, but within the context of a mix you have to consider headroom. Plus, to your point #1, louder sounds better, and boosting adds loudness, thus deceiving you. In many creative fields you're always battling noise to some extent. Digital has helped, but even then you have physical sources of noise. Saying that Chinese mics "do not capture high freqs at all substituting them by noise instead" is a gross generalization which can be easily disproven. You then say "cheap chinese mics", but that is still too vague. If you want to talk about the response of a particular mic, that will be much more fruitful. In the meantime, there are hundreds of mics on Audio Test Kitchen, complete with frequency response plots created in collaboration with Harman. You will not find a more comprehensive source anywhere, especially with audio examples. Depending on your rendering settings, you can get a different result from a printed file than what you hear live. 
Things like lossless vs. lossy encoding, sample rate and bit depth (in extreme cases), as well as resampling modes, can all affect the sound. However, if you expect to be taken seriously by anyone who can actually do anything, you need to speak in objective terms. All these subjective words you use mean nothing to anyone but you. "Hard, crisp, worse, soft", etc. are all sales terms used to bamboozle the foolish out of their money. You've come to one of the few places on the internet where that kind of discourse will not fly.
Why would you want mono to sound "wide"? The whole point of mono is that it is not wide. Some listeners do prefer more 'live' implementations of their listening room but not all. If you want a source for this kind of approach, check out noted expert Floyd Toole.
I'm neither a psychoacoustics nor a hearing expert, but how people hear is pretty well understood. Interaural Level Differences (ILD) and Interaural Time Differences (ITD) account for all of our frequency range in varying amounts, I believe. William Moylan's The Art of Recording goes into this in some depth. Mixers can use speakers or headphones; I use both, and use speakers as the final tiebreaker. Saying that "if it sounds good on calibrated cans it will sound even better on any speakers but not vice versa" is subjective language and probably misleading at best. Your claim that a "well treated room ... is costly and ... inconvenient for living" is another subjective and potentially misleading statement.
Whether you think you can or you can't - you're right!
|
|
|
Post by skandinav on Nov 16, 2023 22:53:05 GMT
I have read Ethan's entire book, so there is no need to repeat the statements from the book, especially about subjectivity. Null tests don't prove anything, for one important secret reason: say the Reaper DAW is secretly designed to spoil sound (which is true, imo!); you put a dual-mono file on a track, invert the phase on one channel, and voila, the sound disappears. Does it prove anything? Only that the channels were identical. But does it prove that Reaper has not made any adverse manipulations to the sound? No! Because the null is taken between the L and R channels, not between the original left channel and the same channel after import into Reaper. So that explains why DAWs produce very different timbres, due to different resamplers and plugin algorithms. And Reaper's are of very low quality. The majority of delay plugins do not work honestly, and you can detect it in only one way: if you sweep from 0.1 to 10 ms, the comb filtering should be severe and very audible. If you test ReaDelay you will find that the comb filtering is very weak and barely heard, with the same crap happening when you shift a clip's duplicate, i.e. some muddy, hazy sound instead of a crisp comb-filtered result. And 99% of delays do the same crap. I won't discuss the drawbacks of the other crappy ReaPack plugins. I just wanted to point out that unfortunately Ethan's book does not reveal all the secrets and manipulations of the corporations. I can only recommend using very old plugins. The secret malicious trend of the corps is to propose complex plugins, like multiband crap, etc., instead of giving people simple but honest tools. Re point #5: I recently tried to create a comb filter as a 5-band ReaDelay, with bands created by five 12-pole filters and five ReaDelays. All this worked in dual mono, and to my surprise the sound simply disintegrated even when the delays were set to a 5 ms difference, and the timbre changed even with 0.1 ms delays. You may try this test yourself. 
The sound remains mono, but it kind of loses focus, as if you recorded the reverb instead of the direct sound, and all this happens in mono! Unbelievable stuff! A correlometer won't show anything. So again, you can't detect the phase shift, but you hear it perfectly. The same crap happens with minimum-phase EQs... for that reason linear-phase EQs are better, even with the ringing.... And why are honest delays hard to find? Because in comb-filtering mode they can fix some crappy room comb filtering recorded in spaces smaller than 100 m², or just a crappy timbre. You can't fix it with EQ, as that would require thousands of cuts, so the corps make people buy more and more EQs in the hope of finding a magic wand. And no DAW or tool reveals its own spoiling caveats when it is itself used for the null tests. Perhaps old hardware oscilloscopes are the way to go. I just want to underline the fact that my own listening experiments revealed all the crap described above, and null tests helped in no way. I would switch to some old DAW from 1997-1998 if I were a beginner. At that time, imo, malicious algorithms had not yet been implemented. Yes, there were aliasing artifacts and 16-bit CPU-friendly processing, but those algorithms worked honestly, like good old hardware. And by the way, I am the first to have disclosed the truth about such a simple yet powerful tool as delay, and the corps' malicious evil deeds. So just repeat the tests yourself if anyone is hesitant. By the way, I agree with Ethan that Sonitus and Budde's plugins were not that bad, and Budde even created the first real vector-sampling software, which was promptly monopolized by the corp Acustica Audio, which now sells pseudo-vector software. Re #2: I don't see the point of debate. In some cases lowering the mids produces a better result, because it reduces the collision between the mids of different instruments, where raising the highs would not help: a simple truth not formulated anywhere, for some strange reason. 
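The delay-sweep test described above can be run outside any DAW. A hedged Python/numpy sketch of the idea (a plain digital delay summed with the dry signal; it says nothing about what any particular plugin does internally): for each delay T the first comb notch should appear at 1/(2T), and measuring that notch against the first reinforcement peak shows how deep textbook comb filtering actually is.

```python
import numpy as np

rng = np.random.default_rng(1)
sr = 48000
x = rng.standard_normal(sr)          # 1 s of white noise

def comb_depth_db(delay_ms):
    """Sum noise with a delayed copy; compare the predicted first
    notch frequency (sr / 2d) against the first reinforcement (sr / d)."""
    d = int(round(delay_ms * sr / 1000))
    y = x.copy()
    y[d:] += x[:-d]
    spec = np.abs(np.fft.rfft(y))
    notch_hz = sr / (2 * d)
    peak_hz = sr / d

    def level(f):
        k = int(round(f))            # 1 Hz bins for a 1 s signal
        # average a few bins around f to tame noise variance
        return np.mean(spec[max(k - 2, 0):k + 3])

    return 20 * np.log10(level(notch_hz) / level(peak_hz))

for ms in (0.5, 1, 2, 5, 10):
    depth = comb_depth_db(ms)
    print(f"{ms} ms delay: notch {depth:.1f} dB relative to peak")
```

If a plugin's sweep sounds much milder than this, likelier explanations are interpolation or crossfading between delay taps, or a non-unity wet/dry mix, rather than anything hidden.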
My ideas about cheap Chinese mics: there is a tin-can timbre to them, which speaks of some comb filtering. As Ethan has written, the resolution of their plots sucks, and indeed, how would you show thousands of peaks and nulls in the highs on a viewable plot, while the averaged plot can boast of raised, crisp highs? The same crap applies to Chinese plastic guitars and their strings: they may be resonant, solidly made and painted, but with some crappy, toy-like sound. It's like genetically modified food: you can eat it, but the results will be adverse. Aaron Russo's interview about the elite is quite informative about present-day products and intents: the corps just do not care, and do harm to people, and it's done in all fields, including audio.
|
|
|
Post by skandinav on Nov 16, 2023 23:17:06 GMT
Re #4: "Why would you want mono to sound 'wide'? The whole point of mono is that it is not wide." Ok, you are sitting in a concert hall, only one piano is playing, you hear it with both ears, and it sounds like stereo, right? So why would you want it to sound unnaturally narrow if you have only one speaker, like old vinyl players? You would definitely place the player somewhere to get a wider sound. The thing is that the room itself is a powerful stereorizer, but if you put the speakers in an equilateral triangle along the longest side, it will be much worse (no stereo scene at all) than putting them in the corners with some diffusion from carpets and bookshelves. Plus a benefit for listening to old mono records. Plus you can move more freely and shift your head from the centerline with fewer adverse effects; say you are listening together with a couple of friends. The central image can be stretched, though, turning the singer into a giant, but that can be fixed to some degree by damping and diffusion. Another benefit, as I have written before, is that your whole apartment has the same frequency response, and you enjoy music from any square meter, not just as part of that triangle. Cinema houses work this way, by the way... you should try getting the same effect, and I firmly suspect it's possible even with one speaker.
|
|
|
Post by Hexspa on Nov 20, 2023 14:12:37 GMT
I'm not going to address speculation about Cockos' hidden agendas. That's called conspiracy theorizing, and I'm not into that. Regarding how delays work, there are probably reasons they work the way they do. Obviously digital delay works differently from analog, because the processing has to be broken into buffer chunks; that's as much as I know about it. As for Reaper's resampling being "very low quality", I suggest you discuss it on the Reaper forum, where someone can challenge your ideas. Posting here about it is like complaining to a pastor about a politician: go to the arena where they fight the battles! Go to a DSP forum if you think you have a valid contribution. Instead you are tearing everyone down outside of their presence, and that is cowardly. As far as the null test is concerned, simply import a file into two separate DAWs and render it with identical settings. If they don't null, you'll have a leg to stand on. Until then it's all idle speculation, and I will no longer contribute to it.
|
|
|
Post by Hexspa on Nov 20, 2023 14:15:58 GMT
Sound in the real world isn't stereo; stereo is a mechanical reproduction via two channels. Mono is a function of the same reproduction system, so any correlation between a physical source and a stereo one is bound to be incomplete. For example, a piano or a vacuum cleaner has many sources of sound, each contributing to a different part of a composite whole. There are "sound cameras" which let you observe this visually. In contrast, a loudspeaker generally has a tweeter and a woofer, and they're close together. It's obvious that the two, a piano and a loudspeaker, can never produce exactly the same sound in a space. 
It's well-established that stereo has its limitations - therefore there's no disagreement with "official science".
|
|
|
Post by skandinav on Nov 24, 2023 1:51:08 GMT
I have to disagree with your statement that there is no stereo in the real world. I would say the opposite: there is no mono sound in real life. In fact, in real life sound is infinitely multichannel; it's always surround sound... but we have only two ears, so we always hear it as stereo unless you block one ear... So mono is unnatural; just listen to mono on headphones. In the real world you never hear the same sound with both ears, as you do with headphones, and unless you are in an anechoic chamber you will always hear stereo, even with a mono recording. Also, any resonating object resonates from multiple surfaces, with different frequencies, in different directions. So the source itself is not mono. But some people want to believe the world is flat, not round. It's their choice, but it's far from real science...
Ps. You just create a proper acoustic field in the room, to your liking. You can't turn your room into an anechoic chamber anyway; the room will always act as an EQ / comb filter. What I wanted to say is that the triangle formula is questionable if your purpose is enjoying music throughout the room, since you have to create a pleasant acoustic field over as many square meters as possible. Also keep in mind that different mixing people mix differently: some prefer wide mixes with broken phase, others do narrower mixes based on volume differences, for which you can get a better result with a wider angle between the speakers, plus the mono benefit mentioned. I've never had a chance to enjoy Martin Logan's flat speakers, but people have reported much more enjoyable sound. Just compare the radiating surface of a tweeter with panels of several square feet. The idea is that the high frequencies stay uniform across a larger area. The triangle can be good for mixing purposes, to stay sensitive to stereo panning, but in the real world we don't hear those ping-pongs and similar stuff, at least not so dot-focused. Better to mix with cans for that purpose; if you can do it well with cans, it will sound even better on speakers... As The Audio Expert book shows, each square inch of a room sounds different, so it's meaningless to use a triangle or similar setup which focuses the sweet spot on a tiny bit of the room. Better to make some compromises for some benefits. You can test what I say with the Oxford Reverb VST's four geometric room shapes, to hear the differences and make a proper decision.
|
|
|
Post by Hexspa on Nov 25, 2023 18:21:19 GMT
You agreed with me that there's no stereo in the real world. Sure, we have two ears but I was referring to sources.
|
|
|
Post by skandinav on Jan 29, 2024 15:53:02 GMT
After coming across some posts, I stick to the point that there are no mono sources in reality, because they have dimensions in space and various radiation patterns, plus any premises act as a stereorizer. In fact, mono is an unnatural, artificial thing. The problem is with mono compatibility. I have not yet met anyone who postulates a law for how to achieve that compatibility while preserving the differences between channels. Old studios built for mono were deader than those for stereo. In fact, reverb turns any mono into a mess, unless it's some kind of cunningly mangled reverb of a secret type. Lack of reverb turns stereo into crap, and vice versa for mono. By the way, if anyone has the New Stereo Soundbook in PDF, please share it with me. There are some tabooed books, like Photoshop Channel Chops; this is one of them, and I collect tabooed books in digital form. By the way, do you know any CPU-light additive synths with, say, ~16 harmonics? I have come across some Quilcom additive synths, but they are too CPU-heavy. Building additive synths out of modular ones is CPU-heavy; they choke already at 8 harmonics, which is kind of strange given modern CPUs' power.
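On the additive-synth question: sixteen partials is cheap if they are generated in one vectorized pass rather than as sixteen separate modular oscillator modules. A minimal Python/numpy sketch (a generic sum-of-sines with a simple 1/n amplitude rolloff, not modeled on any product):

```python
import numpy as np

def additive(f0, n_harm=16, dur=1.0, sr=48000):
    """Sum n_harm sine partials at integer multiples of f0,
    with a 1/n amplitude rolloff, in one vectorized pass."""
    t = np.arange(int(dur * sr)) / sr
    n = np.arange(1, n_harm + 1)[:, None]     # harmonic numbers, as a column
    amps = 1.0 / n
    partials = amps * np.sin(2 * np.pi * f0 * n * t[None, :])
    out = partials.sum(axis=0)
    return out / np.max(np.abs(out))          # normalize to +/-1

tone = additive(220.0)
print(tone.shape)
```

For real-time use the same structure maps onto computing one block per audio callback; 16 partials at 48 kHz is under a million sine evaluations per second, which is trivial for a modern CPU, so 8-harmonic choking points at per-module overhead rather than the math itself.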
|
|