[RWP] Latency, is shorter always better?
Chris Belle
cb1963 at sbcglobal.net
Mon Jul 21 23:39:21 EDT 2014
Very interesting and creative way to look at an issue which is usually a
nuisance and a bother when we are recording, Jim.
But sometimes we must use daw based monitoring and not direct monitoring.
You can see how phasing affects your tracks too by time-delaying two
identical tracks relative to one another,
and seeing which frequencies get filtered.
Very cool.
This is why I'm such an old stick in the mud about using analog paths
when I can to move audio: you get zero latency, or delay so low it's
practically zero, and phasing isn't an issue.
On 7/21/2014 9:33 PM, Jim Snowbarger wrote:
> Now and then, I feel like a slight departure from topic. And, this
> is one of them. So, stand by with your delete key ready as I carry on.
>
> This probably belongs over on MidiMag. But, I don't feel like joining
> just so I can post this once in a blue moon.
>
> One of the great things that digital audio processing has brought to
> us is so-called latency. You might just call it delay. But, in the
> 21st century, we like to use clever names. It makes us feel smarter.
> So, let's co-opt the term latency, which had a totally different
> meaning before the techno-gods got hold of it. And, let's now define
> latency as the act of being late. But, however you slice it, it comes
> down to delay.
>
> Digital devices impose delay mostly because data consumers, like sound
> cards, or recording devices, have learned to be defensive, knowing
> full well that data providers, such as input sound cards, or
> other streaming devices, can not be counted on to keep up a steady
> stream of data. Internet congestion, or scheduling congestion inside
> your own machine, can temporarily block the normal flow of things.
> Sound playback requires a rock-solid consumption rate of the data. The
> samples need to keep flowing. You might not get that next buffer load
> of data in time. So, it pays to keep a backlog. The more backlog, the
> safer you are. But, if the backlog is too great, you get latency,
> that annoying delay.
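>
> Just to put numbers on that backlog, here is a back-of-the-envelope
> sketch in Python. The buffer sizes and sample rate are example
> figures of mine, not anything measured:
>
>     # Latency contributed by one buffer of audio: frames / sample rate.
>     def buffer_latency_ms(frames, sample_rate_hz):
>         return 1000.0 * frames / sample_rate_hz
>
>     print(buffer_latency_ms(256, 48000))   # ~5.3 ms at 48 kHz
>     print(buffer_latency_ms(1024, 48000))  # ~21.3 ms: safer, but later
>
> The bigger the buffer, the more protection against dropouts, and the
> more of that delay.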
>
> I recently picked up one of those fine computers Chris is always
> talking about from StudioCat.com. That is one very fine box. And, now
> that I also own Chris's Delta 1010, I was enjoying fine-tuning my
> latency down to acceptable levels, not carefully measured, but
> clearly less than 10 milliseconds.
>
> Most of the recording work I do involves a microphone and headphones.
> I am quite typically listening to my own voice as I speak. If you
> have listened to the Snowman Radio Broadcasts, you know the kind of
> multi-track microphone work I'm guilty of.
> When living on machines where such short delays were not possible, my
> habit was to listen to my own voice direct out of the mixer, and not
> going through Reaper. So, I kept the Reaper monitor off. What was
> annoying about that is that, if I panned my various character voices
> in the stereo mix, then, my direct microphone sound would not be
> panned the same as the character voice track I was recording into.
> So, when it played back, it came from elsewhere, and was more than a
> little bit confusing.
>
> But, with delay this short, I find that I switch off the direct sound,
> and now can monitor the signal coming back from Reaper with the
> monitor turned on. So, I'm now listening to a delayed version of my
> voice, and it is panned to the same place where that character voice
> sits, which helps me keep track of who I am supposed to be right now.
> And, I can more easily tell now whether a track is armed, and even if
> one is armed that should not be. It's nice to be able to work like
> that, just listening to Reaper's output.
>
> But, here is the cool thing. The exact amount of latency you provide
> affects the quality of what you hear in your headphones.
>
> No matter how good your phones, the sound that you hear when you are
> listening to yourself speaking live into a microphone, is actually the
> composite of at least two signal paths, and maybe more. Yes, there is
> the direct signal coming through Reaper. Then, there is bone
> conductivity, the sound of your own voice coming through the structure
> of your head, which will vary somewhat with density. If you don't get
> any of that, you might wonder about that density stuff.
> And maybe there is even leakage around the ear muffs. In all, it is
> a complex sound that actually reaches your ears. And, the phase
> relationship between all of the various contributors will affect the
> frequency response of the final signal that you hear.
>
> In the old days, we knew about the effect that phase would have on
> such things. Having your headphones out of phase with your
> microphone left you feeling empty headed, due to the phase
> cancellation that took place.
> But, since delay was in the nanoseconds, we didn't get to know so much
> about the effect that delay would have, despite our compulsive
> preoccupation with tape delay.
>
> Phase is mostly a frequency-independent phenomenon. Yes, we know that
> some systems, especially mechanical transducers, or even cheap
> equalizers, which have a reactive component to their impedance,
> introduce a variable amount of phase shift, depending on frequency.
> But, usually those effects are at the far ends of their usable range.
> In general, especially in mixer land, where things are nice and
> linear, and where impedances are strictly non-reactive, if you put
> something 180 degrees out of phase, you will get perfect cancellation,
> all across the frequency band.
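>
> You can check that with a few lines of Python. This is a sketch of
> mine, not anything out of a real mixer: flipping polarity is just
> negating the samples, and the sum nulls at every frequency you try:
>
>     import numpy as np
>
>     sample_rate = 48000
>     t = np.arange(sample_rate) / sample_rate  # one second of samples
>     for freq in (125.0, 250.0, 1000.0):
>         x = np.sin(2 * np.pi * freq * t)
>         # 180 degrees out of phase is a polarity flip: -x.
>         print(freq, np.max(np.abs(x + (-x))))  # 0.0, every time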
>
> Enter the digital age, and the new innovation, latency.
> The relationship between signal phase and delay depends on frequency.
> For example, a delay of 4 milliseconds is one full cycle of a 250
> Hertz tone. But, it is only half a cycle of a 125 Hertz tone. It is
> all still a 4 millisecond delay. But, the phase impact depends on the
> frequency. Combining each of these two tones with a copy of itself
> delayed by 4 milliseconds will have completely different effects. The
> 125 Hertz tone would be nulled out. The 250 Hertz tone would actually
> see a 6 dB increase.
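>
> Here is that arithmetic as a small Python sketch, assuming the direct
> and delayed copies arrive at equal level (a simplification on my part):
>
>     import math
>
>     def combined_gain_db(freq_hz, delay_s):
>         # A tone plus a copy delayed by delay_s has amplitude
>         # |1 + e^(-j*2*pi*f*delay)| = 2 * |cos(pi * f * delay)|.
>         gain = 2.0 * abs(math.cos(math.pi * freq_hz * delay_s))
>         return 20.0 * math.log10(gain) if gain > 0 else float("-inf")
>
>     print(combined_gain_db(125.0, 0.004))  # deeply negative: the null
>     print(combined_gain_db(250.0, 0.004))  # about +6.02 dB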
>
>
> The result is that, if you put a delay in front of your headphone mix,
> you will cause what is referred to as a comb filter effect on the
> perceived headphone signal. It is a filter whose frequency
> response curve looks like a roller coaster, with hills and
> valleys. Suppose you were listening to an audio tone sweep, one that
> you would actually need to sing, in this case, in order to get that
> bone conductivity thing happening as well. As you move steadily up in
> frequency, the sound would be much stronger at some frequencies, and
> much weaker at others. As the tone rises, you would hear rising and
> falling of the net response. And, changing the amount of delay
> slides that comb up and down the audio spectrum.
> Depending on several things, the frequency range of your voice, the
> response of your headphones, your ears, the density of your grey
> matter, and on and on, you might have preferences
> about the optimal position of that comb. What frequencies do you like
> to accentuate? And which to attenuate?
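>
> To see how the comb slides, here is one more Python sketch. Again I
> am assuming equal levels on the two paths, which bone conduction and
> ear muff leakage certainly won't honor exactly:
>
>     # Notch frequencies for a delay: odd multiples of 1 / (2 * delay).
>     def notch_freqs_hz(delay_s, max_hz=2000.0):
>         notches, f = [], 1.0 / (2.0 * delay_s)
>         while f < max_hz:
>             notches.append(f)
>             f += 1.0 / delay_s
>         return notches
>
>     print(notch_freqs_hz(0.004))  # 4 ms: 125, 375, 625, ... Hz
>     print(notch_freqs_hz(0.005))  # 5 ms: 100, 300, 500, ... Hz
>
> Relax the latency a little, and every notch and peak moves.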
>
> The cool thing is that, by fine-tuning your headphone latency, you can
> position that comb how you like, and can optimize your headphone
> experience. The latency needs to be short enough to not give you a
> delay echo effect. But, beyond that, the shortest possible latency may
> not give you the headphone experience you like. Instead, relax it a
> little, and see what enriching tones come your way.
> Silly you. And you always thought shorter was better. And now you know.
> TROTS.
>
> _______________________________________________
> RWP mailing list
> RWP at reaaccess.com
> http://reaaccess.com/mailman/listinfo/rwp_reaaccess.com
>