[RWP] Latency, is shorter always better?

Chris Belle cb1963 at sbcglobal.net
Tue Jul 22 01:46:29 EDT 2014


One more thought  about this latency thing.

And us old farts who used to play with tape decks will remember this.

How about those 3-head decks where you could listen to playback while 
recording?

You could hear the input while it was going down and listen to your 
playback head, and some of those heads were movable.

So you could change the latency between when something got recorded and 
played back by moving the head closer or further away from the recording 
head.

That was on the commercial machines.

I never had one of those, but I did have a very nice 3-head cassette deck.

This is, in fact, somewhat similar to what happens in your daw, even 
if you're not recording but just listening to the signal coming back 
from your daw once it goes through the internal processing and any 
plug-ins you might have.
They add their own latency, you can bet, and most modern daws 
compensate for that under the hood.

Sonar had automatic plug-in delay compensation 
way before many daws, including Pro Tools, ever had it.

Yes, go back and listen to the first episodes of the Home Recording 
Show on Pro Tools 9 and 10, and you can hear them talking about lining 
up tracks manually after the fact to make the audio come out right 
after going through all the processing plugs.

Boy howdy, now, isn't that a real pain in the posterior?

Intelligently keeping track of where in the timeline a recording 
starts, how to play it back precisely to account for plug-in 
latencies, and then how to play it properly again when you take the 
plugs off is not an easy task, but your daw does all of that for you 
under the hood, if it's worth a squat.

People sometimes get in real trouble even with this automatic 
stuff going on by not routing their monitoring right, 
because there are certain ways of routing and recording that make it 
impossible for your daw to implement delay compensation properly.

So this is why I tend to like not adding plugs until after I've laid 
my audio.

That's not always possible; you can't lay that heavy rock guitar 
track easily hearing only plink, plunk, twang.
But you can believe your daw is doing the latency shuffle dance when 
you have many tracks playing and you are laying guitar through amp 
sims, which have latency going both ways, because remember, you are 
going audio in and audio back out.
The same goes for mastering plugs that cause a lot of latency, 
especially multi-band compressors with look-ahead. Back in my early, 
early days of learning this stuff it used to drive me nuts: why are my 
midi tracks being delayed so much when I press a note, but they play 
just fine on playback?

Well, it's that delay compensation working for you.
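
To put rough numbers on why live playing feels so laggy under heavy 
plugs, here is a back-of-the-envelope sketch in Python; the buffer 
size and look-ahead figure are made-up example values, not 
measurements from any particular rig.

sample_rate = 44100     # Hz
output_buffer = 256     # interface playback buffer, in samples (example value)
lookahead_ms = 40       # look-ahead of a mastering compressor on the master bus (example)

# Everything sitting between the key press and your ears adds up:
felt_delay_ms = output_buffer / sample_rate * 1000 + lookahead_ms
print(round(felt_delay_ms, 1))   # about 45.8 ms of felt delay while playing live

On playback the daw knows all of that ahead of time and shifts every 
track by the same amount, which is why the recording still lines up.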

Imagine having to figure out how much delay you had and fix all of 
that manually.

You can get interesting things happening when using reverb in 
projects by turning off delay compensation: you get a built-in 
pre-delay, which is a setting on high-quality reverb units. The 
reverb doesn't start right away, which helps make room in the mix 
when you don't want the verb in the way; it kicks in after the 
initial attack of your audio.

Or do we remember real-world latency, and the days when destructive 
editing was the only kind you did? If you wanted to process an 
equalizer or a chorus effect, you hit the button, went and had a 
sandwich while your 486 processed that track, and came back 10 
minutes later to maybe have a wet and a dry track.
And you could do interesting things with that by time-delaying the 
wet track 'grin'.

When I do drum replacement by generating midi tracks from the 
transient points of an audio drum track and then feeding them to an 
audio bus with samples, I have to time-align the new track to match 
the old one.
At least, in the old days we had to do more of that, before delay 
compensation was automatic in most daws.
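
As a rough illustration of that alignment step, here is a minimal 
Python sketch using only numpy; the threshold, minimum gap, and 
maximum shift are made-up values, and a real daw or drum-replacement 
tool does all of this far more robustly.

import numpy as np

def detect_transients(drums, sample_rate, threshold=0.3, min_gap_s=0.05):
    # Crude onset detection: report the samples where the level first jumps
    # above the threshold, ignoring re-triggers closer together than min_gap_s.
    env = np.abs(drums)
    hits, last = [], -10**9
    for n in range(1, len(env)):
        if env[n] >= threshold and env[n - 1] < threshold \
                and (n - last) / sample_rate >= min_gap_s:
            hits.append(n)
            last = n
    return hits   # sample positions you would turn into midi note-ons

def best_lag(original, replaced, max_shift):
    # Slide the replaced track against the original and keep the lag with the
    # strongest correlation.  A positive lag means the replaced track is late
    # and should be nudged earlier by that many samples.
    best, best_score = 0, -np.inf
    for lag in range(-max_shift, max_shift + 1):
        if lag >= 0:
            a, b = original[:len(original) - lag], replaced[lag:]
        else:
            a, b = original[-lag:], replaced[:len(replaced) + lag]
        n = min(len(a), len(b))
        score = float(np.dot(a[:n], b[:n]))
        if score > best_score:
            best, best_score = lag, score
    return best

Shift the sampled track back by that many samples and the new hits 
land on top of the old ones again.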

Still, most daws will only do this within a certain range; see above, 
where I mention mastering plugs.
Linear-phase equalizers are also notorious for introducing way too 
much delay to use them in real time.

So are transient processors and shapers.
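
For a sense of scale, a linear-phase FIR filter delays the signal by 
about half its length; here is a quick bit of Python arithmetic, with 
a tap count that is only an example, not a figure from any particular 
plug-in.

sample_rate = 44100      # Hz
taps = 4095              # example length for a steep linear-phase EQ

# A linear-phase FIR filter delays everything by (taps - 1) / 2 samples.
delay_ms = (taps - 1) / 2 / sample_rate * 1000
print(round(delay_ms, 1))   # about 46.4 ms, far too much to play through live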

Maybe when we get processors running at 30 gigahertz we'll be able 
to do that stuff in real time. And did I hear that silly people want 
to make a daw out of an iPad?

Right now in 2014, an iPad will just barely run a guitar sim with 
low enough latency to be playable.

Well, what do you expect from a little baby toy computer?


On 7/21/2014 9:33 PM, Jim Snowbarger wrote:
> Now and then, I feel like a slight departure from topic.  And, this 
> is one of them.  So, stand by with your delete key ready as I carry on.
>
> This probably belongs over on MidiMag.  But, I don't feel like joining 
> just so I can post this once in a blue moon.
>
> One of the great things that digital audio processing has brought to 
> us is so-called latency.  You might just call it delay.  But, in the 
> 21st century, we like to use clever names.  It makes us feel smarter.  
> So, let's co-opt the term latency, which had a totally different 
> meaning before the techno-gods got hold of it. And, let's now define 
> latency as the act of being late.  But, however you slice it, it comes 
> down to delay.
>
> Digital devices impose delay mostly because data consumers, like sound 
> cards, or recording devices, have learned to be defensive, knowing 
> full well that data providers, such as input sound cards, or 
> other streaming devices, cannot be counted on to keep up a steady 
> stream of data.  Internet congestion, or scheduling congestion inside 
> your own machine, can temporarily block the normal flow of things.  
> Sound playback requires a rock-solid consumption rate of the data. The 
> samples need to keep flowing. You might not get that next buffer load 
> of data in time. So, it pays to keep a backlog.  The more backlog, the 
> safer you are.  But, if the backlog is too great, you get latency, 
> that annoying delay.
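>
> To put a number on that backlog, here is a quick bit of Python 
> arithmetic; the buffer size and sample rate are only example values:
>
> sample_rate = 44100       # Hz
> buffer_frames = 256       # one buffer's worth of samples (example value)
> buffers_held = 2          # say, one buffer on the input side and one on the output
>
> backlog_ms = buffers_held * buffer_frames / sample_rate * 1000
> print(round(backlog_ms, 1))   # about 11.6 ms of latency from the backlog alone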
>
> I recently picked up one of those fine computers Chris is always 
> talking about from StudioCat.com.  That is one very fine box. And, now 
> that I also own Chris's Delta 1010, I was enjoying fine-tuning my 
> latency down to acceptable levels,  not carefully measured, but 
> clearly less than 10 milliseconds.
>
> Most of the recording work I do involves a microphone and headphones.  
> I am quite typically listening to my own voice as I speak.  If you 
> have listened to the Snowman Radio Broadcasts, you know the kind of 
> multi-track microphone work I'm guilty of.
> When living on machines where such short delays were not possible, my 
> habit was to listen to my own voice direct out of the mixer, and not 
> going through Reaper.  So, I kept the Reaper monitor off. What was 
> annoying about that is that, if I panned my various character voices 
> in the stereo  mix, then, my direct microphone sound would not be 
> panned the same as the character voice track I was recording into.  
> So, when it played back, it came from elsewhere, and was more than a 
> little bit confusing.
>
> But, with delay this short, I find that I switch off the direct sound, 
> and now can monitor the signal coming back from Reaper with the 
> monitor turned on.  So, I'm now listening to a delayed version of my 
> voice, and it is panned to the same place where that character voice 
> sits, which helps me keep track of who I am supposed to be right now.  
> And, I can more easily tell now whether a track is armed, and even if 
> one is armed that should not be. It's nice to be able to work like 
> that, just listening to Reaper's output.
>
> But, here is the cool thing.  The exact amount of latency you provide 
> affects the quality of what you hear in your headphones.
>
> No matter how good your phones, the sound that you hear when you are 
> listening to yourself speaking live into a microphone is actually the 
> composite of at least two signal paths, and maybe more.  Yes, there is 
> the direct signal coming through Reaper. Then, there is bone 
> conductivity, the sound of your own voice coming through the structure 
> of your head, which will vary somewhat with density.  If you don't get 
> any of that, you might wonder about that density stuff.
> And maybe even, there is leakage around the ear muffs.  In all, it is 
> a complex sound that actually reaches your ears.  And, the phase 
> relationship between all of the various contributors will affect the 
> frequency response of the final signal that you hear.
>
> In the old days, we knew about the effect that phase would have on 
> such things.  Having your headphones out of phase with your 
> microphone left you feeling empty-headed, due to the phase 
> cancellation that took place.
> But, since delay was in the nanoseconds, we didn't get to know so much 
> about the effect that delay would have, despite our compulsive 
> preoccupation with tape delay.
>
> Phase is mostly a frequency-independent phenomenon.  Yes, we know that 
> some systems, especially mechanical transducers, or even cheap 
> equalizers, which will have a reactive component to their impedance,  
> introduce a variable amount of phase shift, depending on frequency.  
> But, usually those effects are at the far ends of their usable range.
> In general, especially in mixer land, where things are nice and 
> linear, and where impedances are strictly non-reactive, if you put 
> something 180 degrees out of phase, you will get perfect cancellation, 
> all across the frequency band.
>
> Enter the digital age, and the new innovation, latency.
> The link between signal phase and a delay is frequency. For 
> example, a delay of 4 milliseconds is one full cycle of a 250 hertz 
> tone, but it is only half a cycle of a 125 hertz tone.  It is all 
> still a 4 millisecond delay, yet the phase impact depends on the 
> frequency. Combining the un-delayed and delayed versions of these two 
> tones with that 4 ms delay will have completely different effects: 
> the 125 hertz tone would be nulled out, while the 250 hertz tone 
> would actually see a 6 dB increase.
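>
> A quick Python check of that arithmetic, using the same 4 ms delay 
> and the two tones from the example (the tiny floor value just keeps 
> the logarithm finite):
>
> import numpy as np
>
> def summed_gain_db(freq_hz, delay_s):
>     # Level change when a tone is mixed with a copy of itself delayed by delay_s.
>     gain = abs(1 + np.exp(-2j * np.pi * freq_hz * delay_s))
>     return 20 * np.log10(max(gain, 1e-12))
>
> delay = 0.004                                # the 4 millisecond delay
> print(round(summed_gain_db(250, delay), 1))  # 6.0 dB: a whole cycle late, the copies reinforce
> print(round(summed_gain_db(125, delay), 1))  # -240.0 dB: half a cycle late, a complete null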
>
>
> The result is that, if you put a delay in front of your headphone mix, 
> you will cause what is referred to as a comb filter effect on the 
> perceived headphone signal.  It is a filter that has a frequency 
> response curve that looks like a roller coaster, with hills and 
> valleys.  If you were listening to an audio tone sweep (one that you 
> would actually need to sing, in this case, in order to get that bone 
> conductivity thing happening as well), then as you move steadily up 
> in frequency, the sound would be much stronger at some frequencies 
> and much weaker at others.  As the tone rises, you would hear the net 
> response rise and fall.  And, changing the amount of delay 
> slides that comb up and down the audio spectrum.
> Depending on several things (the frequency range of your voice, the 
> response of your headphones, your ears, the density of your grey 
> matter, and on and on), you might have preferences about the optimal 
> position of that comb.  What frequencies do you like to accentuate, 
> and which to attenuate?
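>
> And a rough sketch of how the notches move when the delay changes; 
> the two delays below are only example values:
>
> def notch_freqs(delay_s, count=4):
>     # The comb's notches sit where the delay is an odd number of half-cycles:
>     # f = (2k + 1) / (2 * delay), so a shorter delay pushes the whole comb upward.
>     return [round((2 * k + 1) / (2 * delay_s), 1) for k in range(count)]
>
> print(notch_freqs(0.010))   # 10 ms delay -> notches at 50.0, 150.0, 250.0, 350.0 Hz
> print(notch_freqs(0.004))   #  4 ms delay -> notches at 125.0, 375.0, 625.0, 875.0 Hz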
>
> The cool thing is that, by fine-tuning your headphone latency, you can 
> position that comb how you like, and can optimize your headphone 
> experience. The latency needs to be short enough to not give you a 
> delay echo effect. But, beyond that, the shortest possible latency may 
> not give you the headphone experience you like.  Instead, relax it a 
> little, and see what enriching tones come your way.
> Silly you.  And you always thought shorter was better.  And now you know.
> TROTS.
>
> _______________________________________________
> RWP mailing list
> RWP at reaaccess.com
> http://reaaccess.com/mailman/listinfo/rwp_reaaccess.com
>




