[RWP] Time alignment of audio from multiple sources

Chris Belle cb1963 at sbcglobal.net
Thu Dec 18 06:19:17 EST 2014


Patrick, I did this once with two analog tape recorders; it was actually
our wedding.

It was recorded on a portable from out in the audience, and also off the 
board.

I got the bright idea of combining them, and of course you know what I 
ran into.

So I ended up doing a lot of hand editing and nudging during dead parts,
and I made it work, but it took a while.
I was using Sonar too; imagine the tedium.

I get these beats from rappers that I need to edit vocals and such over, so 
it's good to make a tempo map and get things lined up.

Even one thing that's supposed to be at a given tempo doesn't always 
match up. God knows where it was exported from, somebody's aging MPC or 
a Mac's internal sound card, so here I am going through and placing 
tempo markers every so often to define where the actual bar lines fall. 
That clock drift is just part of the way things work in the real world, 
and it's the same in analog and digital, both.

For different reasons, granted, but it's just part of what we have to deal with.
I hope there is a plug-in someplace
that aligns audio.
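
In the meantime, a scriptable stand-in isn't too hard. To be clear, this is 
just a rough sketch of the general idea, not a real plug-in: it 
cross-correlates a short chunk of each take to estimate how far one lags the 
other. The file names, chunk positions, and the whole routine are mine, not 
anything from this thread, and it works best on a passage with something 
sharp and percussive in it, as Jim points out further down.

import numpy as np
import soundfile as sf              # assumed available: pip install soundfile
from scipy.signal import correlate  # assumed available: pip install scipy

def measure_lag_seconds(ref_path, other_path, start_s, length_s=5.0):
    """Estimate how many seconds 'other' lags 'ref' around a given spot.

    Positive result: the other take arrives late; negative: it arrives early.
    Works best on a chunk containing a sharp percussive hit.
    """
    ref, sr = sf.read(ref_path, always_2d=True)
    oth, sr_other = sf.read(other_path, always_2d=True)
    assert sr == sr_other, "convert one file first if the nominal rates differ"

    lo, hi = int(start_s * sr), int((start_s + length_s) * sr)
    a = ref[lo:hi, 0]   # first channel is enough for alignment
    b = oth[lo:hi, 0]

    # The peak of the full cross-correlation tells us how far b is shifted
    # relative to a within this chunk.
    xc = correlate(a, b, mode="full")
    lag_samples = (len(b) - 1) - np.argmax(xc)
    return lag_samples / sr

# Hypothetical usage: check the lag near the start and near the end of the set.
print(measure_lag_seconds("board_mix.wav", "q3_ambient.wav", start_s=10))
print(measure_lag_seconds("board_mix.wav", "q3_ambient.wav", start_s=40 * 60))

Cross-correlating raw waveforms from very different mic positions can give a 
fuzzy peak, so sanity-check the number by ear before trusting it.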

Now isn't there something for Pro Tools called VocAlign which lines up 
comped vocals that are off from each other?

I might not be calling it right, but a student of mine was talking about 
it once.
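
And Jim's sample-rate trick further down the thread could be scripted too. 
Here's a minimal sketch, assuming you've already measured, by ear or with the 
snippet above, how far the drifting take is behind at two points. The file 
names are made up, and a dedicated resampler or your DAW's stretch tools may 
be kinder to long files than this FFT resample.

import soundfile as sf             # assumed available: pip install soundfile
from scipy.signal import resample  # assumed available: pip install scipy

def drift_ratio(lag_start_s, lag_end_s, span_s):
    """How much faster (>1) or slower (<1) the drifting clock runs.

    lag_start_s: how far the take is behind near the start
    lag_end_s:   how far it is behind span_s seconds later
    """
    return 1.0 + (lag_end_s - lag_start_s) / span_s

def correct_drift(in_path, out_path, ratio):
    """Resample the drifting take so it keeps pace with the reference."""
    audio, sr = sf.read(in_path, always_2d=True)
    target_len = int(round(audio.shape[0] / ratio))
    fixed = resample(audio, target_len, axis=0)  # FFT resample; slow on long files
    sf.write(out_path, fixed, sr)

# Example: a take that slips 120 ms further behind over a 40-minute set had a
# clock running roughly 0.005% fast, so squeeze it back down by that much.
ratio = drift_ratio(0.0, 0.120, 40 * 60)  # roughly 1.00005
correct_drift("q3_ambient.wav", "q3_ambient_fixed.wav", ratio)

Jim's Sound Forge move changes the sample-rate header only, which shifts the 
playback speed; this sketch rewrites the samples to the new length instead, 
which gets you the same timing fix while keeping the nominal rate intact.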




On 12/18/2014 3:19 AM, Scott Chesworth wrote:
> hahaha, yeah, that does seem to be the order of the day. Ah well, it's
> a learning experience I guess. Thanks Zoom.
>
> On 12/18/14, Patrick Perdue <patrick at pdaudio.net> wrote:
>> I've done this multi-device recording gig before. Even if you carefully plot
>> out a constant sampling-rate offset, it still won't quite work out. Some of
>> these devices don't even drift at a constant rate relative to the others.
>> Potentially lots of manual nudging from time to time is in your future, even
>> if you get things lined up for a while.
>>
>>> On Dec 17, 2014, at 8:19 PM, Jim Snowbarger <snowman at SnowmanRadio.com>
>>> wrote:
>>>
>>> I would guess that, even though they will be quoted as being identical,
>>> the frame rates actually are different. The fact is that they were
>>> recorded with devices that each had its own time-base crystal, and thus
>>> its own idea of the actual length of a microsecond. One always has this
>>> problem in syncing recordings that were made on separate devices: they
>>> don't have a common time base.
>>> You might be able to eventually tame that down by taking one of them
>>> into Sound Forge and changing the sample rate, setting the sample rate
>>> only. The hard part is how much to change the sample rate, and in what
>>> direction, in order to get the two to agree. The way I do it is to
>>> position them so they start in sync, then go to the end and try to
>>> judge how much time lag has developed over how long an interval.
>>> That works OK for long clips, but not so well for short ones.
>>> But then, it is the long ones that give you enough time for the
>>> mismatch to become apparent.
>>> Once a significant mismatch has developed, you can often learn to judge
>>> the number of milliseconds involved just by the sound of the composite
>>> mix. Make one signal louder than the other, so you can tell just by
>>> listening whether the loud signal comes first or the soft one. The
>>> latter is a bit like a reverse-reverb effect. So now you know who is
>>> slow and who is fast.
>>> Next is to estimate the number of milliseconds involved. A handy place
>>> to judge that might be where there is a sharp edge, like a wood block
>>> hit or other sharp percussive sound; listen to how the sharp edge is
>>> smeared by the two competing signals. If you guess right, then divide
>>> that estimate by the total length of the sample, and that will tell you
>>> by what percent the clocks differ. Change the sample rate by that
>>> percentage, setting the sample rate only, and see what happens. Your
>>> results will vary.
>>>
>>>
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: RWP [mailto:rwp-bounces at reaaccess.com] On Behalf Of Scott Chesworth
>>> Sent: Wednesday, December 17, 2014 6:22 PM
>>> To: Reapers Without Peepers
>>> Subject: [RWP] Time alignment of audio from multiple sources
>>>
>>> Hey folks,
>>>
>>> I'm sat here mixing a multitracked live recording. There are a bunch of
>>> close mics and a couple of DI sources that were tracked with a Zoom R16,
>>> and the band would like me to use the audio from a Zoom Q3 camera as
>>> ambient mics. But here's the thing: I'm having no luck at all getting
>>> the audio from the Q3 synced with the audio from the R16, and I've not
>>> struggled with this task before, so I thought I'd ask whether anyone
>>> here had a decent workflow for getting the job done. At the moment I'm
>>> just nudging in small increments and listening. I can get the alignment
>>> so that it sounds fine on one part of a song, but then skipping forward
>>> a minute or so I'm hearing flamming on pronounced drum fills etc. It's
>>> almost like the frame rates of the two sources are different or
>>> something.
>>>
>>> Any thoughts or advice?
>>>
>>> Scott
>>>




