First let me state what I want to know, then I'll explain why I want to know it.

Cubase has a track delay in milliseconds that you can use to sync up audio, or to move it precisely into place. The smallest increment you can move a track by is a hundredth of a millisecond. Now, if my audio track is sampled at 96 kHz, there is one sample every 0.0104166... ms, which means you cannot move a track by exactly one sample length, only by a little less than that (0.01 ms works out to 0.96 samples). The more hundredths of a millisecond you shift a track, the further its samples drift away from the sample grid of another track that was previously in sync with it.

Similarly, a delay unit on a multi-fx processor lists its delay times in milliseconds, and just as above, the sample rate doesn't divide neatly into 1 ms intervals. With a multi-fx unit whose two outputs are synchronized to a common clock (like a stereo signal over S/PDIF), how can that be? Are the neat 0.01 ms or 1 ms increments just a guideline, with the program or multi-fx unit getting as close to the stated increment as possible while keeping every sample locked to the sample-rate clock? Or can Cubase actually move audio by the increments it states, so that the samples of two different audio tracks can genuinely be slightly out of sync?

OK, so here's why I'm asking. I have a Pod HD and tend to find the onboard cabs a little lacking, especially for a thick yet crisp metal tone. I like to use dual-amp tones, but with the same amp and roughly the same settings on each side; I just use different cab/mic combinations for each of the two available channels. That way I can combine a cab with a good low-end response with one that has a good high-end response.

I noticed some combinations sounded dull and muffled. I then noticed that adding an EQ with completely neutral settings to just one of the channels could improve the tone. My theory on why this works is that the mic distances modeled in the cab/mic selections are slightly different, causing a comb-filter effect. The EQ applies a very small delay to the signal, bringing the two channels back into alignment (maybe not completely, but closer to optimal). All of a sudden, the bright highs are back in the tone.

I want a more precise way of handling this. I don't want to have to stick neutral EQs in my patches, eating up precious DSP, and it's hard to tell whether this could work even better with a more precise means of time-shifting one of the channels. So I panned each channel hard left/right and recorded the channels as separate tracks in Cubase, hoping to identify the exact amount of time-shifting needed for the best possible phase correction. But when I tried Cubase's 0.01 ms increments, every change made the tone worse - nothing remotely resembling the effect of using an EQ in the Pod.

So I'm wondering why an EQ effect in the Pod works, while Cubase's time-shifting can't reproduce it. Either my whole theory about the channels being out of sync is wrong, or the delay from the EQ is less than 0.01 ms, or there's some mismatch between the 0.01 ms increments and the actual sample length. Any insight into this is much appreciated.
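
For concreteness, here's the arithmetic I'm working from. This is just a minimal sketch of my own (plain Python, nothing taken from Cubase's docs), showing how the 0.01 ms steps line up against the 96 kHz sample grid:

```python
# Sample period at 96 kHz, in milliseconds.
sample_rate_hz = 96_000
sample_period_ms = 1000 / sample_rate_hz  # 0.0104166... ms

# Cubase's smallest track-delay step, in milliseconds.
step_ms = 0.01

# Each 0.01 ms step corresponds to 0.96 samples, so the
# fractional (sub-sample) part of the shift changes with each
# step and only lands back on a whole sample at 0.25 ms (24 samples).
for n in range(1, 6):
    delay_samples = n * step_ms / sample_period_ms
    print(f"{n * step_ms:.2f} ms = {delay_samples:.2f} samples")
```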
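
And in case it helps, here's roughly how I'd try to measure the actual offset between the two recorded channels by cross-correlation. It's only a sketch: it assumes Python with NumPy/SciPy, that each channel was exported as a mono WAV, and the filenames are placeholders for my own hard-panned exports from Cubase:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

# Load the two hard-panned channels exported from Cubase.
# (Filenames are placeholders for my own exports.)
rate_l, left = wavfile.read("pod_left.wav")
rate_r, right = wavfile.read("pod_right.wav")
assert rate_l == rate_r  # both tracks come off the same 96 kHz clock

left = left.astype(np.float64)
right = right.astype(np.float64)

# Cross-correlate the channels: the lag where the correlation
# peaks is the shift (in whole samples) that best aligns them.
corr = correlate(left, right, mode="full")
lags = np.arange(-(len(right) - 1), len(left))
best_lag = lags[np.argmax(corr)]

print(f"best alignment: {best_lag} samples "
      f"({1000 * best_lag / rate_l:.5f} ms)")
```

Note that this only resolves the offset to whole samples; if the true offset is a fraction of a sample (which is what I suspect the EQ's phase shift is compensating for), the correlation peak would sit between integer lags, and something like parabolic interpolation around the peak would be needed to estimate it.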