Stability and latency


Ever since audio production found its way inside the computer, new problems of stability and latency have arisen.

 

Pre-computer digital audio gear introduced the idea of a delay through a device, which hadn't usually been the case with analogue equipment.  This was an inevitable consequence of sampling the audio and passing the samples through multiple layers of buffering during conversion, processing and interfacing operations.  However, the 'latency' (buffer delay) was generally quite short and didn't usually cause problems, even in delay-sensitive applications such as live sound or over-dubbing.  Reliable operation was generally guaranteed, since the digital devices were essentially 'sausage machines', performing nothing but the same limited series of operations repeatedly.

 

When general-purpose computers began to be used for audio production, problems with latency and stability suddenly had to be addressed.  The reason is that computers are always busy doing things other than processing audio, even in situations where the operator is only interested in performing that dedicated task.  Because of this, the computer generally accumulates a large buffer of incoming audio samples, which are then processed whilst a new buffer is being collected.  Even though the required processing can (hopefully) be accomplished faster than real-time (i.e. the sample processing rate is faster than the sample rate), there is always the possibility that the computer may be called upon to interrupt its processing of the audio in order to deal with some other essential routine task, such as maintaining screen graphics, moving data on and off disc, servicing other programs etc.  In non-optimized systems, tasks such as collecting emails, virus-checking and countless low-importance system operations can interrupt audio processing.  Without the accumulation of sample buffers, any interruption taking longer than about one sample period (1/fs) would cause incoming audio samples to be missed, resulting in disruption of the audio signal, and nearly every kind of interruption is long enough to do this.  However, with a large enough buffer, the interruptions don't cause audio to be disrupted, so long as the computer has enough time available during the buffer period to process the entire buffer.  This problem doesn't only apply to incoming samples: audio outputs from the computer must likewise be buffered, so that a continuous output stream can be maintained even when the processor is called away for a while.
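As a rough illustration of the timing constraint described above, the following sketch models double-buffered audio input.  The function and the figures are purely illustrative and are not taken from any actual driver code:

    # Illustrative model of double-buffered audio input (not actual driver code).
    # While the interface fills one buffer with new samples, the computer must
    # finish processing the previous buffer - and ride out any interruptions -
    # before the new buffer is full.

    def buffer_survives(buffer_frames, sample_rate_hz,
                        processing_time_s, interruption_time_s):
        """True if one buffer period absorbs both the audio processing and the
        interruptions, i.e. no incoming samples are missed."""
        buffer_period_s = buffer_frames / sample_rate_hz
        return processing_time_s + interruption_time_s <= buffer_period_s

    # With no buffering (one sample period at 48 kHz is about 21 microseconds),
    # even a 2 ms interruption causes a glitch:
    print(buffer_survives(1, 48000, 0.000005, 0.002))      # False

    # A 1024-sample buffer (about 21 ms) rides out the same interruption,
    # provided the processing itself is comfortably faster than real time:
    print(buffer_survives(1024, 48000, 0.010, 0.002))      # True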

 

Why is this a problem?  First of all, the amount of latency required for a particular computer, with a particular audio processing and non-audio workload, not to suffer audio disruptions can be problematically large.  This is particularly the case in live sound and over-dubbing situations, where the delay between the computer's input and output has to be essentially imperceptible.  That is often difficult or impossible to achieve unless the computer has a powerful processor, plenty of memory, an operating system workload heavily optimized for audio and an efficiently written audio processing program, and is not asked to handle too many audio channels, too much audio processing complexity or too high a sample rate.  The operator merely has to make sure that all these conditions are met, and all will be well!
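To put some illustrative numbers on this (the figures below are examples only, not Atlas specifications): each buffer delays the signal by its length in samples divided by the sample rate, and the input and output buffers both add to the round trip through the computer.

    # Illustrative latency arithmetic (example figures, not Atlas specifications).
    def buffer_latency_ms(buffer_frames, sample_rate_hz):
        """One-way delay contributed by a single buffer, in milliseconds."""
        return 1000.0 * buffer_frames / sample_rate_hz

    # A small 64-sample buffer at 96 kHz adds well under a millisecond:
    print(buffer_latency_ms(64, 96000))     # ~0.67 ms

    # A 'safe' 2048-sample buffer at 44.1 kHz adds about 46 ms, and since both
    # the input and output paths are buffered, the round trip through the
    # computer is roughly double that - an obvious delay when over-dubbing.
    print(buffer_latency_ms(2048, 44100))   # ~46.4 ms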

 

But how do you do that?  Even if we worry only about the computer and operating system themselves, the duration and frequency of interruptions are very non-deterministic: something can happen very infrequently which causes a huge interruption.  This might not be a problem: you can always run that track again (assuming you noticed the glitch) - but what if you're recording an important one-off live event?  Even worse, the onset of trouble is greatly affected by audio factors such as the number of tracks, the sample rate, how many EQs are in use and so on.  This makes the onset of instability even harder to predict reliably.

 

On the other hand, situations where latency is critical are relatively few, so it is normally OK to operate with generous buffers - as in the live recording example.

 

In the case of Atlas, problems of latency and stability are eased by a couple of useful features:

 

First of all, the operator can control the buffer delays within the Mac and Windows drivers directly, irrespective of what buffering is employed by the user's particular audio software.  It is generally recommended that these buffer delays are set long, in order to provide the best stability.  However, for the user with a powerful, tightly-optimized setup, contained audio processing tasks and a need for low latency, the buffer delays can be minimized.  For more information, see the Unit settings section.
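Note that this driver-level buffer setting is separate from whatever block size the audio application itself requests from the host audio API.  Purely as an illustration of that application-side request (using the third-party Python 'sounddevice' library, which is not part of the Atlas software), a low-latency passthrough stream might be opened like this:

    # Hypothetical illustration of an application requesting small blocks and
    # low latency from the host audio API.  The Atlas driver buffer is set
    # separately, in the control panel's Unit settings.
    import sounddevice as sd

    def passthrough(indata, outdata, frames, time, status):
        if status:
            print(status)       # report any over- or under-runs (glitches)
        outdata[:] = indata     # pass the input block straight to the output

    # 'blocksize' and 'latency' are only requests to the host API and driver;
    # the latency actually achieved also depends on the driver buffer setting.
    with sd.Stream(samplerate=48000, blocksize=64, channels=2,
                   latency='low', callback=passthrough):
        sd.sleep(5000)          # run the passthrough for five seconds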

 

For foldback and over-dubbing situations, all of Atlas's outputs (analogue 1-4, S/PDIF, ADAT, DO, and headphone outputs) have a comprehensive mixer capability which can mix any of the unit's inputs with each output's computer feed in order to build a dedicated monitor mix with extremely low latency.  Incoming audio to the mix doesn't have to go in and out of the computer at all - the mix is handled within the Atlas hardware itself.  For more information, see the Outputs tab and Mixer tabs sections.
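Conceptually, each output's monitor mix is just a weighted sum of the selected inputs and that output's computer feed, and in Atlas this sum is computed inside the unit rather than in the computer.  The sketch below is only a conceptual illustration of that sum - the function, gain names and values are invented for the example and are not Atlas controls:

    import numpy as np

    # Conceptual illustration of a per-output monitor mix (invented gains, not
    # Atlas controls).  In Atlas the equivalent sum runs in the unit's own
    # hardware, so the live inputs never incur the computer's buffer latency.
    def monitor_mix(computer_feed, live_inputs, input_gains, feed_gain=1.0):
        """Weighted sum of an output's computer feed and the selected inputs."""
        mix = feed_gain * computer_feed
        for signal, gain in zip(live_inputs, input_gains):
            mix = mix + gain * signal
        return mix

    # Example: one block of playback from the computer plus two live inputs.
    daw_return = np.zeros(64)                # computer feed for this output
    vocal_mic  = np.random.randn(64) * 0.1   # live input 1
    guitar_di  = np.random.randn(64) * 0.1   # live input 2
    out = monitor_mix(daw_return, [vocal_mic, guitar_di], input_gains=[0.7, 0.5])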