We have talked about latency on the blog before; there is an excellent article by community member David Finnamore: ‘Tracking Latencies: Why Hardware Buffer Settings Of 256 Or Less Don’t Have To Be A Major Problem’.
However, it’s an inescapable fact that while modern computers are blisteringly fast, it still takes time for a computer to do its work: if you put a sound into a computer, it will take a certain amount of time to come out again. The time this takes is referred to as the system’s latency (i.e. “lateness”). A really nice visual example of this is the noticeable lag you see when using a smartphone’s camera. You move, and the camera catches up; the difference between the two is the camera’s latency. Just as a camera with a smartphone’s latency would be unusable for filming sport but acceptable for a talking-head shot, high latency in audio systems is acceptable under some circumstances but less so in others.
Whether recording or playing back, your computer has to calculate the output of your system correctly and at the right time. If it can’t keep up, the audio will be interrupted, resulting in missing samples - at best a click, at worst an unstable, crash-prone system. To give the system the time it needs to stay ahead of the calculations it is being asked to perform, it uses a buffer - a defined block of audio that gives it a window of time in which to calculate the output.
Relationship between buffer size and CPU
To put it another way, if I read out a list of simple sums to you and give you up to 10 seconds to answer each one before the next sum, you would probably keep up. If I were to give you only two seconds per sum, you would eventually (or in my case, immediately) run out of time before the next sum arrived. What is happening here is that the CPU (your brain) is processing the audio (the sums) with a buffer of 10 seconds and 2 seconds respectively. Consider which example forces the brain to work hardest and another interesting thing becomes apparent: if your buffer is too short, the system will fail to keep up; if your buffer is long, the system will keep up but the latency will increase. Therefore as latency decreases, CPU load increases, and vice versa. To put it another way, a high (long) buffer gives your computer “free” CPU power at the expense of latency.
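The trade-off above can be put into numbers. One buffer’s worth of latency is simply the buffer size in samples divided by the sample rate; the buffer sizes and the 48 kHz rate below are typical values offered by audio interfaces, used here purely for illustration:

```python
# Sketch: one-way latency added by a single hardware buffer.
# latency (ms) = buffer size (samples) / sample rate (Hz) * 1000
# The buffer sizes and the 48 kHz rate are illustrative, not Pro Tools specifics.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time the system has to fill one buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

for buffer_samples in (64, 128, 256, 512, 1024):
    print(f"{buffer_samples:5d} samples at 48 kHz -> "
          f"{buffer_latency_ms(buffer_samples, 48000):.2f} ms")
```

A 64-sample buffer gives the CPU just over a millisecond to compute each block, which is why small buffers feel responsive but work the processor hard; at 1024 samples the CPU has more than 20 milliseconds per block to spare.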
Playback engine settings and when to use high or low buffer
While low latency is desirable, it costs - literally. A fast computer is expensive, and latency only affects a system when the lag between input and output is noticeable. If you are recording audio into a system and monitoring that system’s output, a significant lag is very noticeable and very distracting. Similarly, if I am playing a software instrument and the output is late, it is very off-putting; if I have already let go of a key before I hear the note I played, I can’t realistically play in time. However, if I am mixing, a small delay when I press play isn’t very important.
In a native system it is necessary to manage your buffer settings: work with a low buffer while tracking, to reduce latency at the point in the project where it has the greatest impact, and increase the buffer size at the mixing stage to minimise CPU load, when latency is less important but all the available CPU power is needed for processing and plugins. The buffer size can be adjusted in the Playback Engine dialogue, which is found under the Setup menu.
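To see why switching the buffer between tracking and mixing is worth the trouble, here is a rough estimate of the round-trip delay you hear when monitoring through a native system. The model of one buffer of delay on input and one on output, plus a fixed converter time, is a simplification, and the 1.5 ms converter figure is purely illustrative:

```python
# Sketch: estimated round-trip monitoring latency in a native system.
# Assumes one buffer of delay on the way in and one on the way out, plus a
# fixed AD/DA converter time. Real systems vary; the 1.5 ms figure is
# illustrative, not a measured value.

def round_trip_latency_ms(buffer_samples: int,
                          sample_rate_hz: int,
                          converter_ms: float = 1.5) -> float:
    buffering = 2 * buffer_samples / sample_rate_hz * 1000
    return buffering + converter_ms

print(f"Tracking at   64 samples: {round_trip_latency_ms(64, 48000):.1f} ms")
print(f"Mixing at   1024 samples: {round_trip_latency_ms(1024, 48000):.1f} ms")
```

Around 4 ms is short enough that most players won’t notice the delay, while 40-plus milliseconds is clearly audible - which is exactly why you track with a small buffer and mix with a large one.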
Role of DSP systems in latency
All these compromises caused by latency, while an inevitable consequence of using computers, are something of a backward step for audio: latency would have been meaningless to an engineer in the ’70s but is one of the biggest technical issues facing engineers today. The professional solution is a DSP-based system using dedicated (and expensive) hardware. Something like an HDX system eliminates these problems almost completely, and if you are recording a group of musicians simultaneously then this approach offers significant benefits. However, the right system for you is the one that matches your needs, and for most users a native system, with the compromises its inherent latency brings, is the most appropriate.
This graphic neatly illustrates how DSP systems offer solutions to users for whom cost is less important than deterministic (i.e. consistent) performance at low latencies.
Other approaches in native systems
There are several ways to minimise the effect of latency when tracking. This is a big subject, but in brief you can take the expensive/professional route and use a DSP system such as HDX. Latency then becomes largely irrelevant, but the cost of HDX and HD Native systems makes them unsuitable for many users. There is a middle ground between wholly native systems and HD systems in the form of hardware-accelerated systems such as the UA Apollo, but even in a native system there are ways to mitigate the effects of latency when tracking. These will be investigated in another Pro Tools Fundamentals coming soon.