Getting your audio setup to work properly is the first step toward audio recording bliss. Studio One guru Paul Cecchetti walks us through configuring your audio device so you'll have more time for what's important: making music!
The Audio Setup dialogue is where you’ll configure your audio hardware. Here you can set options for sound card, sample rate, bit depth, multicore processing and many other useful settings.
Here you will choose which sound card you want to use with Studio One. You may use either an internal sound driver, or an external audio interface if you have one. In either case, you'll need to configure the device using the software associated with it.
In this example, I'm using the ASIO4ALL driver on a Windows machine. The ASIO4ALL dialogue allows you to adjust input/output devices, buffer size, latency and other options.
If you are using an external audio interface, you will need to install drivers from your device manufacturer's website. After installing, the device should appear here, and you will be able to configure it via its dedicated control panel.
Block size is another name for buffer size: the delay (in samples) between input and output. You can set the block size in the audio device control panel. Lower block sizes mean less delay between audio coming in and audio going out. If you are experiencing an audible delay (latency) in your monitors, it's probably because your block size is set too high. Lower block sizes take more CPU power, but if CPU isn't an issue, it's worth recording at as low a block size as possible.
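The relationship between block size and delay is simple arithmetic: samples divided by samples-per-second. A minimal sketch (the function name is ours, not part of Studio One):

```python
def latency_ms(block_size_samples: int, sample_rate_hz: int) -> float:
    """One-way buffer latency in milliseconds: samples / (samples per second) * 1000."""
    return block_size_samples / sample_rate_hz * 1000

# A 256-sample block at 44,100 Hz adds under 6 ms of delay;
# a 1024-sample block adds over 23 ms, which may be audible while tracking.
print(round(latency_ms(256, 44_100), 1))   # → 5.8
print(round(latency_ms(1024, 44_100), 1))  # → 23.2
```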
This parameter refers to how accurately Studio One computes and generates audio signals internally.
We know that sound is a form of energy which travels as a wave. When we record audio into a computer, this audio has to be converted to and stored in a digital format. This process is called Analogue-to-Digital conversion (ADC).
Audio, being a natural phenomenon, exists as a continuous stream of energy. Digital systems cannot process this type of information, so it must convert it to a digital format. The computer does this by taking samples of the sound wave at regular intervals (typically 44,100 times a second). Thus, the information is stored as a large quantity of samples, which, when played one after the other, appear to re-create the original wave (more or less faithfully).
Digital systems, in their recreation of the original signal, must round each value up or down to the nearest unit (a consequence of the computer requiring discrete values rather than continuous streams of information). This process of rounding and approximation is known as quantization, and it introduces quantization errors which distort the recreation of the original sound wave.
The maximum quantization error is constant, and its value expressed in dB relative to 0 dB FS is:
e = 20 * log10(1 / 2^n)

32 bit: -156 dB
64 bit: -331 dB
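As a quick sanity check, the formula can be evaluated for any bit count. This sketch reproduces the familiar fixed-point figures for 16 and 24 bit audio (the floating point figures quoted above reflect each format's effective precision rather than its raw container size, so they don't follow from plugging in n = 32 or n = 64 directly):

```python
import math

def max_quantization_error_db(n_bits: int) -> float:
    # e = 20 * log10(1 / 2^n), in dB relative to 0 dB FS
    return 20 * math.log10(1 / 2 ** n_bits)

print(round(max_quantization_error_db(16), 1))  # → -96.3
print(round(max_quantization_error_db(24), 1))  # → -144.5
```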
Studio One supports 32 bit and 64 bit quantization.
As we have seen, the quantization noise (error) introduced for 32 bit float is -156 dB with respect to each sample value. This is well outside our hearing range, and since it is also relative to the signal level, we cannot hear it. Why then, would we want to use higher precision values?
With 64 bit floating point numbers the quantization error is -331 dB relative to sample value, which gives us really huge headroom. With 64 bit process precision, we can do a much larger number of computations (processes), while keeping the accumulated error well out of hearing range.
32 bit floating point allows a sufficiently large number of computations so that quantization noise is rarely audible, but in cases where the audio is being heavily processed, it is possible to introduce audible noise in 32 bit mode. So yes, in some cases 64 bit processing can make an audible difference, but only where a lot of processing is being done, or with audio where noise or distortion may be more noticeable. In most cases you should not expect it to be easily noticeable, if at all.
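The accumulation argument can be made concrete with a small sketch that simulates single-precision rounding using the standard library and runs a long chain of tiny gain changes at both precisions. The drift between the two results is accumulated quantization error (the constants here are illustrative, not anything from Studio One):

```python
import struct

def to_float32(x: float) -> float:
    """Round a 64 bit Python float to the nearest 32 bit float value."""
    return struct.unpack("f", struct.pack("f", x))[0]

sig32 = to_float32(0.1)   # single-precision signal path
sig64 = 0.1               # double-precision signal path

# 10,000 tiny gain changes stand in for a heavy processing chain
for _ in range(10_000):
    sig32 = to_float32(sig32 * to_float32(1.000001))
    sig64 *= 1.000001

drift = abs(sig32 - sig64)
print(drift > 0)  # → True: the 32 bit path has accumulated measurable rounding error
```

The drift is tiny in absolute terms, which matches the point above: the difference only becomes relevant under very heavy processing.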
Studio One supports multicore processing, and can therefore take full advantage of powerful multi-core processors such as the Intel Core i-series. It is usually a good idea to enable multi-processing, as it will improve the program's responsiveness, speed up render times, and offer numerous other performance benefits.
However, if you are multitasking with other applications alongside Studio One (not recommended), there will be less processing power available for them, which may cause conflicts and system instability. Some plugins (such as NI Kontakt) also support multicore processing in their own right. If you are using a plugin that offers multicore processing, it is recommended that you disable it in the plugin and enable it in the S1 options pane, as this may avoid conflicts and other issues.
Input / Output Latency
Latency is the delay between input and output, and it is the cause of many a ruined recording session due to headphone delay. Studio One will translate the block size to latency in milliseconds, so you can have an accurate, meaningful idea of how much latency there will be in your session. There will always be a certain minimum amount of latency, but with the right settings, it can be managed to the point where it is virtually unnoticeable. For tracking purposes:
· 10ms or less is ideal
· 15ms is acceptable
· 30ms or more is probably going to cause problems
· 50ms or more is unacceptable
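Turning those guidelines around, you can work out the largest block size that keeps one-way latency at or under each target for a given sample rate (a sketch; the function name is ours):

```python
def max_block_size(target_latency_ms: int, sample_rate_hz: int) -> int:
    """Largest block size (in samples) with one-way latency at or under the target.

    Integer arithmetic keeps the result exact: ms * samples/sec // 1000.
    """
    return target_latency_ms * sample_rate_hz // 1000

for target_ms in (10, 15, 30):
    print(f"{target_ms} ms -> {max_block_size(target_ms, 44_100)} samples")
# → 10 ms -> 441 samples
# → 15 ms -> 661 samples
# → 30 ms -> 1323 samples
```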
Lower latency takes more computation power, and in large sessions, low values may cause glitching, artifacts and other errors.
Sample rate is the number of samples of audio carried per second, measured in Hz or kHz. The CD-quality standard is 44,100 Hz (samples per second); the DVD-quality standard is 48,000 Hz.
When we record audio into a computer, it has to be converted to and stored in a digital format via Analogue-to-Digital conversion. As described above, the computer does this by sampling the sound wave at regular intervals (typically 44,100 times a second) and storing the result as a long sequence of discrete values which, played one after the other, re-create the original wave (more or less faithfully).
The more samples taken per second, the closer the system comes to a faithful representation of the audio. Thus, higher sample rates will usually mean a digital system stores audio more accurately.
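The sampling process itself is easy to sketch: take amplitude readings of a wave at regular time intervals. This is illustrative code, not how Studio One is implemented:

```python
import math

def sample_sine(freq_hz: float, sample_rate_hz: int, n_samples: int) -> list:
    """Read the amplitude of a sine wave at regular intervals (1/sample_rate seconds apart)."""
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)
            for n in range(n_samples)]

# A 1 kHz tone at 44,100 Hz: about 44 samples describe each cycle,
# so the wave is re-created quite faithfully on playback.
tone = sample_sine(1000, 44_100, 44_100)  # one second of audio
print(len(tone))  # → 44100
```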
In digital audio, bit depth is the number of bits of information in each sample, and it directly corresponds to the resolution of each sample.
Bit depth is only meaningful in reference to a PCM (Pulse Code Modulation) digital signal. Non-PCM formats, such as lossy compression formats like MP3, AAC and Vorbis, do not have associated bit depths (these formats use bitrate instead: the number of bits per second in the encoded file).
Higher bit depth results in a higher signal-to-noise ratio and a wider dynamic range (the difference in level between the loudest and quietest sounds). Specifically, bit depth determines the number of discrete amplitude values a sample may take; thus, a higher bit depth allows for a more dynamically nuanced digital representation of the original sound.
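The number of available amplitude steps doubles with every extra bit, so the gains add up quickly (a quick illustration):

```python
def amplitude_levels(bit_depth: int) -> int:
    """Discrete amplitude values per sample: 2 to the power of the bit depth."""
    return 2 ** bit_depth

print(amplitude_levels(16))  # → 65536 levels
print(amplitude_levels(24))  # → 16777216 levels
```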
However, 32 bit audio is a third bigger than its 24 bit counterpart, and 100% bigger than 16 bit audio, so recording at 32 bit will take up significantly more hard drive space.
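The storage cost is straightforward to check with a little arithmetic, assuming uncompressed stereo PCM (the function and figures are ours, for illustration):

```python
def pcm_bytes(seconds: float, sample_rate_hz: int, bit_depth: int,
              channels: int = 2) -> int:
    """Raw PCM size in bytes: duration * samples/sec * bytes per sample * channels."""
    return int(seconds * sample_rate_hz * (bit_depth // 8) * channels)

minute_16 = pcm_bytes(60, 44_100, 16)  # about 10.6 MB per stereo minute
minute_24 = pcm_bytes(60, 44_100, 24)
minute_32 = pcm_bytes(60, 44_100, 32)
print(minute_32 / minute_24)  # ≈ 1.33: a third bigger than 24 bit
print(minute_32 / minute_16)  # → 2.0: twice the size of 16 bit
```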