Have you ever wondered how, in the days of tape, long before DAWs, audio engineers could do seemingly simple things like reliably locate specific points in a song? Or even automate an entire mix? How about spotting a sound effect in post-production, or working on a film set to get accurate synchronisation of location audio?
We take these things for granted now, with our sophisticated non-linear editing environments offering 64 levels of 'undo', 999 memory locations and cloud collaboration. But in the mid '70s, the early days of tape and desk synchronisation, a protocol called SMPTE timecode began to be used in professional audio circles.
What Is SMPTE Timecode?
Developed by the Society of Motion Picture and Television Engineers in the late '60s, SMPTE timecode is essentially a series of numerical codes that provide integrated systems and their human users with a reliable and accurate positional reference.
It achieves this by indexing every single frame of video with a unique code or address, laid out like the diagram below:
The frame number corresponds to each individual still picture used to make up a moving image. For example, in traditional cinema you actually see 24 still images every second, which change in front of your eyes so fast that your brain can no longer process them individually, giving the illusion of smooth motion. Film is therefore said to have a frame rate of 24 frames per second (abbreviated to fps).
As each frame of video elapses, the code advances to the next available timecode position. At 24fps, the seconds portion of the code increments when the frame count rolls over from :23 back to :00.
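That rollover behaviour is easy to sketch in code. Here's a minimal Python illustration (the function name is my own, not part of any standard) of how a non-drop timecode address advances one frame at a time at 24fps:

```python
FPS = 24  # frames per second (traditional cinema, non-drop)

def next_frame(hh, mm, ss, ff, fps=FPS):
    """Return the timecode address one frame after hh:mm:ss:ff."""
    ff += 1
    if ff == fps:      # frames roll over from :23 back to :00...
        ff = 0
        ss += 1        # ...and the seconds field increments
    if ss == 60:
        ss = 0
        mm += 1
    if mm == 60:
        mm = 0
        hh = (hh + 1) % 24  # hours wrap around at midnight
    return hh, mm, ss, ff

# The last frame of a second rolls the seconds field over:
print(next_frame(0, 0, 41, 23))  # (0, 0, 42, 0)
```

Swap `fps` for 25 or 30 and the same logic covers PAL and (non-drop) NTSC counting.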
Basic synchronisation is achieved by making sure the video and accompanying audio have the same embedded code running at the same time during recording and playback. One device acts as a 'master' and generates the timecode during recording, while the other 'slave' device(s) record the generated timecode at the same time.
Then when you want to sync your audio and video you just find the matching timecode positions and ‘et voilà’ – your project should be in sync.
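In practice, 'finding the matching timecode positions' boils down to converting each start address to an absolute frame count and sliding one recording by the difference. A rough sketch, assuming 24fps non-drop timecode (the start addresses here are made up for illustration):

```python
FPS = 24  # assumed frame rate (non-drop)

def to_frames(hh, mm, ss, ff, fps=FPS):
    """Convert a hh:mm:ss:ff address to an absolute frame count."""
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

video_start = to_frames(10, 0, 0, 0)   # video starts at 10:00:00:00
audio_start = to_frames(10, 0, 1, 12)  # audio starts 1 sec 12 frames later
offset = audio_start - video_start     # slide audio back by this many frames
print(offset)  # 36
```

Digital systems do exactly this arithmetic; in the tape era the machines chased each other until their codes matched.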
Ok, ok, but that still doesn't answer how an audio engineer working in a music studio in the '70s was able to automate a mix – there's no video present, so what gives?
It's probably a good time to talk about the different ways in which timecode information is transmitted between systems. Having a handy visual readout is great for us humans, but when working with systems using entirely analogue media such as magnetic tape, how can you store this information and have it play back in unison with your audio?
That's where Linear Timecode*, or LTC, comes into play. (*sometimes referred to as 'longitudinal timecode', which means the same thing)
What Is LTC?
LTC is an encoding of SMPTE timecode data into an audio signal. This is achieved using a clever scheme called 'biphase mark code', which translates binary 0s and 1s into transitions in the audio signal that a machine can read, and vice versa.
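To make the scheme concrete, here is a small Python sketch of biphase mark coding (the helper names are my own). Each bit period begins with a level transition; a '1' adds an extra transition mid-period, while a '0' leaves the level alone:

```python
def biphase_mark_encode(bits, start_level=1):
    """Encode bits as (first_half, second_half) signal levels of +/-1."""
    level = start_level
    out = []
    for bit in bits:
        level = -level        # transition at every bit boundary
        first = level
        if bit == 1:
            level = -level    # extra mid-bit transition marks a '1'
        out.append((first, level))
    return out

def biphase_mark_decode(pairs):
    """Recover the bit stream: a mid-bit level change means '1'."""
    return [0 if first == second else 1 for first, second in pairs]

data = [0, 1, 1, 0, 1]
assert biphase_mark_decode(biphase_mark_encode(data)) == data
```

A nice property of this scheme, and one reason it suits tape, is that the decoder only looks for changes *within* a bit period, so the code still reads correctly even if the signal's polarity gets inverted somewhere in the chain.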
In the days before MIDI and sample-accurate editing, an LTC-based positional reference provided a robust, reliable and repeatable tool for audio professionals. Top mixing desks such as the SSL E series had an on-board computer which could read the LTC from the tape as it played, providing the engineer with a means to accurately set timecode marker positions. These positions could indicate the start or end of a verse, chorus or even the whole song, saving a lot of time and guesswork when rewinding or fast-forwarding the tape!
Often one of the studio assistant's or tape op's first jobs of the day, after making the coffee, was to pre-record LTC onto the final track of the 2-inch tape** – commonly referred to as 'striping the tape' – in preparation for the day's session.
**The last or edge track was used to minimise the spill or bleed that usually occurred between adjacent tracks on tape, the theory being that your most important musical elements, recorded onto tracks 1–22, would usually not be affected.
Before the advent of timecode, the process of synchronisation was a lot more archaic and labour-intensive, with systems such as ESG (shown in the video below) being used in post-production.
Coupled with the advent of VCA circuits and NECAM moving-fader technology in the mid '70s, LTC enabled the mixing desk computer to accurately record fader movements over time and then play back your 'automated' mix. This type of system was revolutionary at the time and remained in mainstream use until more modern sequencers and DAWs such as Pro Tools took over automation duties.
Listening to the LTC signal itself isn't the most pleasant experience, as it is just a square-shaped waveform operating between 960 and 2400 Hz. Don't just take my word for it though – have a listen to the sample below of a 24fps LTC track running from timecode position 09:59:50:00 to 10:00:10:00.
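Those limits aren't arbitrary. An LTC frame carries 80 bits, so the bit rate is 80 × the frame rate, and with biphase mark coding a run of 0s produces a tone at half the bit rate while a run of 1s produces one at the full bit rate. A quick back-of-the-envelope check:

```python
LTC_BITS_PER_FRAME = 80  # fixed by the SMPTE LTC frame structure

def ltc_tone_range_hz(fps):
    """Return the (all-zeros tone, all-ones tone) frequencies for LTC at fps."""
    bit_rate = LTC_BITS_PER_FRAME * fps
    return bit_rate // 2, bit_rate

print(ltc_tone_range_hz(24))  # (960, 1920)
print(ltc_tone_range_hz(30))  # (1200, 2400)
```

So the lowest tone you'll hear (all-zeros at 24fps) is 960 Hz and the highest (all-ones at 30fps) is 2400 Hz – exactly the range quoted above.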
It sure ain't pretty, but it works…
LTC is still used to this day to transmit timecode data, but it's not foolproof. It requires some pre-roll before it begins to work and can only be read while the tape or audio is actually running at speed. For video professionals, who often work slowly, scrubbing through footage one frame at a time, LTC alone does not provide all the answers, so other timecode solutions such as VITC and BITC are often used in combination.
What Is VITC?
VITC, which stands for 'Vertical Interval Timecode', is a SMPTE standard for storing timecode data in the 'vertical blanking interval' (the empty space) between two frames of video. This is useful as it can be reliably read when the tape or video is moving very slowly or even stopped. VITC, however, is unreadable when a tape is rewinding or fast-forwarding, so professional video equipment would intelligently switch between reading embedded LTC and VITC depending on how fast the tape was moving.
What Is BITC?
BITC stands for 'Burned-In Timecode' and is the human-readable form of the SMPTE standard. An overlay is inserted, or 'burned', into the original video image so that the timecode information is visible for each frame. BITC became incredibly useful as it allowed offline editing to occur on much cheaper non-broadcast equipment and formats such as VHS tape. An editor only needed to jot down the on-screen timecode locations for the beginning and end of each cut on a piece of paper. This paper edit, or EDL (edit decision list), would then be used to assemble the finished product on the more expensive VTRs, saving huge amounts of time and money.
Join me in part two of this timecode series, where we'll talk about more formats, the different timecode frame rates and why they exist.