The consolidation of MIDI CC-based automation into Region Automation has been a significant improvement in Logic since version 10.4. One often-voiced complaint, however, is that it is not possible to display and edit the MIDI data alongside multiple lanes of automation simultaneously. Here is a simple workaround.
In this video, I want to share with you what I consider to be a much underused and under-hyped tool in Logic Pro X. It works great when put to work on a parallel drum bus. I’m talking about Logic’s Phat FX. Added relatively recently, it is a multi-purpose multi-effects processor. But just because it is a multi-effects processor, that doesn’t mean you have to use all the multiple effects available all the time.
In this free tutorial, we show you how to use Logic’s exclusive solo mode. Normally, soloing a channel strip adds to the selection of what is soloed. Engage exclusive solo mode to hear only a single channel strip at a time, letting you quickly listen to individual channel strips in isolation.
Every DAW has its urban legends about how mixes sound different after being bounced down to a final stereo track. Talk of fixed-point versus floating-point summing math is usually introduced as evidence for one side of the argument or the other. Here’s how to scientifically test your bounced file against your multitrack Logic project to determine whether the differences are real.
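The standard way to run such a comparison is a null test: polarity-invert one file, sum it with the other, and anything left over is a real difference. Here is a minimal Python sketch of the idea; it synthesizes the same signal twice in place of actual bounced WAV files, so the nulling logic itself is what's illustrated:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr

# Stand-ins for a mix and its bounce. In a real test you would load
# and time-align the two WAV files; here we synthesize the same
# signal twice so the sketch is self-contained.
mix = 0.5 * np.sin(2 * np.pi * 440 * t)
bounce = mix.copy()

# Polarity-invert one signal and sum: identical audio nulls to silence.
residual = mix + (-bounce)

peak = np.max(np.abs(residual))
peak_db = 20 * np.log10(max(peak, 1e-12))  # floor avoids log10(0)
print(f"null residual peak: {peak_db:.1f} dBFS")
```

If the residual is at or near the noise floor, the bounce and the live mix are effectively identical; any audible residual points to a genuine difference (level mismatch, timing offset, or non-linear processing between the passes).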
In this video we use a mix of several UAD plug-ins to bring out the best character and tone of Logic’s Neo Soul producer Kit, as played by the Logic Pro X Drummer Curtis.
We often tend to take for granted what’s right in front of us, particularly when it comes to effects plug-ins. Love the one you’re with before straying! Logic’s venerable Stereo Delay is often overlooked, but hopefully not forgotten. In this video, I show you how I use it on a mono guitar solo to add a nice stereo thickening quality before the signal is sent to reverb.
Deciding how long a reverb tail to use in your mixes involves a lot of variables. What is the tempo of the song? How busy are the parts? Are there a lot of open spaces between phrases, or are they densely packed together? Do you want to convey a sense of an intimate, closer space, or a larger, more distant space? These are just some of the considerations that play an important role in communicating the intended emotion of the music. In this video I use Logic’s ChromaVerb with tempo-synced decay and pre-delay values to create a lush, enveloping reverb on a vocal-focused pop ballad.
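The arithmetic behind tempo-synced values is simple: one beat (a quarter note) lasts 60,000 / BPM milliseconds, and other note values scale from there. A quick sketch, with the function name chosen just for illustration:

```python
def note_ms(bpm: float, note: float = 1 / 4) -> float:
    """Length in milliseconds of a note value at a given tempo.
    `note` is a fraction of a whole note (1/4 = quarter note)."""
    quarter_ms = 60000.0 / bpm            # one beat (quarter note) in ms
    return quarter_ms * (note / (1 / 4))  # scale to the requested value

# At 120 BPM: 1/4 = 500.0 ms, 1/8 = 250.0 ms,
# 1/16 = 125.0 ms, dotted 1/8 = 375.0 ms
bpm = 120
for name, frac in [("1/4", 1 / 4), ("1/8", 1 / 8),
                   ("1/16", 1 / 16), ("dotted 1/8", 3 / 16)]:
    print(f"{name:>11}: {note_ms(bpm, frac):.1f} ms")
```

Typing one of these values into a pre-delay or decay field keeps the reverb moving in time with the track rather than smearing across the beat.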
One of the endearing and ubiquitous qualities of Rhodes sounds is the ability to use the tremolo knob to pan the sound from side to side. We’ve heard it on a million records and love it. It creates a nice wide moving stereo spatial effect that adds a sheen of polish and sophistication to the sound. For an interesting variation, why not modulate the reverb’s position in the stereo field instead of the source?
I love food analogies, and I think they are a fun and effective way to shine some light on how we record and mix today, or at least to change the angle of the light we see it in. In particular, there are two food trends we see a lot of today, both in the restaurant scene and in home cooking media, that I feel are worthy of some discussion. We’ll then look at how these may, in an analogous manner, be affecting our choices when it comes to music production.
With Black Friday around the corner, we are all no doubt salivating at the prospect of plentiful new inexpensive plug-ins to slap across all our Logic tracks in the hopes that a new shade of lipstick will make our music beautiful. It’s a rabbit hole we all fall down to some degree or another.
I’d like to share with you an article I recently wrote with some mixing tips that start from a different perspective. Rather than looking at each track as a raw, unfinished piece of the puzzle waiting to be painted and made up before the whole can be considered complete, top-down mixing takes a less-is-more approach. It’s a different mindset where we try to use as little additional processing as necessary, rather than as much as possible.
We see lots of articles on compression techniques, and feature lists for both hardware and software refer to a blend mode, or a control to mix the compressed signal with the dry signal, as parallel or “New York” compression. The idea of parallel compression is that blending the sustain of the compressed signal with the attack of the dry signal gives a result combining the best attributes of both. Before boxes with a dedicated parallel function, the technique was to “mult” (split) the signal and then mix the copies back together on separate console strips. It is widely accepted that this technique was first used in one or more renowned studios in New York. The problem is that this isn’t authentic New York compression; there’s a missing detail that makes all the difference.
Ever wonder how different the Logic Compressor models really are? Mastering engineer Holger Lagerfeldt decided to answer that question. His findings are now available in a succinct, yet in-depth PDF.
Most Logic users rely on the Channel EQ for basic tasks. But which third-party EQ plug-ins do you reach for when you want to add something a little different or unique to the sound of your tracks? Find out which ones our Logic Pro experts like to use when they want to add some special sauce to their mixes.
It is only very recently that the phase shifter has been rediscovered as the great effect it really is. There has been a resurgence, with new versions and re-issues from boutique as well as old-guard pedal makers. In the past year, we have also seen a number of plug-in developers offer their take on the phase shifter. In this article, we’ll take a look at a few of my favourites.
A few days ago, my colleague Russ Hughes put together an interesting Production Expert post on the importance of mono compatibility. Given the home audio revolution taking place, more and more people are listening to music on single speaker solutions that sum the left and right sides together. This raises the question of how best to monitor stereo mixes for mono compatibility within Logic Pro X.
I had a guitar player over to record recently. We paired up Amp Designer (getting a relatively clean tone, using a slightly tweaked version of the Large Blackface Clean preset) with Eventide’s UltraTap and MangledVerb, and came up with a really interesting and unusual lead guitar tone.
TrainYourEars EQ 2.0 is an EQ ear training app for Mac and PC that makes EQ ear training for mixing engineers cool and super intuitive. Use it on a regular basis, and the app will help you get better at equalization and mixing. After a successful launch, its price has been permanently reduced from €89 to €49.
EQ Ear Training for Mixing Engineers
During your music production journey, learning all there is to know about how to use equalizers, you've probably come across the concept of "EQ Sweeping".
I used to make manual EQ sweeps daily, with my eyes closed, on my first analog mixer: my trusted Mackie 24-8 24 Channel 8 Bus Analog Mixer (with its "Rude Solo Light"), which took up a quarter of the space of my bedroom studio.
Those were the days, when I learned how to use EQ, intuitively target frequencies that needed boosts and cuts, and get better at mixing over time.
What is EQ Sweeping?
The concept is simple:
Strap an EQ plugin of your choice onto a piece of audio of your choice, and start sweeping through the frequency spectrum of that particular piece of audio. You’ll go from 20 Hz to 20 kHz (the audible frequency range our ears are blessed with at birth), while trying to identify the frequency areas that give certain sounds their character.
Do this often enough, and over time, EQ sweeping can help you answer questions such as:
- Where's the sub of that kick drum sample?
- What makes this kick drum punch?
- Where's the 'click' of this kick drum?
- What makes this particular piano sound warm, and what frequency makes it sound that bright?
- My vocal sounds too harsh, what frequency do I need to cut, using what Q factor?
And on and on and on...
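Under the hood, a sweep is nothing more than a narrow, boosted bell filter whose center frequency you drag across the spectrum. As a rough illustration, here is the standard peaking-EQ biquad (the formulas come from the widely used RBJ Audio-EQ-Cookbook) evaluated at a few sweep positions:

```python
import numpy as np

def peaking_biquad(fc, gain_db, q, sr=48000):
    """Peaking-EQ biquad coefficients (RBJ Audio-EQ-Cookbook formulas)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * fc / sr
    alpha = np.sin(w0) / (2 * q)
    a0 = 1 + alpha / A
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]) / a0
    a = np.array([a0, -2 * np.cos(w0), 1 - alpha / A]) / a0
    return b, a

def gain_at(b, a, f, sr=48000):
    """Filter magnitude response in dB at frequency f."""
    zi = np.exp(-2j * np.pi * f / sr)
    num = b[0] + b[1] * zi + b[2] * zi ** 2
    den = a[0] + a[1] * zi + a[2] * zi ** 2
    return 20 * np.log10(abs(num / den))

# "Sweep" a narrow +10 dB bell across the spectrum: at each stop the
# boost sits at a different center frequency, exposing what lives there.
for fc in [100, 400, 1600, 6400]:
    b, a = peaking_biquad(fc, gain_db=10, q=4)
    print(f"bell centered at {fc:>5} Hz -> {gain_at(b, a, fc):.1f} dB there")
```

A nice property of this design is that the response at the center frequency is exactly the gain you dial in, which is why an exaggerated boost makes problem areas jump out so clearly while sweeping.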
That EQ plugin you've been using for the sweeps has probably been giving you visual feedback.
Are you training your ears or your eyes?
This is where TrainYourEars EQ 2.0 comes in.
TrainYourEars EQ 2.0 - How it Works
EQ Exercises and Methods
- Any music you'd like to use for ear training sessions can be loaded into the application. These could be your own audio files, or tracks from other music applications such as Spotify and iTunes. You can also test your skills on plain pink and white noise.
- You then choose an ear training exercise, to learn the difference between a Low Cut and a Low Shelf, or a High Cut and a High Shelf for example.
- Exercises can also be designed from scratch: you can design custom EQ quizzes to train your ears to recognize specific frequencies, EQ filter types, boosts and cuts, Q-factors, or a combination of parameters for unlimited possibilities.
- Regardless of your current EQ skills, you can always adjust your exercises to something that will keep challenging you as your EQ skills improve.
Guess the EQ
- Next, you choose an EQ ear training method. With the Guess Method, you listen to both the unprocessed and the processed audio, then guess which EQ parameters were altered. This is the classic method that has been used for over 40 years by thousands of successful mixing engineers.
Correct the EQ
- By using the new Correct Method, the app will start throwing EQ problems at you. You'll have to apply your EQ skills to make a processed audio file sound like the original again. Do this together with the app's ability to play tracks from Spotify or iTunes - and you could be fixing EQ issues on reference tracks you know best! This new Correct Method was suggested by audio mastering engineer and author of "Mastering Audio: The Art and the Science" Bob Katz, and implemented in version 2.0 of TrainYourEars EQ.
Watch the video below to see how this all works... pay special attention to the EQ example at 0:28...
Did you get it right?
Moar ear training for you, my friend...
Train Your Ears Review
Have a look at Pro Tools Expert's Dan Cooper doing a 5-minute review of TrainYourEars EQ 2.0.
- Jump to 0:40 to get to know the Audio Player's features.
- Jump to 0:50 to get to know the Live Player's features.
- At 1:08, note the AU/VST hosting capabilities of the software.
- Skip to 1:31 to see the EQ ear training exercises that come bundled with the app.
- Jump to 3:08 to see how to make your own exercises with the ear training app.
Who uses TrainYourEars?
Since its release a few years ago, the EQ ear training software has already been used by thousands of students and teachers from the world’s top music and audio institutes:
- Berklee College of Music
- School of Audio Engineering (SAE Institute)
- Sonic Arts Center
- Pulse College
- Tecnológico de Monterrey
- Sonic College
- Sound Education Netherlands
Train Your Ears EQ Pricing & Upgrading
TrainYourEars EQ 2.0 used to cost €89, but is now permanently available for only €49 (that's without the above-mentioned Black Friday discount). For those who bought version 1 in 2015, the upgrade will be free. People who bought it before 2015 can purchase the ear training app at an additional 50% discount over the already discounted price - for a total discount of 75%.
For more information, documentation and user testimonials, visit the TrainYourEars EQ 2.0 website!
EQ Ear Training Exercises
TrainYourEars makes EQ ear training even more interesting, as the app features the ability for both EQ beginners and seasoned audio professionals to develop custom exercises.
Here's a good example:
Audio mastering engineer Andrea Zanini, the founder of Owl Mastering, has put together a comprehensive set of ear training exercises for TrainYourEars that can be used in the application.
The set contains 18 exercises to get you started in the art of frequency recognition, neatly going from 20 hertz to 20 kilohertz:
- The first five of these exercises focus on Basic Perception. First you'll learn to recognize some simple frequency boosts and cuts with a wide Q factor, then gradually grasp the impact of Q width as it is narrowed down.
The Low End
- Then the low end, roughly from 20 Hz to 80 Hz, is covered in its own exercise. The next exercise focuses on 20 Hz alone (a subwoofer is needed). After that, 40 Hz to 100 Hz is covered, to help you achieve the perfect low-end balance in your mixes.
Low Mids, Mids, High Mids
125 Hz to 500 Hz, 630 Hz to 1250 Hz, and 1600 Hz to 3150 Hz are the next areas. You’ll learn about low-order harmonics, percussive attack, vocal recognition, and the higher-order harmonics of many instruments.
Lastly, 10kHz to 20kHz is covered, where the highest harmonics can be found. These are essential for a sense of air and brilliance.
In this article, Eli Krantzberg explains how to make a headphone mix for the Universal Audio Apollo audio interfaces using Logic Pro X and the UA Console 2.0 software.
Video - Creating a Headphone Mix
I’ve put together this short video to show all of these steps in action:
Groove 3 - UA Unison Preamps and Channel Strips Explained
To learn more about recording with Universal Audio’s Unison preamps and channel strips, check out these videos at groove3.com:
Logic Pro X and the UA Console Software
Using Logic Pro X with a Universal Audio Apollo audio interface involves routing Logic Pro’s output into the UA Console software.
Once you accept this change to the way you think about signal flow in Logic Pro X, the whole software monitoring/direct monitoring shift will feel more intuitive.
Step One - Signal Flow
The first step is to re-route Logic Pro’s output in the I/O Assignments tab of the Audio Preferences.
Instead of sending the audio signal of Logic Pro’s Main Output to the default Output 1-2 (Stereo Output), use the virtual pathways the Console 2.0 software provides to get the signal to the Console mixer.
By using these, we can control Logic Pro’s level within the Console mixer, just as if we were patching Logic’s output into a traditional mixing desk.
This provides control over levels when monitoring Apollo inputs through Console 2.0 at the same time as Logic Pro’s output. In other words, when we’re recording.
Step Two - Disable Software Monitoring
The next step is to disable Software Monitoring on the General tab of Logic Pro’s Audio Preferences.
Doing this means that when a track is record enabled, the audio signal will not be monitored through Logic Pro.
Since the signal is arriving at the Virtual 1/2 channel strip within the Console 2.0 software, it will be monitored from there.
A simple way to control the headphone mixes when recording is to send the signal to the Cue 1 and Cue 2 mixes from the Virtual inputs that Logic Pro’s signal arrives at.
Set that level relative to the Cue levels sent from the recording channel to create the balance between the input (your mic or guitar) and Logic Pro’s return. This is a simple but effective way to take advantage of the Apollo’s direct monitoring features.
A Better Way - Using Sends Across all Channel Strips
If you want more flexibility in adjusting the amount of Cue Mix/Headphone level sent from the individual tracks within your Logic Project, set up sends across all your Logic Channel Strips.
Send them to two unused aux buses. You can do this simply by rubber-band selecting all your project’s channel strips and creating the bus sends all at once.
While the channel strips are all still selected, I like to option-click to set them all to unity gain. This way they will start off by mirroring your main mix within Logic Pro.
Now go to the two Aux tracks that were automatically created, and route their outputs to Cue 1 L/R and Cue 2 L/R.
Now it is simply a matter of offsetting the send levels on the individual tracks to boost or attenuate their level in the headphones as desired.
When using this more comprehensive routing, make sure not to send to the Cue mixes from Console 2.0’s Virtual input channel strip at the same time.
Everybody in the US knows about the "Food Pyramid." It's a diagram that shows the different food groups and how many servings of each group people should eat to maintain a healthy diet. At the top of the pyramid are the sweets and fatty foods that should be low on everybody's consumption list, but unfortunately, that is exactly what most people are stuffing in their faces in great quantities, causing obesity and all sorts of other health issues.
The same way an unhealthy diet focuses too much on sugar and fat, a heavy focus on plugins in audio production can also create problems. The plugins nowadays seem to be the sugar and fat in our everyday "audio production diet," and I think we are paying way too much attention to them compared to other "audio food groups."
The internet chatter on forums and websites is full of all those little gadgets that promise to solve or fix everything. Along the way, many of the much more important aspects of an audio production get overlooked or ignored altogether. Especially for inexperienced users, this can create a distorted picture that makes them believe buying plugins will magically make their audio production sound great.
I tried to come up with a diagram similar to the food pyramid, but instead of food groups, I list the various components involved in a typical audio signal chain that cannot be overlooked, no matter how much you're craving those sweet little plugins.
This is the component at the beginning of the signal chain, the sound source. At least when you record live instruments, you have to pay attention to the "quality" of the signal you are about to record. If the drummer doesn't know how to tune his set or has never changed the skins on the drums, then your expensive recording equipment, including any plugins, will not make it sound good. Have you ever asked a guitar player to change his strings or try a different plectrum to achieve a different sound, instead of reaching for the EQ plugin? Learn about the instruments and how to improve or alter the sound at the source.
Although the performer is responsible for the music side, the way that music is "delivered" has a big impact on how easy or difficult your mix will be on the technical side. If you later spend 90% of your time fixing timing and tuning issues, then you are not mixing the sound, you are doing damage control. Maybe the money for the next Melodyne update would be better spent on session players who can play tight together and in tune, so that after tracking you can concentrate on the actual mix.
Room - Mic - Placement
The next step in capturing the performance involves selecting the right microphones and placing them properly in relation to what the room acoustics are adding to the signal. Instead of loading up all the EQ and compressor plugins, maybe the time is better spent being in the room with the performer for a while and first listening to the "original" source before listening in the control room. Experimenting with mics and mic placement can improve the frequency response and reduce phase issues, which might eliminate any need to fiddle around with plugins later in the "damage control" department.
This is an important aspect that easily gets overlooked. Every seasoned engineer can tell you that a well-arranged song basically mixes itself. Think about it: you can dial the EQ knobs in the low frequencies until you are blue in the face, but you won't get a proper balance if the bass drum, the bass guitar, the drone notes, and some low synth pads are all playing at the same time and fighting for the same frequency range.
If the artists' arranging skills show some deficiencies, then don't be afraid to intervene. For example, a channel strip has a mute button for a reason, and volume automation is always your friend.
Recording / Mixing
Not much to add to that topic. It goes without saying that you have to know your technical stuff. Learn, learn, learn. Keep in mind that knowing the art of recording and mixing is not the same as collecting some "recipes" that you gathered from some questionable YouTube videos that show you which buttons to press or what preset to choose on the "Plugin Du Jour" to get a great mix.
The more you start digging a little deeper and actually learn why to press a button in the first place, the more you accumulate real knowledge. The other ingredient, experience, comes over time by applying all that knowledge, making mistakes, and learning from them.
Just one tip.
Choose the best DAW, which is the one that works the best for YOU, the one you can operate in your sleep. Maybe stop chasing features you want and start using the features you have. It's that simple.
Room - Speaker - Placement
The same principles about mics, room acoustics, and mic placement also apply to the other end of the signal chain, the speakers. Know where to place them in relation to the room and be aware of any compromises and shortcomings that you are facing in your particular situation.
Let me make one exception here and advocate a specific plugin, a Frequency Analyzer. Considering that most listening environments are less than perfect (especially in "budget studios"), the visual feedback of an analyzer might help you spot any issues that your speakers are not telling you about. If you constantly adjust the same frequency ranges in your mix with your plugins, then it might not be your "signature sound," and instead, it might point out a serious problem with your room acoustics or speakers (or all of the above).
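At its core, a frequency analyzer is just a windowed FFT magnitude display. The numpy sketch below shows the principle, using a synthetic signal with a deliberately exaggerated 60 Hz component standing in for the kind of constant room problem an analyzer can reveal:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
# Synthetic stand-in for program material: a 100 Hz tone plus a
# quieter 60 Hz component playing the role of a room resonance.
signal = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

# An analyzer is essentially a windowed FFT magnitude display:
# window the audio, transform it, and plot/inspect the magnitudes.
window = np.hanning(len(signal))
spectrum = np.abs(np.fft.rfft(signal * window))
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)

peak_freq = freqs[np.argmax(spectrum)]
print(f"strongest component: {peak_freq:.0f} Hz")
```

Plotting `spectrum` against `freqs` would show both peaks plainly, which is exactly the kind of visual confirmation that can tell you whether a buildup you keep EQing out is in the music or in the room.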
By "listening" I mean "knowing" what to listen to: the awareness, the skills, and the experience that come with it. It doesn't matter how many plugins you have, or even whether you know how to use them; if you don't spot the specific problems or issues in the audio signal that need intervention, then your plugins won't do you any good.
This might be the most overlooked component in the entire signal chain: your ears. And by ears, I mean their proper anatomical functionality. The reason you boost the high frequencies in your recording or mix might be that you have a 20 dB dip in your hearing at 5 kHz, or a total loss above 10 kHz. Your client or your customers with perfect ears might just wonder why your mixes always sound so harsh.
Hearing loss is usually a concern of getting older, but exposure to loud music, noise, or other traumatic events could have damaged your hearing over time, to the point that you are compensating for it in your recordings or mixes without knowing it.
Especially troublesome is the constant in-ear bombardment from phone earbuds, which can already do quite some damage to the younger generation. So maybe, instead of buying another plugin, schedule a hearing test at your doctor's office; it might give you some assurance that your ears are OK (or not).
And finally, a word about those highly addictive, sweet (high-fructose corn syrup) plugins, the least important components, the ones that should be consumed in moderation in a well-balanced "audio production diet." It doesn't matter if the popup menu for your effects plugins scrolls down half a mile; if you don't know how to use those gadgets, then what's the point of having them (besides the bragging rights)? Maybe concentrate on just a few and know them inside out.
One more thing ... People Skills
This aspect seems to get totally lost in audio production because it has nothing to do with the technical side. However, whether you are dealing with a band, a soloist, or a voice-over session, your end of the talkback mic plays a big role in what comes back from the artist. The better you are at making artists comfortable and secure, so they can be at their best and deliver one knockout take after another, the easier it is for you to add your technical expertise and make such a performance sound great in the end. This requires people skills to establish such a creative environment, and also the experience to know how far you can push an artist to bring out their best.
Unfortunately, there is no app or plugin for that (yet).