Adventures In Audio

Creative equalization - What is it? How to do it.

by David Mellor

Prime time for creative equalization is in the mix, where all the sounds you have recorded come together and need to blend well. But there are several alternative approaches, all of which can work well, given attention, thought and care. Let's start with the scenario of a live recording of a jazz band. How should you approach that EQ-wise?

The thing about recording a band live as they play, rather than doing it instrument by instrument, is that you have the opportunity to hear what the band really sounds like. And that sound will become a benchmark for your mix. If your mix doesn't achieve the same level of quality as the live sound, then you haven't done your job properly. However, you will score points massively if your mix sounds even better than the live sound.

In this situation, the best approach is to start mixing (the band have all gone home now) with the EQ sections all either switched out or set to flat (all EQ gains at their center positions). Balance the instruments on the faders and panpots and get as good an overall sound as you can. Work hard at this stage and don't be satisfied easily. Try different options; the first balance you arrive at isn't necessarily the best. Explore the mix, play with it, get to know it.

An hour doing this is an hour well spent. When you have become thoroughly familiar with your source material you can start to think about EQ. As you listen to your best faders-and-panpots mix, you will find that some instruments are not being heard properly, yet raising the fader makes them too loud. Conversely, other instruments stick out like a sore thumb, but lowering the fader makes them go away. You just can't find the right fader positions, or the right fader positions have to be tuned to within a couple of millimeters. You need EQ!


What happens in a band is that several instruments, or groups of instruments, will compete for the same frequency space - in their fundamentals but also in their harmonics. And whichever instrument happens to be louder at any particular time will mask other instruments competing for the same frequency space. So in this case, let's say that you are having difficulty hearing the trumpets and clarinets distinctly when they are playing together. Set an EQ boost on the trumpet channel and sweep the frequency control until the trumpets stand out more prominently. Do the same for the clarinets. If you find that the same center frequency works equally well for both, skew one channel upwards in frequency and the other down. Now you have differentiated these instruments sufficiently for them not to mask each other.

As a finesse, if you have two mid-range EQ sections per channel, or have enough computer processing power to run additional plug-ins, whatever frequencies you boosted on one channel, cut on the other, and vice versa. So not only are you making the trumpets more prominent at their key frequencies, you are scooping out a 'hole' in the same frequencies on the clarinet track. This technique is sometimes known as 'complementary EQ'. It is a powerful tool.
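The boost-and-cut pairing can be sketched in a few lines of Python. This is a minimal sketch, assuming scipy and numpy are available; the peaking filter follows the well-known RBJ Audio EQ Cookbook formula, and the 2.5 kHz center frequency, 4 dB gain and Q of 1.4 are purely illustrative assumptions - in practice, as described above, you find them by sweeping and listening.

```python
# Sketch of complementary EQ: the same peaking-filter frequency is
# boosted on one channel and cut on the other.
# Filter design follows the RBJ Audio EQ Cookbook peaking formula.
import numpy as np
from scipy.signal import lfilter, freqz

def peaking_eq(f0, gain_db, q, fs):
    """Return (b, a) biquad coefficients for a peaking EQ."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
f0 = 2500  # assumed frequency where trumpets and clarinets clash
boost_b, boost_a = peaking_eq(f0, +4.0, 1.4, fs)  # trumpet channel: boost
cut_b, cut_a = peaking_eq(f0, -4.0, 1.4, fs)      # clarinet channel: cut

# Apply to audio (white noise stands in for the two recorded tracks)
rng = np.random.default_rng(0)
trumpets = rng.standard_normal(fs)
clarinets = rng.standard_normal(fs)
trumpets_eq = lfilter(boost_b, boost_a, trumpets)
clarinets_eq = lfilter(cut_b, cut_a, clarinets)
```

Running each track through its own filter lifts the trumpets exactly where the clarinets are dipped, so the two no longer fight for the same band.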

Complementary EQ

[Two screen shots demonstrating complementary EQ. Settings must always be found by experiment.]

When you are mixing a band like this - where you have either heard them play live in the studio, or it is a conventional line-up and you know what it should sound like - always apply EQ in context. This means that you do not solo any channel while you EQ; apply EQ while all of the instruments are audible. In this way, you can hear what effect the EQ has with reference to the entire mix.

Many recordings are not made with conventional band instruments, or with the musicians not playing simultaneously. In cases like this, there is no reference point. You don't know what it 'should' sound like. Rather than 'live up to' a standard, it is your responsibility to create that standard. This is more difficult, but it offers more creative opportunities too.

In this case we will assume that you applied corrective EQ during the recording process, so all the instruments and voices sound fully adequate at least. You could try a faders-and-panpots mix, as in the whole-band example above, but the result will probably be something of a jumble. Since the instruments were recorded separately, there wasn't much information to go on as to how they should blend.

In this situation, one very effective approach is to start from a 'foundation mix' - the very fewest instruments that can stand on their own and support the rest of the track. Very likely this will be the drums, bass and one 'pad' instrument - guitar or keyboard perhaps. If you can get this 'rhythm section' blending well, then everything else will hook in easily with that.

As before, you can do a faders-and-panpots mix of the foundation instruments. Set the EQ of each so that the sound is full and rich - it could be a finished mix in its own right but for the lack of vocal and color. This can be done by EQing in context, and of course applying complementary EQ - particularly in frequency areas where the kick drum and bass instrument clash. When you have all of this sounding really good, you can start adding the other components. The vocal will be next.


What you will typically find when you add the vocal to an already full-sounding track is that the vocal doesn't have a space to fit into. Once again, complementary EQ will come to our assistance. Unlike other instruments, the human voice is pretty consistent in the frequency bands in which it is strong. This stems from human evolution - we needed to communicate effectively, so the ear has evolved to be very sensitive at frequencies where speech is also strong - the range around 3 kHz or so.

Notice that we are talking harmonics here, not fundamentals. But this is the range that allows us to differentiate between the phonemes of speech - consonants and vowels alike - in both the male and female voice. So if you apply an EQ boost to a vocal at around 3 kHz, it will suddenly sound very much more present and stand out wonderfully. Of course, the next step is to apply complementary EQ to the other instruments to make a 'hole' for the vocal to 'sit' in.
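There is a neat property behind this pairing, sketched below in Python (scipy and numpy assumed; the RBJ cookbook peaking formula again, with an illustrative 3 kHz center, 3 dB gain and Q of 1.0): a mirror-image boost and cut multiply out to unity gain at the center frequency, so the overall spectral balance of the mix is roughly preserved while the vocal gains relative prominence.

```python
# Sketch: a presence boost on the vocal around 3 kHz, with the
# complementary cut 'scooping a hole' on a backing instrument.
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0, gain_db, q, fs):
    """RBJ cookbook peaking EQ biquad; returns (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
f0 = 3000                               # assumed presence region
vb, va = peaking_eq(f0, +3.0, 1.0, fs)  # vocal: presence boost
pb, pa = peaking_eq(f0, -3.0, 1.0, fs)  # backing: complementary cut

w0 = 2 * np.pi * f0 / fs
_, hv = freqz(vb, va, worN=[w0])
_, hp = freqz(pb, pa, worN=[w0])
combined = abs(hv[0]) * abs(hp[0])      # boost and cut cancel at 3 kHz
```

The design choice here is symmetry: because the gains are equal and opposite at the same frequency and Q, what the vocal gains in presence the backing gives up, rather than the whole mix getting brighter.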

As you start to add other instruments you will find that they need to be 'thinned'. Your foundation tracks are already fat - or 'phat'! - because you spent time optimizing them. The vocal is complementary EQ'd to perfection. So there is no room in the frequency space for anything else! Well, yes there is, but you can't add more 'phat' tracks to an already 'phat' mix. You need to 'thin' the new instruments so they will fit in.

Thinning can be accomplished by cutting low frequencies and often cutting high frequencies too. If an instrument isn't thin enough at this stage, you can apply a boost where it is harmonically strong. If the worst comes to the very worst, you can apply a complementary EQ to your foundation track to make a space for the new instrument to fit in. Oddly enough, in a world of 'phat', there is an amazing power in thinning things down.
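Thinning itself is simple filtering, as in this minimal Python sketch (scipy assumed). The corner frequencies - a 300 Hz high-pass and an 8 kHz low-pass - are arbitrary starting points for illustration; in practice you would tune them by ear for each instrument.

```python
# Sketch of 'thinning' a track: cut the lows with a high-pass and,
# optionally, the highs with a low-pass. Corner frequencies are
# illustrative assumptions, not prescriptions.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
# High-pass at 300 Hz removes low-end weight; low-pass at 8 kHz tames air.
hp = butter(2, 300, btype='highpass', fs=fs, output='sos')
lp = butter(2, 8000, btype='lowpass', fs=fs, output='sos')

rng = np.random.default_rng(1)
guitar = rng.standard_normal(fs)  # noise stands in for the new track
thinned = sosfilt(lp, sosfilt(hp, guitar))
```

Second-order sections (`output='sos'`) are used here because cascaded biquads stay numerically well-behaved, which matters once several of these filters pile up across a mix.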

It can't be emphasized too strongly that there is only a limited audio spectrum and everything has to fit into that. If every sound is rich in a wide range of frequencies, they will clash and mask each other. So you have to make your instruments complementary to each other in their frequency characteristics, and thin them down where necessary. Ultimately, your finished mix will sound so much bigger for that.

Equalization is covered in great detail in the Audio Masterclass Professional Course in Equalization.

Monday November 5, 2018


David Mellor

David Mellor is CEO and Course Director of Audio Masterclass. David has designed courses in audio education and training since 1986 and is the publisher and principal writer of Adventures In Audio.
