Adventures In Audio

Interesting vocal reverb effects in live performance

by David Mellor

Reverb doesn't have to be a 'set and forget' effect. Using different reverbs through a song can create wonderful sound textures that captivate the listener's ear and attention.

I don't have any reliable statistics, but I'm going to guess that out of 100 times that reverb is used, in 99 cases it doesn't change from the beginning of a song to the end. Check your own work and see if I'm wrong.

But I have a great example of how changing the reverb through the duration of a song can work really well. And, what's more, in this instance it's in live performance, namely Lithuania's entry for the Eurovision Song Contest 2018 titled When We're Old. Let's skip right to the example. Listen all the way through. Or audition it at 0:13, then 0:57, then 1:20, then you will probably want to go back and listen to the whole thing.

At the beginning there seems to be no reverb at all; maybe just a hint of auditorium atmosphere. The performer, Ieva Zasimauskaite, does a great job of controlling breath blasts and mouth noises with the mic so close, which even with today's microphone technology can still be problematic. A finely-judged reverb kicks in at 0:57, which helps propel the song along, then at 1:20 there seems to be a delay effect that further adds to the density of the reverb. I say 'seems to be' because it might be pre-recorded on the backing track. (Singers are live in this show, but instruments are pre-recorded. There may also be pre-recorded background vocals, but they must not replace or assist the live vocal.)


It's worth thinking about the live element here because things work differently from the studio. In the studio it would be possible to automate the reverb to switch it in, then change program, at appropriate points in the song. In a live performance, the changes to the reverb would also be live. In the studio we would talk about 'automating' the reverb. But here I feel that 'animating' is a more appropriate word, similar to how the word is used in video editing.

As an aside, it would technically be feasible to automate the reverb, since the backing track is fixed and the time points of the changes are known exactly. If I wanted to do this I would probably want to have a timecode track in the audio driving a MIDI sequencer providing control or program change messages to the reverb. Or... I would record the reverb into the backing track, so none of the reverb is live. That's easier, but does it break the rules?
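To make the timecode-driven idea concrete, here is a minimal sketch of the logic: a cue list mapping song time to a reverb program, and a function that builds the raw MIDI Program Change message a sequencer would send to the reverb unit. The cue times (0:57, 1:20) come from the song as described above; the program numbers and MIDI channel are purely illustrative assumptions, and a real rig would of course send these via a MIDI interface rather than just constructing the bytes.

```python
# Hypothetical cue list: (time in seconds, reverb program number).
# Cue times are from the song as described in the article;
# program numbers are assumptions for illustration only.
CUES = [
    (0.0, 0),    # song start: dry, just a hint of room ambience
    (57.0, 1),   # 0:57 - reverb kicks in
    (80.0, 2),   # 1:20 - delay effect layered on top
]

def program_for_time(seconds):
    """Return the reverb program that should be active at this timecode."""
    program = CUES[0][1]
    for cue_time, prog in CUES:
        if seconds >= cue_time:
            program = prog
    return program

def program_change_bytes(program, channel=0):
    """Raw MIDI Program Change message: status byte 0xC0 | channel, then program (0-127)."""
    return bytes([0xC0 | (channel & 0x0F), program & 0x7F])
```

In use, a sequencer chasing the backing track's timecode would call `program_for_time()` once per frame and transmit `program_change_bytes()` whenever the result changes, so the reverb switches at exactly the same moment every performance.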

One further point is that if you research further into videos of the event on YouTube, you will find one that is shot, very shakily, by a member of the audience. Guess what - the reverb is there from the beginning, and it's huge. So the live audience heard something different to the broadcast audience. Interesting.

Anyway, going back to my point, this is a great example of animating reverb. It's a useful studio technique and, as we see here, works well in live performance too.

Addendum


Since this article was written, a couple of interesting quotes have appeared in LSi magazine, from sound designer Daniel Bekerman, who was Head of Sound on this production:

"...the effects for each song are pre-programmed to follow the show timecode..."

"...backing music has to be 28 tracks - 14 will be the musical tracks, then there will be tracks with voices, tracks with click track, tracks with effects."

The full interview is in the June 2018 issue of LSi.

Saturday May 26, 2018


David Mellor

David Mellor is CEO and Course Director of Audio Masterclass. David has designed courses in audio education and training since 1986 and is the publisher and principal writer of Adventures In Audio.
