Learn How Young Guru Engineers for Jay Z: An Introduction to Audio Recording

Young Guru, Grammy-Nominated, Legendary Audio Engineer

9 Lessons (44m)
    • 1. Trailer (1:11)
    • 2. Understanding Audio Physics (3:25)
    • 3. Understanding Audio Physics (continued) (4:50)
    • 4. Choosing a Microphone (7:34)
    • 5. Setting Up Your Recording Space (3:05)
    • 6. Setting Up a Session (5:10)
    • 7. Recording Rap Vocals (5:02)
    • 8. Recording a Singer (7:44)
    • 9. Finishing the Recording (6:04)

Project Description

Record your own professional quality audio track

Understanding Audio Physics

  1. What is Sound?

    What is Sound?

    1. Sound is a displacement of air molecules: a vibration that propagates as a mechanical wave of pressure and displacement through some medium (such as air or water). Sometimes "sound" refers only to those vibrations with frequencies within the range of human hearing.
    2. A transducer is something that transforms one form of energy into another. For this class, the important transducers are the human ear drum and microphones. 
    3. When a difference in Sound Pressure Level (SPL) hits a transducer, it is converted into energy that is understood by the receiver as "sound." For the ear drum, the receiver is the brain, for a microphone, the receiver is the recording medium (magnetic tape, digital signal, etc.)
  2. What is frequency?

    What is frequency?

    1. Sound propagates as vibration waves of pressure and displacement, in air or other substances. These waves all move at the speed of sound (about 1,130 feet/second in air), but they have varying wavelengths (the distance from peak to peak, or valley to valley, on a wave). The number of wave cycles that pass a point each second is the Frequency, which we measure in Hertz (Hz).

    2. The frequency of a sound wave is the property of sound that determines tone. The longer the wavelength of a sound wave, the lower the frequency (Hz) and the lower the tone produced. The shorter the wavelength, the higher the frequency (Hz) and the higher the tone.

    3. The frequencies an ear can hear are limited to a specific range of frequencies. The audible frequency range for humans is between about 20 Hertz (Hz) and 20,000 Hz (20 kHz), though the high frequency limit typically reduces with age.
    4. Other animal species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz.
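The frequency/wavelength relationship above can be made concrete with a minimal Python sketch, using the 1,130 ft/s speed-of-sound figure from these notes (function and constant names are my own):

```python
# A minimal sketch: wavelength = speed of sound / frequency.
SPEED_OF_SOUND_FT_PER_S = 1130.0  # figure used in the lesson above

def wavelength_ft(frequency_hz):
    """Wavelength in feet for a frequency in Hz."""
    return SPEED_OF_SOUND_FT_PER_S / frequency_hz

print(wavelength_ft(20))     # 56.5 ft -- a 20 Hz wave is longer than most rooms
print(wavelength_ft(20000))  # ~0.0565 ft -- under an inch
```

This is why low frequencies are the hard ones to manage in a recording space: their wavelengths are on the scale of the room itself.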


    Frequency Descriptions

    16 – 31 Hz -- The human threshold of hearing, and the lowest pedal notes of a pipe organ.

    32 – 512 Hz -- Rhythm frequencies, where the lower and upper bass notes lie.

    512 – 2048 Hz -- Defines human speech intelligibility; gives a horn-like or tinny quality to sound.

    2048 – 8192 Hz -- Gives presence to speech, where labial and fricative sounds lie.

    8192 – 16384 Hz -- Brilliance, the sounds of bells and the ringing of cymbals and sibilance in speech.

  3. What is Dynamic Range?
    1. Midrange frequencies are the most challenging for recordists, because nearly all instruments and vocals being recorded sit in that range. 

    2. Dynamic Range is the ratio between the loudest and quietest amplitudes in the piece being recorded. Recordists need to narrow the dynamic range as much as possible in order to better bring out the midrange material. 

    3. The dynamic range of human hearing is roughly 140 dB. The dynamic range of music as normally perceived in a concert hall doesn't exceed 80 dB, and human speech is normally perceived just over a range of about 40 dB.

    4. The dynamic range you can capture also depends on the limits of the recording device (transducer), though a properly dithered recording device can record signals well below the Root Mean Square (RMS) noise amplitude (noise floor).
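The dB figures quoted above come from straightforward arithmetic; here is a short Python sketch (function names are mine):

```python
import math

def dynamic_range_db(loudest, quietest):
    """Dynamic range in dB between the loudest and quietest amplitudes."""
    return 20 * math.log10(loudest / quietest)

print(dynamic_range_db(100, 1))         # 40.0 dB -- roughly normal speech
print(dynamic_range_db(10_000_000, 1))  # 140.0 dB -- roughly human hearing
```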
  4. Understanding Acoustics
    1. Acoustics are the properties or qualities of a room or structure that affects sound waves being performed or recorded.

    2. Hard surfaces tend to reflect sound and soft surfaces tend to absorb sounds.  When deciding on the best room to record in, you must consider acoustics and how the sounds waves being produced will be affected by surfaces in the surrounding space. 

    3. Standing waves are sound waves that bounce between two or more surfaces and consequently end up distorting certain frequencies in other sound waves as they enter the transducer. For instance, the area between two flat, perfectly parallel walls resonates at certain frequencies that interfere with each other, creating an uneven frequency response.

    4. Standing waves are created when the distance between the walls is a multiple of a sound wave's wavelength, consequently reinforcing that wave.  

    5. Standing waves can be produced when waves bounce between all surfaces in a contained room (walls, floor, ceiling, additional objects). 

    6. Frequency response is how the room or equipment affects the sound.  When recording, you want to make sure the frequency response of the room is aligned with the type of sound you’re trying to record.  The deader the space the more you can control the frequency response. 

    7. Reverb is the sound that bounces off surfaces and returns to the transducer after the direct sound; reverb time is how long those reflections take to die away. The bigger the room, the longer it takes for the sound to return.

    8. Phase is the progressive relationship between two wave forms.  Phase denotes the particular point in the cycle of a waveform, measured as an angle in degrees. It is normally not an audible characteristic of a single wave (but can be when using very low-frequency waves as controls in synthesis). It is a very important factor in the interaction of one wave with another, either acoustically or electronically.

    9. Decibel is how we measure the audible loudness of a sound.  It expresses the ratio between the threshold of hearing (intensity) and the sound pressure (power) being produced at a certain moment. The human ear has a large dynamic range in audio perception. The ratio of the sound intensity that causes permanent damage during short exposure to the quietest sound that the ear can hear is greater than or equal to 1 trillion.
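The standing-wave points above can be sketched numerically: the axial resonances between a pair of parallel walls fall at multiples of c / 2L. A minimal Python sketch (names are my own):

```python
SPEED_OF_SOUND_FT_PER_S = 1130.0

def axial_mode_frequencies(wall_distance_ft, count=3):
    """First few axial standing-wave frequencies (Hz) between parallel walls
    a given distance apart: f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND_FT_PER_S / (2 * wall_distance_ft)
            for n in range(1, count + 1)]

# Walls 10 ft apart reinforce waves at 56.5 Hz and its multiples:
print(axial_mode_frequencies(10))  # [56.5, 113.0, 169.5]
```

This is why the distance between parallel surfaces matters: whenever the wall spacing is a multiple of a half-wavelength, that frequency is reinforced.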

Selecting a Microphone and Setting Up Your Space

  1. Understanding Sound Pressure Level
    1. Sound Pressure Level (SPL) is a logarithmic measure of the effective sound pressure of a sound relative to a reference value. It is measured in decibels (dB) above a standard reference level. The standard reference sound pressure in air or other gases is 20 µPa, which is usually considered the threshold of human hearing (at 1 kHz).
    2. SPL can directly determine the type of microphone you choose to use for recording.
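The SPL definition above translates directly into code; a minimal Python sketch using the 20 µPa reference from the notes (function names are mine):

```python
import math

P_REF_PA = 20e-6  # 20 µPa reference: the threshold of hearing at 1 kHz

def spl_db(pressure_pa):
    """Sound Pressure Level in dB relative to 20 µPa."""
    return 20 * math.log10(pressure_pa / P_REF_PA)

print(round(spl_db(20e-6)))  # 0  -- the threshold itself
print(round(spl_db(1.0)))    # 94 -- a pressure of 1 Pa
```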
  2. Types of Microphones
    1. Dynamic Microphone – Handles a higher SPL (sound pressure level). The higher the SPL, the louder the sound. Great for vocals, drums, or any other source that can produce a higher SPL.

    2. Condenser microphones handle a lower SPL. There are two plates, one stationary and one that moves, which allows for different configurations or polar patterns. In a condenser microphone, the diaphragm acts as one plate of a capacitor, and the vibrations produce changes in the distance between the plates. 

    3. A Tube Microphone is a capacitor microphone that uses a vacuum-tube circuit in the preamp. Tube microphones allow for a warmer sound and are typically best for singers.

  3. Polar Pattern Configurations
    1. Figure 8 - "Figure 8" or bi-directional microphones receive sound equally from both the front and back of the element. Most ribbon microphones are of this pattern. In principle they do not respond to sound pressure at all, only to the change in pressure between front and back; since sound arriving from the side reaches front and back equally there is no difference in pressure and therefore no sensitivity to sound from that direction.

    2. Omnidirectional - An omnidirectional (or nondirectional) microphone's response is generally considered to be a perfect sphere in three dimensions. In the real world, this is not the case. As with directional microphones, the polar pattern for an "omnidirectional" microphone is a function of frequency. 

    3. Unidirectional - A unidirectional microphone is sensitive to sounds from only one direction.

    4. Cardioid - The most common unidirectional microphone is a cardioid microphone, so named because the sensitivity pattern is a cardioid. The cardioid family of microphones are commonly used as vocal or speech microphones, since they are good at rejecting sounds from other directions. In three dimensions, the cardioid is shaped like an apple centred around the microphone which is the "stalk" of the apple. The cardioid response reduces pickup from the side and rear, helping to avoid feedback from the monitors.

      Since pressure gradient transducer microphones are directional, putting them very close to the sound source (at distances of a few centimeters) results in a bass boost. This is known as the proximity effect.
  4. Understanding Phantom Power
    1. Phantom power, in the context of professional audio equipment, is a method for transmitting DC electric power through microphone cables to operate microphones that contain active electronic circuitry. It is best known as a convenient power source for condenser microphones, though many active direct boxes also use it. The technique is also used in other applications where power supply and signal communication take place over the same wires.

    2. Phantom power supplies are often built into mixing desks, microphone preamplifiers and similar equipment. In addition to powering the circuitry of a microphone, traditional condenser microphones also use phantom power for polarizing the microphone's transducer element. 
  5. Eliminate Standing Waves
    1. In order to create a flat frequency response in the room you're recording in, it is important to eliminate all parallel walls. 

    2. Standing waves in rooms can cause certain resonant frequencies to either be unduly enhanced (at antinodes) or nearly disappear (at nodes). For that reason, it is always a good idea to listen to your work from a few different spots in the studio, to catch standing-wave problems.

    3. We can reduce standing waves by making sure the room has no parallel walls, and by using gobos, foam, or other materials to absorb or diffuse reflections around your recording space.

  6. Controlling Headphone Level
    1. The headphone level should be loud enough for the artist to hear the music, but quiet enough that it does not bleed into the microphone when recording.

Setting Up a Session

  1. Adjusting Levels
    1. Listen to your track to make sure you have the proper headroom for your section. “Headroom” is the amount by which the signal-handling capabilities of an audio system exceed a designated level known as the permitted maximum level (PML). Headroom can be thought of as a safety zone allowing transient audio peaks to exceed the PML without exceeding the signal capabilities of an audio system. The goal is to prevent the audio track from going into the red or “peaking”.

    2. If the track is peaking, there is no room for the artist to place their vocals on top, which can lead to a bad mix.
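In a digital session, headroom is just the dB gap between a track's loudest peak and full scale. A minimal Python sketch, assuming samples normalized to ±1.0 (function names are mine):

```python
import math

def headroom_db(peak_sample, full_scale=1.0):
    """dB of headroom between the loudest peak and digital full scale."""
    return 20 * math.log10(full_scale / peak_sample)

# A peak at half of full scale leaves about 6 dB before clipping ("the red"):
print(round(headroom_db(0.5), 1))  # 6.0
```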
  2. Setting up Auxiliary Tracks

    1. Auxiliary tracks are tracks that you can route a bus to.

    2. Inside your DAW, there is a bussing system (you can title the busses what you want, ie: Bus 1, Bus 2, Bus 3, etc.). This allows you to route multiple channels to one bus, which essentially serves as a carrier for an effect (see next section). Using the bussing system saves you DSP power (compared to applying separate effects to every single track).

    3. Create three stereo auxiliary tracks. Then assign the bus inputs for each track. Now set up your session so all of your recorded tracks will go to these busses.

  3. Adding Effects

    1. Effects are changes you can add to the track after the recording has taken place, via the bussing system from the last section. Examples of effects are reverb and delay.

    2. Why use delay or reverb? It most likely depends on the preference of the singer or performer. Depending on their methods and technique, some vocalists like to hear themselves with a certain effect because they feel it helps them pronounce notes or better sink into the feel of all the tracks.

    3. Because of our bussing system, you don't have to use effects on every track separately. You can create one effect and then bus it to the auxiliary tracks you want to be affected by it. 

    4. Try adding various effects to these auxiliary channels. I chose two reverbs and a delay. You can customize yours however you want, but try to get a feel for both and note the audible changes each effect has.
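Delay, the simpler of the two effects, can be sketched in a few lines of Python. This is a toy feedback delay over a list of samples, not any particular plug-in (function and parameter names are my own):

```python
def delay_effect(samples, delay_samples, feedback=0.5):
    """Toy feedback delay: each sample picks up a faded copy of the signal
    from delay_samples earlier, so echoes repeat and decay."""
    out = list(samples)
    for i in range(delay_samples, len(out)):
        out[i] += feedback * out[i - delay_samples]
    return out

# A single impulse produces a decaying train of echoes:
print(delay_effect([1.0, 0.0, 0.0, 0.0, 0.0], 2))
# [1.0, 0.0, 0.5, 0.0, 0.25]
```

Reverb is conceptually the same idea with many overlapping delays at different times and levels, which is why real reverb plug-ins are much more elaborate.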

  4. Naming Your Audio Channels

    1. Remember, before you even start recording, it is best practice to name each channel to help you keep everything organized.

    2. Naming a track will embed it in the raw audio file, which will also help those who may work on the recorded tracks later (like the mixing engineer or mastering engineer).

    3. I also recommend naming your effects channels as well, so they are easy to come back to and apply as you record more tracks that you want to bus effects to. This also helps differentiate them from the audio channels. 

  5. Selecting a Pre-Amp
    1. A microphone preamplifier prepares a microphone signal to be processed by other equipment. Microphone signals are often too weak to be transmitted to units such as mixing consoles and recording devices with adequate quality. Preamplifiers increase a microphone signal to line level (i.e. the level of signal strength required by such devices) by providing stable gain while preventing induced noise that would otherwise distort the signal.

    2. The output voltage on a dynamic microphone may be very low, typically in the 1 to 100 microvolt range. A microphone preamplifier increases that level by up to 70 dB, to anywhere up to 10 volts. This stronger signal is used to drive equalization circuitry within an audio mixer, to drive external audio effects, and to sum with other signals to create an audio mix for audio recording or for live sound.
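The 70 dB figure above can be checked with a short Python sketch converting dB gain to a voltage multiplier (function names are mine):

```python
def db_to_voltage_gain(gain_db):
    """Convert a preamp's dB gain setting to a voltage multiplier."""
    return 10 ** (gain_db / 20)

# 70 dB of gain multiplies voltage roughly 3162x, so a 100 microvolt
# mic signal becomes about 0.32 V:
print(round(db_to_voltage_gain(70)))              # 3162
print(round(100e-6 * db_to_voltage_gain(70), 2))  # 0.32
```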

    3. A microphone preamplifier also affects the sound quality of an audio mix. A preamplifier might load the microphone with low impedance, forcing the microphone to work harder and so change its tone quality. A preamplifier might add coloration by adding a different characteristic than the audio mixer's built-in preamplifiers. Some microphones must be used in conjunction with a preamplifier to function properly (e.g., condenser microphones). 

    4. You should choose a pre-amp that best fits the type of vocal you are recording.  The pre-amp you use for a vocalist with multiple harmonies and stacked vocals may differ from a hip-hop artist or even a live instrument.
  6. Selecting the Proper Compressor
      1. Look at the waveform levels in your sequencer and see how much variation there is. If there's a big level fluctuation, you may need to add some compression when recording, but never add more than you'll ultimately need, as you can't take it off once it's been added.

      2. Use a fast attack and a release time of around a quarter of a second, or the automatic mode if your preamp's compressor has one. If you don't have a compressor in your preamp, then record with no processing and use a software compressor when mixing. 

      3. Excessive or inappropriate compression at this stage can lead to a congested, lifeless sound that's almost impossible to fix later. It also pays to bear in mind that compression brings up the effects of the room ambience in quieter passages, so while you may not hear the room on an unprocessed recording, it may start to intrude once you start to add compression. 
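The attack and release advice above aside, the core of a compressor is a simple level mapping. A minimal Python sketch of the static gain curve for a given threshold and ratio (names and default values are my own, not those of any specific unit):

```python
def compressed_level_db(input_db, threshold_db=-20.0, ratio=4.0):
    """Static gain curve of a simple compressor: above the threshold,
    every `ratio` dB of input yields only 1 dB of output."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

print(compressed_level_db(-30.0))  # -30.0 -- below threshold, untouched
print(compressed_level_db(-8.0))   # -17.0 -- a 12 dB overshoot squeezed to 3 dB
```

The "can't take it off" warning in the notes follows directly: once the recorded waveform has been squeezed through this curve, the original level differences are gone.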

Recording Vocals

  1. Placing Your Microphone
    1. The diaphragm of the microphone should be in front of the sound source. The microphone should be at an even level, and the pop screen should be 3–5 inches from the diaphragm of the mic to eliminate any popping in the recording.

    2. Adjust the low-frequency roll-off, which will take care of floor rumble or other extraneous low-end noise that can happen when recording. The roll-off switch allows you to reduce the strength of the low-frequency signals that pops and gusts can cause.

    3. Where possible, mount the microphone on a stand. Only let the singer hold the mic if to do otherwise would compromise their musical performance. When the singer is hand-holding a mic, particularly if it's a cardioid model, make sure they keep their hand clear of the rear of the basket, as obstructing this area can change both the directional and tonal characteristics of the mic.
  2. Recording Stacked Vocals and Harmonies
    1. Depending on the vocalist as well as the nature of the song you may want to double or triple that lead vocal performance.  The amount of stacking can also change based on a particular part of the song.  Typically, you stack more on the chorus than the verse – but stacking is common on both. 

    2. There are certain tonal characteristics you can expect when you stack a vocal performance. It will sound warmer and fuller but not quite as crisp or intimate as a well mixed single lead performance. I always experiment with both. Most of the time I like the lead stacked, however that is not always the case. The cool thing is you can shape the tonal characteristics by playing with the relative volume levels of the performances you stack.

    3. If you decide to stack your lead performance be prepared to do more editing.  If you stack three lead vox then be prepared to go through all 3 of those takes and make sure there aren’t any pops, clicks, or other noise artifacts that will deter from the performance.  You will also need to decide which take is the best and make sure the other two mesh well with it. 

    4. When recording harmonies or stacked vocals, be sure to give the artist a pre-roll or review of their lead vocals so that they know what vocal they'd like to double or note they'd like to harmonize with. This will also allow you to determine how many tracks you'll need and how they should be labeled.

    5. Panning vocals allows you to fill in the vocal performance.  It also allows you to tell the direction of where the sound is coming from.  You can place the instruments and voices in different places so that the harmonies, adlibs, or stacked vocals hit at different places in the audio to create a fuller sound.
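Panning itself is just a pair of channel gains. A common approach (not stated in the lesson, but standard practice) is a constant-power pan law, sketched here in Python with my own function names:

```python
import math

def pan_gains(position):
    """Constant-power pan law. position runs from -1.0 (hard left)
    to +1.0 (hard right); returns (left_gain, right_gain)."""
    angle = (position + 1.0) * math.pi / 4.0  # maps to 0 .. pi/2
    return math.cos(angle), math.sin(angle)

left, right = pan_gains(0.0)  # centered
print(round(left, 3), round(right, 3))  # 0.707 0.707
```

Because cos² + sin² = 1, the total power stays constant as a stacked vocal or adlib is swept across the stereo field, so panned parts fill the image without jumping in loudness.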
  3. Recording Adlibs
    1. Adlibs are like toppings on a pizza.  They can add a lot of flavor and color to a rap vocal performance.

    2. Reinforcement Adlibs occur when a vocalist dubs/reinforces certain phrases of a lead vocal passage – particularly the rhyming words.
  4. Communicating with Your Vocalist
    1. You should make the vocalist as comfortable as possible while recording.

    2. Make the room, the space, and the headphone and microphone levels as comfortable as possible for your artist.

    3. Be sure to act on any artist requests for adjustments to the music or microphones immediately.

    4. Communicate while recording to agree on the number of stacked vocals, harmonies, adlibs, and takes that may need to happen while the artist is in the booth.

    5. Let your vocalist know where in the song they are coming in so that they can be properly prepared for their performance.

    6. Once recorded, make sure you listen to the track with the artist to make adjustments, record adlibs or double tracks, and check the sound and quality of the recording before mixing.

    7. Don't settle for anything less than the best vocal performance you can get, and don't expect to get it all perfect in one take. More often than not you'll have to punch in and out around phrases that need re-doing, but if you have enough tracks, get the singer to do the whole song several times and then compile a track from the best parts of each take. You can do this on tape by bouncing the required parts to a spare track, but hard disk editing is much more flexible in this respect.

Finishing the Recording

  1. Clean Up Your Session

    1. Take out extraneous audio or gaps of silence before delivering your recording to be mixed or mastered.

    2.  Make sure you have the cleanest audio possible.

    3. Make sure all lead vocals, harmonies, adlibs, and stacked vocals are labeled properly and on the proper note.

    4. Your ears and what you hear are the best judge of your recording. Make sure the signal going in is clean, listen for any distortion, and make sure it is not too loud.