Learn How Young Guru Engineers for Jay Z: An Introduction to Audio Recording
Young Guru, Grammy-Nominated, Legendary Audio Engineer
Lessons in This Class
1. Trailer (1:11)
2. Understanding Audio Physics (3:25)
3. Understanding Audio Physics (continued) (4:50)
4. Choosing a Microphone (7:34)
5. Setting Up Your Recording Space (3:05)
6. Setting Up a Session (5:10)
7. Recording Rap Vocals (5:02)
8. Recording a Singer (7:44)
9. Finishing the Recording (6:04)
About This Class
Since Berliner’s gramophone introduced the world to the concept of a playable audio track, human beings have been enamored with recorded sound. In the decades since, technological innovation and studio wizardry have made the process immeasurably more complex, but several basic recording principles still apply. Audio recording, the process of reproducing live sound, will always be a celebrated practice because it allows us to tell stories, share ideas and preserve our voices across nations and through generations.
I’ve had the fortune of recording many of music’s most essential artists, from Jay Z to Beyonce to Eminem, so I’ve learned which recording techniques work best with different aural textures. Drawing from my personal experience and lessons, I'll provide you with the tools you need to add a professional touch to your audio recordings.
What You'll Learn
I’ve created this Skillshare class to best help you bring your recordings to life with the effects you want, regardless of the price point of your equipment. This will all be presented through exclusive videos and written direction. We'll cover:
- Understanding Audio Physics. This unit will cover the science of sound and how it reacts according to its surroundings.
- Selecting a Recording Space. This unit will cover how to select the appropriate space for your recording, and how to properly optimize it for your desired effect.
- Positioning Microphones, Instruments and Voices. This unit will cover how each take will be affected based on where the microphone is in relation to the audio source and the recording space.
- Monitoring Levels. This unit will cover how to ensure your recording is clean (or muddled, if that’s the desired effect) and within an acceptable dynamic range.
- Making Adjustments and Multiple Takes. This unit will cover how to alter the recording variables (mic placement, recording space adjustments, microphone add-ons, instrument add-ons) to create multiple tracks, which will eventually be layered to complete the whole of the audio project.
- Finishing. This unit will cover how to properly label each take, how to organize each track, and who to send everything to upon the completion of recording.
What You'll Make
Your Class Project will be a recording of your own audio file using the principles we’ve discussed.
Meet Your Teacher
Throughout his illustrious, decorated career, Gimel “Young Guru” Keaton has resoundingly earned his reputation as one of the most renowned recording and mixing engineers in music today, having worked with artists such as Jay-Z, Beyonce, Rick Ross, Drake, T.I., and Eminem. Seasoned by years of successful endeavors (multi-platinum albums and multiple Grammy nods), Young Guru has recently been working tirelessly to elevate the discourse of audio engineering philosophy, science and technology, emerging onto the college lecture circuit as one of the subject’s most distinguished and dignified speakers, and further proving why he is one of audio’s most important minds and essential voices.
Hands-on Class Project
Record your own professional quality audio track
Understanding Audio Physics
- What is Sound?
- Sound is a displacement of air molecules. It’s a vibration that propagates as a mechanical wave of pressure and displacement, through some medium (such as air or water). Sometimes sound refers to only those vibrations with frequencies that are within the range of hearing for humans.
- A transducer is something that transforms one form of energy into another. For this class, the important transducers are the human ear drum and microphones.
- When a difference in Sound Pressure Level (SPL) hits a transducer, it is converted into energy that is understood by the receiver as "sound." For the ear drum, the receiver is the brain; for a microphone, the receiver is the recording medium (magnetic tape, digital signal, etc.).
- What is frequency?
- Sound propagates as waves of pressure and displacement through air or other substances. These waves all travel at the speed of sound (about 1,130 feet per second in air), but they have varying wavelengths (the distance from peak to peak, or valley to valley, on a wave). The number of complete waves that pass a point each second is the frequency, measured in Hertz (Hz).
- The frequency of a sound wave is the property of sound that determines tone. The longer the wavelength of a sound wave, the lower the frequency (Hz) and the lower the tone produced; the shorter the wavelength, the higher the frequency and the higher the tone (see the quick calculation after the frequency descriptions below).
- The range of frequencies an ear can hear is limited. The audible frequency range for humans is between about 20 Hertz (Hz) and 20,000 Hz (20 kHz), though the high frequency limit typically falls with age.
- Other animal species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz.
Frequency Descriptions
16 – 31 Hz -- The lower threshold of human hearing, and the lowest pedal notes of a pipe organ.
32 – 512 Hz -- Rhythm frequencies, where the lower and upper bass notes lie.
512 – 2048 Hz -- Defines human speech intelligibility, gives a horn-like or tinny quality to sound.
2048 – 8192 Hz -- Gives presence to speech, where labial and fricative sounds lie.
8192 – 16384 Hz -- Brilliance, the sounds of bells and the ringing of cymbals and sibilance in speech.
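Because wavelength and frequency are tied together by the speed of sound, you can estimate the physical size of any wave you are recording. Here is a quick back-of-the-envelope sketch (plain Python; the example frequencies are just illustrative):

```python
# Wavelength = speed of sound / frequency (speed of sound ~1,130 ft/s in air)
SPEED_OF_SOUND_FT_S = 1130.0

def wavelength_ft(frequency_hz: float) -> float:
    """Return the wavelength in feet for a given frequency in Hz."""
    return SPEED_OF_SOUND_FT_S / frequency_hz

for freq in (31, 100, 440, 1000, 10000):
    print(f"{freq:>6} Hz -> {wavelength_ft(freq):6.2f} ft")

# 31 Hz (a low organ pedal) is roughly 36 ft long -- bigger than most rooms --
# while 10 kHz (cymbal shimmer) is only a little over an inch.
```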
- What is Dynamic Range?
- Midrange frequencies are the most challenging for recordists, because nearly all instruments and vocals being recorded happen in that range.
- Dynamic Range is the ratio between the loudest (highest-amplitude) and quietest (lowest-amplitude) parts of the piece being recorded. Recordists need to keep the dynamic range under control in order to better give life to the midrange (see the worked example after this list).
- The dynamic range of human hearing is roughly 140 dB. The dynamic range of music as normally perceived in a concert hall doesn't exceed 80 dB, and human speech is normally perceived just over a range of about 40 dB.
- The usable dynamic range also depends on the limits of the recording device (transducer): a properly dithered recording device can capture signals well below the Root Mean Square (RMS) noise amplitude (the noise floor).
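Decibel figures like the ones above are ratios on a logarithmic scale. A quick sketch of how a dynamic-range figure in dB relates to a raw amplitude ratio (the numbers are only illustrative):

```python
import math

def db_from_amplitude_ratio(loudest: float, quietest: float) -> float:
    """Dynamic range in dB from an amplitude ratio: 20 * log10(ratio)."""
    return 20.0 * math.log10(loudest / quietest)

def amplitude_ratio_from_db(db: float) -> float:
    """Invert the formula: how many times louder is the peak than the floor?"""
    return 10.0 ** (db / 20.0)

print(db_from_amplitude_ratio(1.0, 0.01))  # 100:1 amplitude ratio -> 40 dB (typical speech)
print(amplitude_ratio_from_db(140.0))      # ~140 dB of human hearing -> a 10,000,000:1 ratio
```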
- Understanding Acoustics
- Acoustics are the properties or qualities of a room or structure that affect sound waves being performed or recorded.
- Hard surfaces tend to reflect sound and soft surfaces tend to absorb it. When deciding on the best room to record in, you must consider acoustics and how the sound waves being produced will be affected by surfaces in the surrounding space.
- Standing waves are sound waves that bounce between two or more surfaces and consequently end up distorting certain frequencies in other sound waves as they enter the transducer. For instance, the area between two flat, perfectly parallel walls resonates at certain frequencies that interfere with one another, creating an uneven frequency response.
- Standing waves are created when the distance between the walls is a multiple of half of a sound wave's wavelength, consequently reinforcing that wave (see the room-mode sketch at the end of this section).
- Standing waves can be produced when waves bounce between all surfaces in a contained room (walls, floor, ceiling, additional objects).
- Frequency response is how the room or equipment affects the sound. When recording, you want to make sure the frequency response of the room is aligned with the type of sound you’re trying to record. The deader the space, the more you can control the frequency response.
- Reverb is the sound that bounces off the surfaces of a space and returns to the transducer after the direct sound. The bigger the room, the longer it takes for those reflections to return and die away.
- Phase is the progressive relationship between two wave forms. Phase denotes the particular point in the cycle of a waveform, measured as an angle in degrees. It is normally not an audible characteristic of a single wave (but can be when using very low-frequency waves as controls in synthesis). It is a very important factor in the interaction of one wave with another, either acoustically or electronically.
- The decibel (dB) is how we measure the audible loudness of a sound. It expresses the ratio between the threshold of hearing (intensity) and the sound pressure (power) being produced at a given moment. The human ear has a large dynamic range in audio perception: the ratio between the sound intensity that causes permanent damage during short exposure and the quietest sound the ear can hear is a trillion to one or more.
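One way to see why parallel walls cause trouble: the lowest standing wave (axial room mode) between two parallel surfaces sits at f = c / 2L, with reinforcement at every multiple. A rough sketch, assuming a simple rectangular room and looking only at the axial modes (the room dimensions are hypothetical):

```python
SPEED_OF_SOUND_FT_S = 1130.0

def axial_modes(dimension_ft: float, count: int = 4) -> list[float]:
    """First few axial-mode frequencies (Hz) between two parallel surfaces
    spaced dimension_ft apart: f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND_FT_S / (2.0 * dimension_ft) for n in range(1, count + 1)]

# A hypothetical 10 ft x 13 ft room: each pair of parallel walls piles up
# resonances at its own set of low frequencies.
print(axial_modes(10.0))  # [56.5, 113.0, 169.5, 226.0]
print(axial_modes(13.0))  # [43.5, 86.9, 130.4, 173.8]
```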
Selecting a Microphone and Setting Up Your Space
- Understanding Sound Pressure Level
- Sound Pressure Level (SPL) is a logarithmic measure of the effective sound pressure of a sound relative to a reference value. It is measured in decibels (dB) above a standard reference level. The standard reference sound pressure in air or other gases is 20 µPa, which is usually considered the threshold of human hearing (at 1 kHz).
- SPL can directly determine the type of microphone you choose to use for recording (see the quick calculation below).
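SPL in dB is computed against that 20 µPa reference. A quick sketch of the formula (the example pressures are chosen only to illustrate the scale):

```python
import math

P_REF_PA = 20e-6  # 20 micropascals: the nominal threshold of hearing at 1 kHz

def spl_db(pressure_pa: float) -> float:
    """Sound Pressure Level in dB: 20 * log10(p / p_ref)."""
    return 20.0 * math.log10(pressure_pa / P_REF_PA)

print(spl_db(20e-6))  # 0 dB SPL   -- the threshold of hearing
print(spl_db(0.02))   # 60 dB SPL  -- roughly conversational speech
print(spl_db(20.0))   # 120 dB SPL -- the kind of level only some mics handle cleanly
```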
- Types of Microphones
- Dynamic Microphone – Handles a higher SPL (sound pressure level); the higher the SPL, the louder the sound. Great for drums, loud vocals, or any other source that produces a high SPL.
- Condenser Microphone – Handles a lower SPL. There are two plates, one stationary and one that moves, which allows different configurations or polar patterns. In a condenser microphone, the diaphragm acts as one plate of a capacitor, and the vibrations produce changes in the distance between the plates.
- Tube Microphone – A capacitor (condenser) microphone that uses a tube circuit in the preamp. Tube microphones allow for a warmer sound and are typically best for singers.
- Polar Pattern Configurations
- Figure 8 - "Figure 8" or bi-directional microphones receive sound equally from both the front and back of the element. Most ribbon microphones are of this pattern. In principle they do not respond to sound pressure at all, only to the change in pressure between front and back; since sound arriving from the side reaches front and back equally there is no difference in pressure and therefore no sensitivity to sound from that direction.
- Omnidirectional - An omnidirectional (or nondirectional) microphone's response is generally considered to be a perfect sphere in three dimensions. In the real world, this is not the case. As with directional microphones, the polar pattern for an "omnidirectional" microphone is a function of frequency.
- Unidirectional - A unidirectional microphone is sensitive to sounds from only one direction.
- Cardioid - The most common unidirectional microphone is a cardioid microphone, so named because the sensitivity pattern is a cardioid. The cardioid family of microphones are commonly used as vocal or speech microphones, since they are good at rejecting sounds from other directions. In three dimensions, the cardioid is shaped like an apple centered around the microphone, which is the "stalk" of the apple. The cardioid response reduces pickup from the side and rear, helping to avoid feedback from the monitors.
- Since pressure gradient transducer microphones are directional, putting them very close to the sound source (at distances of a few centimeters) results in a bass boost. This is known as the proximity effect.
- Understanding Phantom Power
- Phantom power, in the context of professional audio equipment, is a method for transmitting DC electric power through microphone cables to operate microphones that contain active electronic circuitry. It is best known as a convenient power source for condenser microphones, though many active direct boxes also use it. The technique is also used in other applications where power supply and signal communication take place over the same wires.
- Phantom power supplies are often built into mixing desks, microphone preamplifiers and similar equipment. In addition to powering the circuitry of a microphone, traditional condenser microphones also use phantom power for polarizing the microphone's transducer element.
- Eliminate Standing Waves
- In order to create a flat frequency response in the room you're recording in, it is important to eliminate all parallel walls.
- Standing waves in rooms can cause certain resonant frequencies to either be unduly enhanced (antinodes) or nearly disappear (nodes). For that reason, it is always a good idea to listen to your work from a few different spots in the studio, to hear whether standing waves are a problem.
- We can eliminate standing waves by making sure the room has no parallel walls. Use gobos, foam, or other absorptive and diffusive materials to break up standing waves by scattering reflections around your recording space.
- Controlling Headphone Level
- The headphone level should be loud enough for the artist to hear the music, but quiet enough that it does not bleed into the microphone when recording.
Setting Up a Session
- Adjusting Levels
- Listen to your track to make sure you have the proper headroom for your session. “Headroom” is the amount by which the signal-handling capabilities of an audio system exceed a designated level known as the permitted maximum level (PML). Headroom can be thought of as a safety zone allowing transient audio peaks to exceed the nominal level without exceeding the signal capabilities of the system. The goal is to prevent the audio track from going into the red, or “peaking” (see the headroom calculation after this list).
- If the track is peaking, there is no room for the artist to place their vocals on top, which can lead to a bad mix.
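In a digital session, headroom is easy to put a number on: 0 dBFS (digital full scale) is the ceiling, and the distance between your loudest peak and that ceiling is your headroom. A minimal sketch, assuming samples normalized to the range -1.0 … 1.0 (the take itself is made up):

```python
import math

def peak_dbfs(samples: list[float]) -> float:
    """Peak level in dB relative to digital full scale (0 dBFS = |sample| of 1.0)."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak)

# A hypothetical take whose loudest sample reaches half of full scale:
take = [0.1, -0.32, 0.5, -0.27]
peak = peak_dbfs(take)
print(f"peak: {peak:.1f} dBFS, headroom: {-peak:.1f} dB")  # peak: -6.0 dBFS, headroom: 6.0 dB
```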
- Setting up Auxiliary Tracks
1. Auxiliary tracks are tracks that you can send a bus to.
2. Inside your DAW, there is a bussing system (you can title the busses whatever you want, i.e. Bus 1, Bus 2, Bus 3, etc.). This allows you to route multiple channels to one bus, which essentially serves as a carrier for an effect (see next section). Using the bussing system saves you DSP power compared to applying separate effects to every single track.
3. Create three stereo auxiliary tracks, then assign the bus inputs for each track. Now set up your session so all of your recorded tracks will go to these busses.
- Adding Effects
1. Effects are changes you can add to the track after the recording has taken place, via the bussing system from the last section. Examples of effects are reverb and delay.
2. Why use delay or reverb? It most likely depends on the preference of the singer or performer. Depending on their methods and technique, some vocalists like to hear themselves with a certain effect because they feel it helps them pronounce notes or better sink into the feel of all the tracks.
3. Because of our bussing system, you don't have to apply effects to every track separately. You can create one effect and then bus it to the auxiliary tracks you want affected by it (see the sketch after this list).
4. Try adding various effects to these auxiliary channels. I chose two reverbs and a delay. You can customize yours however you want, but try to get a feel for both and note the audible changes each effect has.
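Conceptually, a bus is just a summing point: several tracks send into it, and one effect processes the sum instead of each track carrying its own copy. Below is a toy sketch with NumPy (this is not any particular DAW's API, and the send levels and delay settings are arbitrary) of why that saves processing:

```python
import numpy as np

def simple_delay(signal: np.ndarray, delay_samples: int, feedback: float = 0.4) -> np.ndarray:
    """A very small feedback delay: a delayed, decaying copy fed back on itself."""
    out = np.copy(signal)
    for i in range(delay_samples, len(signal)):
        out[i] += feedback * out[i - delay_samples]
    return out

# Three hypothetical vocal tracks (random noise standing in for audio)
tracks = [np.random.randn(48000) * 0.1 for _ in range(3)]

# Each track "sends" some level to the aux bus; the effect runs once on the sum.
aux_bus = sum(0.5 * t for t in tracks)
wet = simple_delay(aux_bus, delay_samples=12000)  # one delay instance, not three

mix = sum(tracks) + 0.3 * wet  # dry tracks plus the shared effect return
```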
- Naming Your Audio Channels
1. Remember, before you even start recording, it is best practice to name each channel to help you keep everything organized.
2. Naming a track embeds that name in the raw audio file, which will also help those who may work on the recorded tracks later (like the mixing engineer or mastering engineer).
3. I also recommend naming your effects channels, so they are easy to come back to and apply as you record more tracks that you want to bus effects to. This also helps differentiate them from the audio channels.
- Selecting a Pre-Amp
- A microphone preamplifier prepares a microphone signal to be processed by other equipment. Microphone signals are often too weak to be transmitted to units such as mixing consoles and recording devices with adequate quality. Preamplifiers increase a microphone signal to line level (i.e. the level of signal strength required by such devices) by providing stable gain while preventing induced noise that would otherwise distort the signal.
- The output voltage of a dynamic microphone may be very low, typically in the 1 to 100 microvolt range. A microphone preamplifier increases that level by up to 70 dB, to anywhere up to 10 volts (see the quick gain calculation after this list). This stronger signal is used to drive equalization circuitry within an audio mixer, to drive external audio effects, and to sum with other signals to create an audio mix for audio recording or for live sound.
- A microphone preamplifier also affects the sound quality of an audio mix. A preamplifier might load the microphone with low impedance, forcing the microphone to work harder and so change its tone quality. A preamplifier might add coloration by adding a different characteristic than the audio mixer's built-in preamplifiers. Some microphones must be used in conjunction with a preamplifier to function properly (e.g., condenser microphones).
- You should choose a pre-amp that best fits the type of vocal you are recording. The pre-amp you use for a vocalist with multiple harmonies and stacked vocals may differ from a hip-hop artist or even a live instrument.
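Gain figures like "70 dB" translate directly into a voltage ratio. A quick sketch of what that boost means for a weak mic signal (the input voltage is just an example from the range quoted above):

```python
import math

def apply_gain(volts_in: float, gain_db: float) -> float:
    """Output voltage after gain_db of clean gain: Vout = Vin * 10^(dB / 20)."""
    return volts_in * 10.0 ** (gain_db / 20.0)

mic_output = 100e-6            # 100 microvolts from a dynamic mic on a quiet source
boosted = apply_gain(mic_output, 70.0)
print(boosted)                                     # ~0.32 V -- approaching line level
print(20.0 * math.log10(boosted / mic_output))     # back to 70.0 dB of gain
```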
- Selecting the Proper Compressor
- Look at the waveform levels in your sequencer and see how much variation there is. If there's a big level fluctuation, you may need to add some compression when recording, but never add more than you'll ultimately need, as you can't take it off once it's been added (see the gain-curve sketch after this section).
- Use a fast attack and a release time of around a quarter of a second, or the automatic mode if your preamp's compressor has one. If you don't have a compressor in your preamp, then record with no processing and use a software compressor when mixing.
- Excessive or inappropriate compression at this stage can lead to a congested, lifeless sound that's almost impossible to fix later. It also pays to bear in mind that compression brings up the effects of the room ambience in quieter passages, so while you may not hear the room on an unprocessed recording, it may start to intrude once you start to add compression.
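Under the hood, a compressor's static behavior is just a gain curve: below the threshold the signal passes unchanged, and above it the overshoot is divided by the ratio. Here is a minimal sketch of that curve (ignoring attack and release timing; the threshold and ratio are arbitrary example settings), which also shows why heavy compression on the way in can't simply be undone later:

```python
def compress_db(level_db: float, threshold_db: float = -18.0, ratio: float = 4.0) -> float:
    """Static downward compression: output level in dB for a given input level in dB."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

for level in (-30.0, -18.0, -10.0, -2.0):
    print(f"in {level:6.1f} dB -> out {compress_db(level):6.1f} dB")

# A -2 dB peak comes out at -14 dB with a 4:1 ratio -- 12 dB of gain reduction
# that is baked into the recording if you compress this hard while tracking.
```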
Recording Vocals
- Placing Your Microphone
- The diaphragm of the microphone should be in front of the sound source. The microphone should be at an even level with the source, and the pop screen should be 3-5 inches from the diaphragm of the mic to eliminate any popping from the recording.
- Adjust the low-frequency (floor) roll-off, which will take care of any floor rumble or other extraneous noise that can happen when recording. The roll-off switch allows you to reduce the strength of the low-frequency signals that pops and gusts can cause.
- Where possible, mount the microphone on a stand. Only let the singer hold the mic if to do otherwise would compromise their musical performance. When the singer is hand-holding a mic, particularly if it's a cardioid model, make sure they keep their hand clear of the rear of the basket, as obstructing this area can change both the directional and tonal characteristics of the mic.
- Recording Stacked Vocals and Harmonies
- Depending on the vocalist as well as the nature of the song, you may want to double or triple the lead vocal performance. The amount of stacking can also change based on a particular part of the song. Typically, you stack more on the chorus than the verse – but stacking is common on both.
- There are certain tonal characteristics you can expect when you stack a vocal performance. It will sound warmer and fuller, but not quite as crisp or intimate as a well-mixed single lead performance. I always experiment with both. Most of the time I like the lead stacked, but that is not always the case. The cool thing is you can shape the tonal characteristics by playing with the relative volume levels of the performances you stack.
- If you decide to stack your lead performance, be prepared to do more editing. If you stack three lead vox, then be prepared to go through all three of those takes and make sure there aren't any pops, clicks, or other noise artifacts that will detract from the performance. You will also need to decide which take is the best and make sure the other two mesh well with it.
- When recording harmonies or stacked vocals, be sure to give the artist a pre-roll or review of their lead vocals so that they know which vocal they'd like to double or which note they'd like to harmonize with. This will also allow you to determine how many tracks you'll need and how they should be labeled.
- Panning vocals allows you to fill in the vocal performance. It also allows you to tell the direction the sound is coming from. You can place the instruments and voices in different places so that the harmonies, adlibs, or stacked vocals hit at different spots in the stereo field to create a fuller sound (see the panning sketch after this list).
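Panning itself is just a left/right gain split. A common convention is the constant-power pan law sketched below (this is a generic illustration, not any specific DAW's implementation):

```python
import math

def constant_power_pan(pan: float) -> tuple[float, float]:
    """Left/right gains for pan in [-1.0 (hard left) .. +1.0 (hard right)],
    keeping perceived loudness roughly constant across the stereo field."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

for pan in (-1.0, -0.5, 0.0, 0.5, 1.0):
    left, right = constant_power_pan(pan)
    print(f"pan {pan:+.1f}: L={left:.2f} R={right:.2f}")

# Center (0.0) gives about 0.71 / 0.71, so two stacked takes panned part-way
# left and right sit beside the lead instead of piling up in the middle.
```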
- Recording Adlibs
- Adlibs are like toppings on a pizza. They can add a lot of flavor and color to a rap vocal performance.
- Reinforcement adlibs occur when a vocalist dubs/reinforces certain phrases of a lead vocal passage – particularly the rhyming words.
- Communicating with Your Vocalist
- You should make the vocalist as comfortable as possible while recording.
- Make the room, the space, and the microphone levels as comfortable as possible for your artist.
- Be sure to act immediately on any adjustments to the music or microphones that the artist requests.
- Communicate while recording to determine how many stacked vocals, harmonies, adlibs, and takes will need to happen while the artist is in the booth.
- Let your vocalist know where in the song they are coming in so that they can be properly prepared for their performance.
- Once the take is recorded, make sure you listen to the track with the artist to make adjustments, record adlibs or doubled tracks, and check the sound and quality of the recording before mixing.
- Don't settle for anything less than the best vocal performance you can get, and don't expect to get it all perfect in one take. More often than not you'll have to punch in and out around phrases that need re-doing, but if you have enough tracks, get the singer to do the whole song several times and then compile a track from the best parts of each take. You can do this on tape by bouncing the required parts to a spare track, but hard disk editing is much more flexible in this respect.
Finishing the Recording
- Clean Up Your Session
1. Take out extraneous audio and gaps of silence before delivering your recording to be mixed or mastered.
2. Make sure you have the cleanest audio possible.
3. Make sure all lead vocals, harmonies, adlibs, and stacked vocals are labeled properly and on the proper note.
4. Your ears and what you hear are the best judge of your recording. Make sure the signal is clear going in, listen for any distortion, and make sure it is not too loud.