Introduction to Music Production Masterclass | Jason Allen | Skillshare



Jason Allen, PhD, Ableton Certified Trainer


Lessons in This Class

99 Lessons (6h 33m)
    • 1. Introduction

    • 2. What We Will Cover

    • 3. What is an Ableton Certified Trainer?

    • 4. How to Use This Class

    • 5. Analog Audio

    • 6. The Oldest Known Recording of a Human

    • 7. Edison

    • 8. Bell Labs and Max Mathews

    • 9. Advances Since Mathews

    • 10. The Difference Between Analog and Digital

    • 11. Mac or PC?

    • 12. Laptop or Desktop?

    • 13. The ADC and the DAC

    • 14. The Audio Interface

    • 15. Speakers or Headphones?

    • 16. Microphones

    • 17. External Hard Drives

    • 18. My Setup at Home

    • 19. What is a DAW?

    • 20. What to look for in a DAW

    • 21. A Highly Opinionated List of Common DAWs

    • 22. For the Money...

    • 23. The 4 Sections in Every DAW

    • 24. The Timeline

    • 25. The Mixer

    • 26. The Effects Section

    • 27. The Transport

    • 28. Nearly Universal Key Commands

    • 29. Care and Feeding of Your DAW

    • 30. What is the Grid?

    • 31. Horizontal = Time

    • 32. Vertical = Tracks

    • 33. How DAWs Handle Meter

    • 34. Vocabulary: Downbeats, Upbeats, and Offbeats

    • 35. Elements of the Beat: Kick, Snare, Hi Hats

    • 36. Building the World's Most Basic Beat

    • 37. Placing the Kick

    • 38. Placing the Snare

    • 39. Placing the Hi Hats

    • 40. Having Fun with Hi Hats

    • 41. Looping and Consolidating

    • 42. Audio is Finicky.

    • 43. Looking at Waveforms

    • 44. Sine Waves

    • 45. Clipping

    • 46. File Formats

    • 47. The Sample Rate

    • 48. The Nyquist Theorem

    • 49. What Frequencies Do We Really Need?

    • 50. The Bit Rate

    • 51. Standards for Sample Rate and Bit Rate

    • 52. The Dawn of Electronic Instruments

    • 53. The Theremin

    • 54. What Happened to Leon?

    • 55. The Moog

    • 56. Wendy Carlos

    • 57. What Happened in 1981

    • 58. The MIDI 1.0 Spec is Born

    • 59. MIDI Instruments Today

    • 60. Other Uses of MIDI

    • 61. MIDI is a Protocol

    • 62. MIDI is Not Audio

    • 63. MIDI Channels

    • 64. Anatomy of a MIDI Message

    • 65. Note On Errors

    • 66. Velocity Tracing

    • 67. Advantages of MIDI

    • 68. Adjusting Notes

    • 69. What Are MIDI Effects

    • 70. Arpeggiator

    • 71. Chord

    • 72. Note Echo

    • 73. Note Length

    • 74. Pitch

    • 75. Random

    • 76. Scale

    • 77. Quantizing

    • 78. Audio Effects on MIDI Tracks

    • 79. Printing MIDI Tracks

    • 80. MIDI and Audio Are Friends

    • 81. Velocity Editing

    • 82. Automation

    • 83. Panning Automation

    • 84. Binary Automation

    • 85. Audio Effect Automation

    • 86. MIDI Effect Automation

    • 87. Tempo Automation

    • 88. Automating Anything

    • 89. Walkthrough of a Whole Track

    • 90. Finding Loops

    • 91. File Types

    • 92. Automatically Warping Tracks

    • 93. Fixing a Poorly Warped Track

    • 94. Let's Do That Again

    • 95. Transients

    • 96. Breaking a Loop for Fun

    • 97. Flattening Loops

    • 98. Stacking Loops

    • 99. Bonus Lecture

About This Class

Welcome to the Introduction to Music Production Masterclass!

This course is "5-Star Certified" by the International Association of Online Music Educators and Institutions (IAOMEI). This course has been independently reviewed by a panel of experts and has received a stellar 5-star rating.

100% Answer Rate!
Every single question posted to this class is answered within 24 hours by the instructor.

This class is for anyone who has wondered what music production is all about. Especially:

  • Aspiring Producers: If you are just getting started with music production, this course will be the swiss army knife that you will keep in your belt forever.

  • Musicians: If you have wanted to improve your compositions by understanding the tools of electronic music production, recording, and sound design work, this class is for you.

  • Producers: If you are making tracks that rely on presets, and want to learn how to craft your own sounds, this is the course for you.

  • Songwriters: Improve your compositions by understanding how to use the latest tools in your songs, and record them!

In this class, we start with the very basics: What kinds of tools do I need to produce music? We explore the various tools, then we start to learn how to use them for making professional music. By the end of this class, you will be making your own tracks on your own computer (whether it's a Mac or a PC!).

This course follows the tried-and-true curriculum that I've used for my Music Technology 101 class at my university position. I'm excited to be able to make it available to you, at 0.001% of the cost of a University class.

The goal of this class is for you to learn how to make original music on the tools you already have, or can get access to inexpensively.

This course is NOT specific to any DAW program.

I'll be using Ableton Live Suite 10 and 11 in this course as my main DAW, but if you are using any other program you will be able to follow along just fine. That includes Logic, FL Studio, Pro Tools, Reaper, Reason, Cubase, or any of the others. My method in this class is to teach concepts, so whatever I do, you will be able to do it in your own software.

I'm best known for working with electronic music, but I've designed this course to be as inclusive as possible when it comes to genre. We will talk about techniques for all genres, sounds, and styles. All genres are welcome here!

Topics Covered: 

  • The Essentials of Digital Audio

  • Equipment: Mac or PC?

  • Laptop or Desktop?

  • Speakers or Headphones?

  • All about Microphones

  • What software should you get?

  • How all DAW (Digital Audio Workstation) Software works

  • Building tracks using the grid

  • How to read an audio waveform

  • Programming drums and making beats

  • Using audio samples for producing music

  • Sample Rate and Bit Rate

  • Using MIDI

  • MIDI Guitars and other Instruments

  • MIDI Effects

  • Automation

  • Working with Loops

  • Finding Loops (for free!)

  • Synthesis

  • Sound Design

  • Synthesis types (analog, modular synthesis, physical modeling, Serum, and more)

  • Using Samplers

  • Building Tracks from Scratch

  • And much, much more!

If you are ready to start making professional-sounding tracks, this is the class that will start you on that journey. Get started today.

Dr. Allen is a university music professor and is a top-rated instructor - with over 100 courses and over 350,000 students.

In 2017 Star Tribune Business featured him as a "Mover and a Shaker," and he is recognized by the Grammy Foundation for his music education classes. 

Meet Your Teacher


Jason Allen

PhD, Ableton Certified Trainer


J. Anthony Allen has worn the hats of composer, producer, songwriter, engineer, sound designer, DJ, remix artist, multi-media artist, performer, inventor, and entrepreneur. Allen is a versatile creator whose diverse project experience ranges from works written for the Minnesota Orchestra to pieces developed for film, TV, and radio. An innovator in the field of electronic performance, Allen performs on a set of “glove” controllers, which he has designed, built, and programmed by himself. When he’s not working as a solo artist, Allen is a serial collaborator. His primary collaborative vehicle is the group Ballet Mech, for which Allen is one of three producers.

In 2014, Allen was a semi-finalist for the Grammy Foundation’s Music Educator of the Year.





Transcripts

1. Introduction: Hey everyone, welcome to the Introduction to Music Production Masterclass. It's called a masterclass because it is huge; there are around 140 videos in this thing. The reason I did that is that I wanted to take each concept I wanted to introduce and tease it out a little bit. What this class is designed to do is give you a foundation. The idea is, you may have very little or no experience using these tools, the tools we use for music production. You might not even know what these tools are. In this class, I'm going to introduce you to the tools, tell you about the theory behind them, and teach you how to use them, taking you all the way up through sound design, mixing, and ultimately making a track. So by the end of this, you'll be making tracks. Now, this class doesn't go all the way; there's a lot more to learn after this. But what this class is going to do is get you to the point where you can start making music, and it'll sound good. After that, you'll know what else you need to work on. Maybe you need to work more on mastering your platform, like your audio software. Maybe you need to work more on sound design, or mixing, or mastering, whatever it may be. But this class is going to make sure you know exactly what you need to do to make music, and where to go after that. So this is a really huge class, and I'm really happy with it. This is the same curriculum that I'm using in my college courses, just in online video format. So let's dive in and let's start making it.

2. What We Will Cover: All right, so let's dive in and talk about what we're going to cover in this class. We're going to talk about everything you need to know to get into electronic music. That means recording, producing, mixing, even live recording, which we'll talk a little bit about.
But we're really just going to skim the surface and talk about some of the hardware things you need, some of the software things you need, how some of the software works, what you can do without, and what is absolutely essential. Super important: do not go out and buy anything yet. Just wait. Let me walk you through everything that you need, or might just want, tell you what to look for, and give you some pointers. I'm not getting paid by anybody to recommend certain stuff, so I'm just going to tell you the things that I think are really important: good products that I use in my home studio and that I would recommend. We're not going to go super deep into any specific topic, but deep enough for you to understand what avenues you want to go further down later on. And if you're just interested in exploring things and doing a little tasting menu of music production, then this is the perfect class for you. We're going to taste a whole bunch of different things. Moving on: I did say that no one's paying me to endorse any specific product for this class. But I do have a relationship with the software company Ableton, and I want to talk about that in the next video. So off we go.

3. What is an Ableton Certified Trainer?: So I am what is called an Ableton Certified Trainer, which, just as a sidebar, is a really interesting thing for me, because in a lot of communities where I'm working, I have a PhD in music, the top degree you can get in music, and yet people are more impressed that I'm an Ableton Certified Trainer. And to be fair, they should be, because it was, I would venture to say, almost harder to get than the PhD. The PhD took more time, but the exam to become an Ableton Certified Trainer is no joke. It was a two-day-long process, a whole ordeal. You can only get it from Ableton; it's not like I went to some weekend class.
So what this means, when you see someone who's an Ableton Certified Trainer, is that it doesn't just mean they are really good at using the program Ableton. It means they are really good at teaching how to use the program Ableton. That is actually what the Ableton company is certifying: that you are a certified teacher, meaning you are, in their mind, one of the top teachers of this program in the country, or the world. There are only about 100 Ableton Certified Trainers in the US, and I think roughly 1,000 worldwide. So it's fairly rare. I'm telling you all this because this thing, Ableton, this word I keep saying, is the software that's on the screen here. It's software we use for music production, and it's one that I use every day. I'll be walking you through some of the elements of it and how it works. But it's not the only software out there. There are other pieces of software that are great: FL Studio, Logic, Pro Tools, Reaper, Reason. There are a lot of great ones. So you don't have to fall in love with Live in this class. I am going to be using it for a lot of stuff, but I'll also show you things in other programs as well. Shortly, we'll talk about software and the pros and cons of the different platforms that you could choose to use. So, like I said in the last video, don't go out and buy anything yet. But just know you're taking a class with one of the top people in the country. And I also have a PhD.

4. How to Use This Class: Okay, one last thing before we dive into the real meat of the class: how I would recommend that you use this class. First, go slow and enjoy it. There's no rush to get through this class. Watch the videos, pause them, pop open a browser and explore the concepts a little bit more if you like. Use all of these topics as a jumping-off point to learn more. You can certainly look for more classes on each topic if you want to go further down that rabbit hole.
I have almost 100 classes available, most of which go into further detail on some of the topics we're going to be talking about here. So feel free to take your time, pause the video, look things up if you want, go back to the videos, and just use them as a jumping-off point to learn more and more about the particular things that you're interested in. This is what we would call, at my university job, a survey course. It means we're going to survey the landscape and see everything that's out there. At the end of it, you're going to say, "That was fun, but this one thing, that's what I want to go deeper into," and then you can take more courses in that if you want to. So I just wanted to point out that you have full license to take your time through this. There's no rush.

5. Analog Audio: Okay, so let's start off talking about analog sound and how early recordings were made. That'll move us into digital sound, which is primarily where we're working now when we make and record music. But when the technology to record sound first came into being, it was only analog sound that we were able to record. What is analog sound? Well, sound itself is just waves moving through the air, right? I talk, my throat makes sound, my lungs push air, and that generates a wave. At the pitch that I'm speaking, these waves are probably about this long, roughly a foot or so. They are real physical things going through the air; it's just pressure in the air. If you were standing here with me, they would be going to your ears, and each wave would hit your eardrum and push it in just the right ways for you to hear the sound. So if we want to capture that, to take that wave and put it in a bottle, how do we do it? It's actually kind of a phenomenal thing to think about.
You've got this wave moving through the air, and we need to capture it somehow. And if you think about it, these are not the only kinds of waves going through the air, right? There are tons of different kinds of waves going through the air. The example I like to give is this: the waves of my voice are probably about this long. But if they were smaller and smaller and smaller, until they got to be immeasurably small, incredibly small, those waves we would interpret with our body as light. Tiny waves we interpret with our eyes; waves this long-ish we interpret with our ears. But there are even longer waves. There are waves that are miles and miles long, waves that can wrap themselves all the way around the earth, they're that long. We typically interpret those waves as weather. The same kind of waves, just much, much, much bigger. So how we interpret them depends on how big those waves are. If they're the right size, we interpret them as sound; if they're a different size, as light; if they're a much bigger size, as weather; or something in between. So the way that we first came up with, "we" meaning humans (not me, I wasn't involved in this), to capture sound was a process called analog sound. "Analog" comes from two Greek roots: ana, meaning "according to," and logos, meaning "a relationship." So analog sound means it's according to a relationship with the sound. We have another word in English, "analogous," meaning two things that are very similar to each other. And that's what analog sound is. In other words, analog sound captures that waveform in a way that is, in essence, the actual waveform. If we look at a vinyl record and we zoom way, way in, what we would see is little waveforms, because a vinyl record is analog sound.
So we would see those little waveforms on there. That's how analog sound works: we capture those waveforms, we chart them out, and then, using a little needle on the record and a big speaker, we can recreate those same waveforms. So when we say a sound is analog, it means we've captured the waveforms in a way that you can actually see the waveform. It is the same waveform, captured onto some kind of media, a disc or something like that. So that's analog sound. Now, just for fun, let's talk about some of the oldest analog sounds that were ever recorded, and when analog sound first came to be.

6. The Oldest Known Recording of a Human: Okay, do you know when the first audio recording was made? Before you answer that, let's think about this: prior to this recording, we have no idea what humans sounded like. This would be the oldest recording of a human being, of the human voice. We have no way of knowing what people sounded like before it. So what is the oldest recording we have of the human voice, of humans doing anything, actually? Take a guess. The correct answer is 1878, and that first recording we have is Thomas Edison. But there's a little asterisk next to it, because in the last decade or so another recording has been found that predates Edison. Thomas Edison invented the phonograph, which is the first way we had to record sound, but there was somebody else before him. So in 1878, Edison invents the phonograph and records himself reciting the words to "Mary Had a Little Lamb." But in 1857 there was a French typographer who was working with graphite, like pencil, and found that he could do a cool trick: he could hook a big horn to a pencil and then yell into it. He set up a contraption that would scribble; it would just scribble the waveform that he yelled into the big cone. So that was neat: it could scribble a waveform.
That's all it could do. It was never designed to be played back, right? Like we went over, we can't hear it; it was just pencil on paper. But with computers, we can. Some scientists have taken those scribbly waveforms, put them into a computer, and interpreted them so that we could hear them. So this comes all the way from 1857. Now remember, this is pencil on paper; this is not high-quality audio, but it sounded like this. That's it. That's all we've got. That is, supposedly, this French typographer, whose name I'm not going to attempt to pronounce because it is a very French name. But that is a rendering of his graphite-on-paper recordings. Commonly, though, we think of the first analog recordings as being made in 1878 by Thomas Edison. Those are the first recordings where we can really hear somebody talking, and it happens to be Thomas Edison. It sounds like this. And that was it. So in 1878 we get our first recording. Now, you may have seen floating around the Internet this idea of ancient Mayan pottery and audio recordings made from it. It's a similar idea to the French typographer. The idea was: when you make pottery, you're spinning clay on a wheel, and you take some kind of stick and, while it's spinning, push it onto the wet clay and pull it down in order to make a groove all the way down the pot. That dries, and with computers we could supposedly extract from there the very, very subtle vibrations of the waveforms embedded in the clay, which would have captured the sound in the room. You can find audio recordings of this around online. However, before you get too excited, this turned out to be a hoax. It was just not possible. The minutiae of those waveforms are too delicate to be captured in wet clay. I suppose it's conceivable that with computers we could someday pull out waveforms hidden in something like that.
But presently, it has not been done successfully. So if you see something like that around online, if you have a friend tell you that ancient Mayan audio exists, it doesn't. It's a hoax. So don't fall for it, because I totally did. When it came out, I was like, oh my God, this is amazing. And then I read more about it. The audio file that's going around, and the story about it, were actually released on April Fools' Day, and it's just totally not real. So I fell for it. Anyway, those are our earliest known recordings.

7. Edison: So of course, now that we've heard the sound, that brings us to talking about Thomas Edison and what he invented. Now, Thomas Edison did not at first create the phonograph to look like a record as we know it now, a vinyl record that is a disc. What he created looked more like a cylinder. It spun on its side, like this (I'm not going to demonstrate with this bottle, there's actually water in here), and a needle sat on top of it and read side to side. The reason he eventually switched to something more like a disc is that, the story goes, when he first invented it, it wasn't designed to be mass-produced, ever. These were designed to be ways to hold documents, ways to preserve languages that were dying, ways to preserve historic moments. You would never want to mass-produce the things; they would always just sit in a vault somewhere. But before too long, people found that this idea of having music in their house without performers was kind of amazing. They could use this contraption to play music. So Edison started making these with music on them, and that was kind of a crazy idea. And then the idea came to mass-produce these things, and he in fact did. Edison's studio was the first record company that existed.
But these cylinders had a problem: they were very difficult to mass-produce, and they were very fragile. So he went back to the drawing board and eventually came up with the disc. The disc could be made into a stamp, so they could just stamp them out and make them very fast, whereas the cylinder, being more three-dimensional, couldn't be made into a stamp and therefore was very hard to mass-produce. The disc was also a lot more durable than the cylinder, although the early discs were a much thicker vinyl and they did shatter kind of a lot. I have a collection of really old records that are half shattered. But as we kept developing it, throughout a century and a half, the vinyl record became much more durable. It is still somewhat fragile. We also developed other ways to access analog recordings, like tape; there is analog tape that we use for recordings. But at some point, practitioners of recording started to look for a way to record at a higher fidelity and get away from this idea of an analog recording, where it was just a representation of the waveform on the piece of media. And luckily for them, there was this newfangled thing called the computer, and that made a whole new kind of audio possible: digital audio. So let's go to a new video and talk about the origins of digital audio.

8. Bell Labs and Max Mathews: In about 1957, there was a guy named Max Mathews. Max was a scientist at Bell Labs. Bell Labs was, at the time, the phone company. It was a big, big company, the one and only big monster phone company. It was kind of the Apple of its day, except it had very little or possibly no competition. Remember, this was 1957; the phone was still a pretty new thing. Bell Labs had a research wing, and that's where Max was a scientist. His job was to find a way to get more voices down a single wire.
The way the phone worked at this moment in time was that, in order for me to talk to my grandma, I had to have essentially a wire connecting me to my grandma, and nothing else could go down that wire at the same time as I was talking to her. We facilitated this through a switchboard system, so that someone could patch me through to my grandma without running a wire directly from me to her. But still, only one voice, one conversation, could go back and forth on that wire at a time. So they said to Max: we can't run millions and millions of wires from every house to every other house. There's got to be a better way. Find a way that we can get many voices down a single wire. That's what Max was working on. And what he came up with was this idea: he would create a special kind of phone that would deconstruct the sound and break it down into little tiny pieces, and then send those pieces as data down the wire. When the data got to its destination, a special phone would take all of it, put the sound back together, and you would hear it. Now, this didn't really work very well; it didn't sound very good. But that concept of taking a sound, breaking it down into little tiny bits, and then putting it back together is effectively the origin of digital audio. That is exactly how digital audio works to this day. So we credit the invention of digital audio to Max Mathews. He was a really fascinating guy. I had an opportunity to meet him once at a conference. Remember, this was 1957, and he didn't pass away until relatively recently, only a decade or so ago; he was really, really old. So I had an opportunity to meet a very, very old Max Mathews. A fascinating guy who did some great work. He was also known as an amateur violinist. And supposedly he would go into Bell Labs at night.
He would program his computer to make little sounds, little bleeps and bloops, and he would bring his violin late at night when no one was looking and play duets with his violin and the computer. That was kind of mind-blowing at the time, right? People would be like, "Wow, robots are making music, the end of the world is near," which was probably the way a lot of people interpreted it. But it all worked out, and digital audio came to be. People took that concept of breaking the sound down into little bits and putting it back together and built tools around it. That's where we got software, hardware, and all kinds of other things, all developed using that fundamental principle of taking tiny little nuggets of sound, representing them as data, and then putting them back together as audio. So you can thank Max Mathews for all of that.

9. Advances Since Mathews: One really interesting thing to consider about digital audio is that there have been hundreds of thousands of different tools for using it. There have been different formats, different file types, different audio programs, plugins, all that stuff. But the fundamental principle of digital audio is pretty much the same as it's always been. We've changed the way we take those little snapshots of sound, we've changed the way we put them back together, which changes the way we send them around and move them, and we've changed what we can do to them while they're in their little grains of data. But the fundamental principles that Max created for how digital audio works haven't really changed, just as the fundamental principle of how analog audio works hasn't changed since Edison. We've added a lot to it, but those core principles are still what we use to run pretty much everything when it comes to audio. Okay, so let's talk real quick, as the last thing in this section, about what this all means.
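Before that practical comparison, Mathews' deconstruct-and-reassemble concept is concrete enough to sketch in a few lines of code. This is my own toy illustration, not Mathews' actual system; the 8 kHz rate and the 440 Hz test tone are arbitrary choices for the demo:

```python
import math

SAMPLE_RATE = 8_000  # samples per second; an arbitrary, telephone-ish rate for this demo

def sample(signal, duration, rate=SAMPLE_RATE):
    """Deconstruct a continuous signal (a function of time) into discrete samples."""
    count = int(duration * rate)
    return [signal(i / rate) for i in range(count)]

def tone(freq):
    """A pure sine tone, modeled as a function of time in seconds."""
    return lambda t: math.sin(2 * math.pi * freq * t)

# "Break the sound down into little tiny pieces":
samples = sample(tone(440), duration=0.01)  # 10 ms of an A440 tone -> 80 numbers
# Each sample is just a number: data that can travel down a wire and be
# converted back to analog sound at the other end.
```

The list of plain numbers is the whole trick: once sound is numbers, it can be transmitted, stored, and processed like any other data.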
What is the difference between analog audio and digital audio, from a practical standpoint?

10. The Difference Between Analog and Digital: Okay, so I have here a digital audio file. It looks like a waveform, right? It could be analog, could be digital. Well, it can't be analog, because it's in a computer. Computers are capable of dealing with digital sound; humans are capable of dealing with analog sound. Period, no ifs, ands, or buts about that. We cannot hear digital sound. There is no organ in our body that will interpret digital sound for us. In order for us to hear sound, it needs to be converted back to analog, and our speakers do that, along with another thing that we'll talk about later; for now, let's just say our computer does it for us. We'll get into more detail about that soon. When this program sends a message to the speakers, it's sending this data as digital sound, and by the time it gets to the speakers, it gets converted to analog sound so I can hear it. If there are no speakers connected, I can't hear it. Humans hear analog sound; computers deal with digital sound. That's the biggest difference: you cannot hear digital sound, there's no such thing. Let me show you these little pockets of sound. If I zoom in, now I'm looking at the waveform. Zoom in even farther, even farther, even farther, keep going, keep going, boom, there we are: we're down to dots. Now, these dots are important. Each one of these dots we call a sample, and that is the little nugget of sound that we use. We take all of these little nuggets of sound, we put them back together, and then we can hear the sound. But you'll notice one of the key things that's different between analog and digital: there is space between these dots. If I zoom in, this is as far as I can zoom: here's a sample, here's a sample.
There's empty space between them. This is the biggest argument for analog sound being better than digital sound, because in analog sound, there's no space between the dots, because there are no dots. Analog sound is analogous, right? It's a constant wave, the same as it is in the physical world. But in digital sound, we have space between the samples, and that makes it less precise, you would think. This is an argument that people have been having for a really long time. And it technically is true: there is space between these samples. However, this amount of time is, let's see, it would be 1/44,100th of a second. So it's a fraction of a millisecond. I mean, it's not even a fraction. It's a fraction of a fraction of a millisecond. It is an unbelievably small amount of time. It's completely inaudible, but it is something, it is some amount of time, and there are a lot of them if you look at it. So those are the biggest differences between analog and digital sound. Analog is sound we can hear, and it's constant and smooth. Digital is sound only computers can deal with, and it has holes in it. Can we hear them? Probably not. 11. Mac or PC?: Okay, so let's talk about hardware now. What I'm talking about here is, if you want to get into recording or music production, what kind of hardware should you have at your disposal? Hardware meaning physical things, right? Computers, microphones, stuff like that. So the first question, which is a question as old as time, it has stumped philosophers for millennia: should you get a Mac or a PC to make music with? Well, this can be a very controversial answer and a highly opinionated answer. So let me first tell you that when it comes to speed and computing power, processor power, graphics power, all of that stuff, at this point, and this wasn't always true, but at this point, any computer will do just fine. It really doesn't matter.
A Mac is fast enough, a PC is fast enough, and an off-the-shelf PC is probably fast enough to do everything you want to do. So that really doesn't matter. Though it does matter if you're thinking about what software is available for the type of computer that you're going to use. For example, I like to use Ableton Live. For a while, that wasn't available on the PC. It was Mac only. It is now available for the PC also. So if you want to use Ableton Live, it doesn't matter. Off the top of my head, the only professional audio program that is not available on both Mac and PC at this point is Logic. Logic is owned by Apple, so it's only available for Macintosh. So if you want to use Logic, then you're on a Macintosh. All the others have a Mac and a PC version. Now, this thing about Mac and PC is actually really interesting. Because if you go back to the late nineties or so, when people were really starting to do some of this work outside of big recording studios, even in the big recording studios, you almost always saw Macs in those studios. And the reason for that was really just because the software that was available for a Mac was, at that time, superior for audio production to the software that was available for a PC. The early versions of Pro Tools and things like that were only available for the Mac. And probably because the Mac was seen as kind of the creative computer, or the computer for the creative artist, and the PC was seen as the computer you get to do your taxes. Now, obviously those things aren't really true anymore. And like I just said, both are fine. However, there still is a stigma attached to using a PC in a professional studio. A little bit, a little bit. If I went into a professional recording studio and I went into the control room and I sat down and I fired it up and it was a PC, it would give me a little bit of pause. It just would. It's much, much more common to have a Mac in those situations.
Probably because of tradition, really. It's definitely not because of computing power or anything like that at this point. Mostly because of tradition. There's even a little bit of genre preference. Like, I know a lot of the people making dubstep and trap are more into PCs than Macs. I don't know why that is. It has no bearing on, like, the software or the power of the computer. It's probably because PCs are cheaper. That's for sure. There's no debating that. So, all of that being said, I learned how to use computers by using Pro Tools. So I've been a Mac guy ever since I got my first computer. I'm a Mac guy. I like Apple stuff. But that doesn't mean you have to be. So if the question is, should I get a Mac or a PC? The answer to that is, first, how much money have you got? If you're on a tight budget, get a PC. Fine, end of story. If you've got a little more money, then, sub-question: how cool do you want to be? If you want to be cool, get a Mac. If you have a lot of money and you don't care about being cool, get a PC, and then save the rest of your money for something else, for the software you're going to need to buy for that thing. So that's how I answer that question. How much money have you got, and how cool do you want to be? That's not a very fair way to answer it, but it's true. 12. Laptop or Desktop?: Okay, next let's talk about a laptop computer or a desktop computer. Again, this is really kind of a question of preference. Because perhaps it was true at one point, well, it was definitely true at one point, that a laptop was less powerful than a desktop, and you needed a desktop to do some of the high-end audio processing that we need to do when we're working with audio. That's not true anymore. A lot of laptops are just as fast as desktops. And often you can get a laptop faster than a desktop. It's surprising, but it's true.
So, like I said before, even an off-the-shelf laptop is probably going to be fast enough at this point to do everything that you need to do in order to produce professional music on it. So it's not really a matter of speed. It's more a matter of convenience. Do you sit in one spot and work all day long in a room, like I am now? If so, then a desktop might be good. Or do you run around and work at coffee shops, in multiple places, in bed, and all of these other things? Then you probably want a laptop. One thing to think about is, how many things are you going to plug into it? I have like 15 cables coming out of my computer. So I have a desktop, because I'm almost always working in this room and I have a ton of stuff plugged into it. I don't want to unplug all of that and move it around to go work at the coffee shop. I have an iPad, and I also have a laptop. So how many things you're going to plug into it is a consideration. Another thing to think about is what other purposes you are going to use this computer for. Is it just going to be dedicated to music, and therefore not leave your studio room? Then maybe a desktop is good. Or are you going to also do your schoolwork on it, and some of your job work on it, stuff like that, and you're going to have to take it back and forth to the office? Then a laptop's probably better. The last thing to consider with that is that a laptop can always act like a desktop, right? You can always take a laptop and just put it in a corner and hook stuff to it and not move it. But a desktop cannot act like a laptop. You can't just take your desktop, strap it to your back, and hike it down to the coffee shop if you want to. So I think at this point, most people get laptops and work on a laptop, because you can always hook more screens to a laptop. You can extend the capabilities of a laptop. But you can also unhook everything and just use it like a laptop. A desktop, you can't.
It's a big burly thing that's got to stay in the corner of a room. So I use a desktop because I'm always working in this room and I need a lot of power. And because I do video editing and things on this computer also, I have an exceptionally fast one. But if you're not doing that, then get a laptop. You can carry it around, and it can always pretend to be a desktop. And that's just fine. 13. The ADC and the DAC: Okay, so there's one piece of hardware that you might want to get. It is the one thing that most people don't know that they need. It's an extra little box that is just designed for professional audio work. So before I tell you what it is, let me explain why we need it. So let's say you want to record something on your computer. Your computer probably has a microphone built into it, right? So you've got a little microphone built into your computer somewhere, but that's not a professional microphone. That's really just designed to pick up your voice for, you know, video chat and things like that. If you want to use a professional microphone, you're going to get one that looks like this. And it's going to have a cable coming out of it that looks like this. Now, this is called an XLR cable. And one end of it looks like that. It has three pins. So now you're going to look around your computer and you're going to find a spot to plug this kind of a cable into your laptop or desktop, and you're going to come up empty. No computer that I have ever seen has the ability to plug one of these directly into it. It just doesn't happen. But perhaps you have a computer and it has a little eighth-inch microphone jack in it. So you could get some adapters and plug into that eighth-inch microphone input. If you have a PC, it might have this. So hypothetically, let's say you get some adapters and you plug your microphone into that little microphone input.
Going through adapters is never a great idea when it comes to professional-quality sound, but let's just roll with it for a minute. What's happening now? We have that analog and digital problem, right, that we talked about earlier. So this microphone is analog. Okay? My voice is analog. The microphone is picking up an analog signal, and it goes down that wire, through your adapters, and into that little eighth-inch jack on your computer. Still analog till it gets to the computer. Once it gets to the computer, your computer has a little chip in it that's called an ADC. ADC stands for analog-to-digital converter. That little chip is designed to convert an analog signal to a digital signal. Simple enough. Once it's a digital signal, it can roll through your computer and get into your software, and we can do whatever we need to do with it. Now the problem is, what's built into your computer is a cheap little ADC. It's not very good for professional-quality audio. We need a bigger, more robust, and more accurate analog-to-digital converter, because that conversion of analog to digital is important. How we convert that signal is really important, because you can lose a lot of fidelity on that conversion if you don't do it very well. And the things built into your computer, whether it's a Mac or a PC, are not very good. They're not designed to be professional quality. So what we do is we get a separate box. We get a box that looks like this. This is an analog-to-digital converter, and it's so much more. But let's start with analog-to-digital converter. So we can plug in a microphone. That solves that adapter problem, right? We can plug in a microphone here, or here; we can plug multiple microphones in. That solves another problem that we'll get to down the road. And then we can plug this into our computer with USB. Now, this box is an analog-to-digital converter.
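As a rough mental model (not how any particular converter chip is actually built), an ADC does two things: it samples the incoming analog level at regular intervals, and it rounds each sample to one of a fixed set of integer levels. Here is a toy sketch in Python; the sample rate, bit depth, and the 440 Hz sine wave standing in for a microphone signal are all illustrative assumptions:

```python
import math

def digitize(signal, duration_s, sample_rate_hz=44_100, bits=16):
    """Toy ADC: sample a continuous function, then quantize to integers.

    `signal` maps time in seconds to an "analog" value in -1.0 .. 1.0.
    All parameters here are illustrative, not tied to real hardware.
    """
    max_level = 2 ** (bits - 1) - 1        # e.g. 32767 for 16-bit audio
    samples = []
    for n in range(int(duration_s * sample_rate_hz)):
        t = n / sample_rate_hz             # time of this "snapshot"
        samples.append(round(signal(t) * max_level))  # quantize
    return samples

# A 440 Hz sine wave (the pitch A) standing in for an analog signal.
pcm = digitize(lambda t: math.sin(2 * math.pi * 440 * t), duration_s=0.01)
print(len(pcm), "samples for 10 ms of audio")   # 441 samples at 44.1 kHz
```

The list of integers that comes out is the "little nuggets of sound" from earlier; playing it back is the DAC's job, run in reverse.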
It's going to convert that signal into a digital signal with much more care than your computer is going to do it. These are designed to be fairly good, right? Now, there are cheap ones of these, and there are very expensive analog-to-digital converters that you can get. We'll talk more about that in the next video. This has another function too, though. Because when you plug headphones into your computer, you use that same little eighth-inch jack, right? Headphones into your computer is fine, but that same process, only backwards, has to happen. Somewhere in your computer, there's a little circuit called a DAC, a digital-to-analog converter. And that is also not professional quality. It's a cheap little one that's built in there. And yes, it sounds fine when you're listening to Spotify and things. But when you're really mixing music and you're listening closely to music, you want a high-quality digital-to-analog converter as well. And this does that too. One box is an analog-to-digital converter, meaning we can plug a microphone into it and send it to the computer, and a digital-to-analog converter, meaning we can plug our speakers or headphones directly into it and listen to our sound. It converts both. So if you want professional-quality sound, you need one of these boxes. We don't call it an analog-to-digital converter and a digital-to-analog converter. We have one word for these boxes, and it is an audio interface. That's what this is. It's an audio interface. So let's go to a new video and talk about what to look for in an audio interface. 14. The Audio Interface: Okay, there are a few things you're looking for when you're shopping for an audio interface. This one is by a company called M-Audio. It's called a ProFire 610. There's a reason that I just had this one lying around. It's because I don't really like it. I had problems with the computer seeing it, and it's old, and it just didn't sound all that great. M-Audio stuff is fairly inexpensive.
So if you're on a budget, you might try some of the M-Audio stuff, although it's hit or miss. So what you're looking for in an audio interface is, first thing, make sure it has the right connection. This one actually has, because it's old, an old-school FireWire 400 port. I can't even plug it into my current computer without a bunch of ugly adapters. So USB is fine, but it needs to be USB 2 or USB 3. And any faster connection is fine too, Lightning or whatever you want. But USB 1 is generally going to be too slow; USB 2 and 3 are fine. So make sure it has connections for what you have on your computer. The next thing you're going to look for is how many inputs it has. This one actually has four inputs. It's got one here, one here, and then what's called a line input here and here. Line inputs are for things like an instrument; a guitar can plug in here. They're not XLR cables, they're just instrument cables, quarter-inch cables. So I can put four things into this. The number of things you can plug into your audio interface tells you how many things you can record at once. So in this case, if I'm using this box, I could record four things at once. I can plug in two microphones, a guitar, and a bass, and I could set up four tracks and record those four things separately if I wanted to. But if I wanted to record a whole band, I can't do it with this, because I can only record four things at once. So how many things do you want to record at once? You're always going to find these things in multiples of 2, 4, or 8. So you'll find a box, an audio interface, that has two inputs, four inputs, eight inputs, 12 inputs, and then doubling from there, 24 inputs, 48 inputs. Not sure why that is, but it's always that way. The next thing you want to look for is how many outputs it has. You want at least two outputs, because your outputs are going to be your speakers, right? We're going to connect our speakers to this thing. This one happens to have eight outputs.
That's all of these. Now, having many outputs can be useful if you're trying to do submixes, or you're trying to do like a surround-sound thing or something like that. You could plug eight speakers into this if you wanted. Or if you wanted to send a click track to someone while they were recording something, you could do that if you have multiple outputs. More on that later. If you're just getting started, it's unlikely that you're going to need more than two outputs. So if I were you, I would look for something relatively inexpensive. That means in the $300 range. I mean, these aren't super cheap, but you can get fancy ones that go up to $30,000 for the really high-end audio converters. The one that I use here in my studio is an Apogee Quartet. I think it was around $1,500. It's a pretty nice one. It's only got two inputs; that's all I need here. And then it's got four outputs. Actually, it's got four inputs and four outputs, I think. But I really only need two inputs and two outputs here in my home studio. In the recording studio that I work in, I use a different interface that has 48 inputs and a whole bunch of outputs; I don't remember how many. So these can be cheap or expensive. M-Audio stuff is okay if you're looking at that price range. Look at MOTU as a company; they have some stuff in the $500 range that is quite good. Also Focusrite. The Focusrite Scarlett is a unit that a lot of people are using. It's really affordable. We use them all over the place at some of the schools that I teach at, and they're great. They're really reliable, they sound great, and they're really affordable. So check those out. Cool. So that is an audio interface. 15. Speakers or Headphones?: Okay, up next: speakers and headphones. Should you get speakers or headphones? In an ideal world, both is great: having a good set of speakers, which we call monitors in the studio lingo, and a good set of headphones. If you can only get one or the other.
A good set of headphones will get you pretty far. And what you can get for a small amount of money goes a lot farther with headphones. For $100, you can get a really good set of headphones. For $100, you cannot get a good set of speakers. You need a lot more money for a good set of speakers. So for headphones, a good set means good over-the-ear headphones, okay? So not earbuds. Those are fine for listening to music, but for professional-quality work, we really want an over-the-ear, kind of earmuff-style headphone like this. For a few reasons. One, they supposedly emulate the acoustic signal a little bit better. Another reason is that you might be wearing these for many hours at a time, and you want a comfortable set. I don't like having that earbud thing stuck in my ear. It just doesn't feel great after a while. So these are good and comfortable. Stylish. These are the Sony MDR-7506s. These are kind of a standard set. They're about 120 bucks, I think. Not everything, but most headphones that are in that $100 range are going to be pretty good quality, good studio-quality headphones. Highly recommended. When it comes to speakers, it's kind of a different story. With speakers, you could spend anywhere from $300 and up for a decent set of studio monitors. And if you're looking around online, what you want to search for is near-field monitors. That means a small, high-quality speaker that you're pretty close to. So you can spend a couple hundred dollars to get a decent pair. You could spend, you know, $30,000 or $40,000 to get the top-of-the-line pair. I just got new ones. I'm really excited about the speakers I recently got in my studio here. I got these Focal Twin speakers. They were about $5,000 for the pair, so about $2,500 each. So, you know, they're pricey, but they're really good. That's a pretty high-end speaker.
So go around to one of the music websites, search for near-field monitors, and see what you can find on the cheaper end. I think Sony has some that are somewhat inexpensive. The KRK Rokits are pretty good for the money. And then you get to the high end, like Genelecs and things like that. Those are the high-end speakers that are going to be quite expensive. And you can go up from there and get into something like Meyer Sound, where you start talking real money. So I don't really like working with headphones; I avoid it as much as possible. I always try out a mix on headphones, but I prefer to work with speakers so that I can just kind of relax and hear things. But you've got to have good-quality speakers to do that. And you've got to be able to make some noise. And perhaps you are in an apartment or something where you can't really make some noise. So get a good pair of headphones and you'll be just fine. 16. Microphones: Okay, next let's talk about microphones. So just like speakers, microphones can be anywhere from $50 to $50,000 for a single microphone. It can be insane. In fact, if you ever see a recording studio advertised and they say this is a $10 million recording studio, probably $9 million of that is in microphones. Microphones are expensive, and a good recording studio has a wide variety of really nice microphones. In order to do professional recording, you need a lot of them, because they're all specific to different situations. But if you're just learning how to do some stuff and you just want to test the waters, there's one microphone that I would recommend getting. If you want to buy a microphone, which you don't need to do at this point. But if you're dying to make some sound, let me tell you some stuff. So I have here three microphones. Okay, this one's a little fancier. This is what's called a condenser microphone. It's going to pick up a lot of sound and be much less focused than some other microphones.
Meaning, if I plugged this in right now, you might hear planes flying over and things like that in addition to my voice, other sound from elsewhere. Then one that looks like this. This is actually a measurement microphone; don't get one of these. This is for, like, measuring the acoustic properties of a room. I just happen to have it sitting next to my desk. This one is your good all-purpose microphone. This is called a dynamic microphone, meaning that it can handle a lot of volume; dynamics is what we call volume. They are not very fragile. They can really take a beating. If you've ever gone to a show at a bar or a club, you've probably seen 20 or 30 of these on the stage. This is called a Shure SM58. Okay? There's also the Shure SM57. Both of those are good. Any recording studio is going to have a drawer full of these things. They're a standard microphone that everybody has a bunch of. They're about 150 bucks. So for 150 bucks, you can get the same microphone that's in a bunch of recording studios. And they use these in recording studios in some situations. In other situations, they're going to use something more like this, which is much more delicate and picks up sound in a much more delicate way. But sometimes they use this. These are great for vocals. These are great for drums, although you need a lot of them to put all over the drums. They're good for amps. If you want to get a guitar sound, throw one of these in front of your guitar amp; you'll get a great sound out of it. So if you can only buy one microphone in your entire life, get an SM58 or an SM57. Look around online, find one used, because they're super durable. It'll be fine. That's what I recommend. 17. External Hard Drives: The last thing I'll mention when it comes to hardware is external hard drives. Your computer has a hard drive in it, but some of these sessions that we'll be making get to be pretty big. A gig or two gigs, maybe even more, depending on what you're doing.
So having an external hard drive is really useful. I'll show you in a minute my crazy hard drive tower. I have a lot of external hard drives. In fact, I can just show you on my desktop right here: each of these little circles and these icons, and that one. These are all hard drives, and these are big, these are 20-terabyte hard drives, which is a really big hard drive. I've named them all after lakes that have been in my life. Lakes in my life. It was just kind of fun. I grew up at a lake called a point. Chaos means this hard drive is just weird, so I call it my chaos drive. Games is a lake here in Minnesota. Iowa is the lake near my house. Lake Michigan, it's the big one. Lake Superior is another big one. And Nicole, this is another lake near my house. So anyway, I've named all the hard drives after lakes because I thought it was funny. Portable is one that I pull out and take back and forth to the different studios where I'm working. So I might work on a track, throw it on this portable drive, and then pull it out and take it somewhere else. That's the one I think you should have: a portable drive that you don't mind lugging around to other places. This one is probably, I don't know, it's probably 20 gigs or so. So it's not huge. And you can get these kinds of drives for 30 or 40 bucks now. So I would recommend having a good portable hard drive. 18. My Setup at Home: Okay, next I thought I'd just show you my little setup here. So this is my home studio. This is what it looks like. Now, the first thing you'll notice is that I have a stupid amount of displays. And that's because I've just gotten used to having a million things open at once. So that's how I roll. Off to the left here, we have some scores for a project I'm currently working on. Then I have the Ableton Push controller. And here I have the ROLI Seaboard keyboard, which I really enjoy. The Stream Deck, which is like my lifesaver. Maybe I'll talk more about that later.
These are my cool new Focal Twin monitors that I love. A big TV up here. A little display down here; not little, actually, quite big. Here I have a microphone I use to record my voice for videos, on a swivel so I can tuck it away. This is my Apogee Quartet audio interface. The other Focal Twin. Yeah, what else do I have? Here is my computer, so I have the big Mac Pro tower. And you can see I do have a million things plugged into it, if you go down there, and if we crawled around back, you'd see even more hubs and things. This is my crazy hard drive tower, where all of those hard drives are. The portable one is this one here, because it moves around. But these ones are big, big hard drives, and then I've got little ones in all of these little slots. I made this thing myself; I'm really quite proud of it. Anyway. And over here I've got some hardware, and a good old-fashioned turntable. So that's it, and that gets me pretty much everything I need. This is quite an elaborate setup. I wouldn't suggest that anyone needs this, especially all these displays, right away, but I rather enjoy it. So there you go. This is what I'm looking at when I'm filming videos. So now you know what the other side looks like. 19. What is a DAW?: Okay, so the next big thing to tackle is the software. Now, when we're doing production or recording, the big tool that we use for just about everything is the DAW. DAW stands for digital audio workstation. Now, this is a broad term that means any software that's designed to let us manipulate multiple tracks of audio over time. So any professional audio software is going to be considered a DAW. And some non-professional software would be considered a DAW as well, something like GarageBand. You can think of it as kind of like your Microsoft Word, but for audio, right? Like, in Microsoft Word, we have the ability to write text.
We can move text around and cut, copy, paste, add things, delete things, change the order, make some things bold, make some things italic. We have kind of equivalent things in the DAW program that you decide to use. And there are a bunch of different DAWs. I'll get back to that in a minute. So we have the kind of equivalent things, you know: we can move things around in time, make this part come before that part, flip it around. We can make things louder, we can make things quieter, we can make things sound different than the things around them. So it's kind of our main canvas for working, the DAW. So like I just said, there are a bunch of different ones. What you're seeing here on the screen is one called Ableton Live. Ableton Live is my personal choice. And typically what happens with people making or recording music is that you kind of pick one, learn to master it, and then stick with it for a long time. I have switched. When I first started doing all of this, I was using a DAW called Digital Performer. And I think I started using Digital Performer because that's what my teacher used, and so it made sense. So I started using it, and then after a while I switched to using Logic. And then after a while I switched to using Ableton. So I've switched a few times. And there could be a few reasons why someone would switch, but none of them are really crucial at this point. Early on, some features would be in some DAWs, you know; some DAWs could do things that other ones couldn't do. At this point, they can all pretty much do everything that all the other ones can do. It's just a matter of how it's laid out, what it looks like, what it feels like. They all pretty much have the same abilities, with a few notable exceptions. I still use Logic a little bit, so I bounce back and forth a little between Live and Logic.
I use Logic because if I'm collaborating with someone using Logic, then it's easier to use Logic than to convert the sessions back and forth. That's really hard to do, converting sessions between DAWs. A lot of the time, if I'm working on a film project, the film people like Logic because it plays nice with some of the video editing software. So I might use it for that. So I'm pretty comfortable in Logic and in Ableton. But if someone just said, go make something, I don't care what you do, then I'm going to use Ableton, because it's just my go-to thing at this point. So let's talk about what to look for in a DAW, assuming that you haven't made this decision yet. So I'm going to talk about the things that we look for, which are actually not many. But there are a couple of things that will separate the DAWs. So, what to look for when choosing one. And then I'll go through a kind of highly opinionated list of what the best-known DAWs are really known for. I'll talk more about that in a minute. But first, let's talk about what you're going to look for if you're choosing a DAW. 20. What to look for in a DAW: Okay, when you're trying to decide what DAW you should use, there are kind of three factors worth considering, and then a fourth that's not really worth considering, but I'm going to tell you about it anyway. So, number one, first and foremost: are you on a Mac or a PC? It used to be true that some DAWs worked on Mac and some worked on PC. At this point, all of the major DAWs that I can think of off the top of my head have versions for both Mac and PC. So it's not that big of an issue, with one exception: Logic. If you want to use Logic, you must be on a Mac. Logic is owned by Apple, and so they don't have a PC version, and they probably never will. So if you're on a Mac, you could choose Logic. If you're on a PC, cross out Logic; that's not an option for you. But I believe all the other major DAWs have both PC and Mac versions.
FL Studio was a holdout for a while; for a long time it didn't have a Mac version, but it does now. And I think all the other ones do. So that's the first thing to consider: is there a version for your computer? Which pretty much means, are you considering Logic? The second thing is hardware compatibility. So we talked about the audio interface before. That's the thing that you have to worry about. Does that hardware interface work with the DAW that you want to use? Not all hardware interfaces will work with all DAWs. But the good news is, most of them will work with most DAWs. So it's probably true that your hardware interface will work with your DAW. The big exception here is if you're considering Pro Tools. If you're considering Pro Tools, then you have to think about what hardware interface you get, because Pro Tools makes their own hardware interfaces, and they really like to work with only their hardware interfaces. So if you're going to use Pro Tools, you should get a Pro Tools audio interface. Now, just recently, in the last couple of years, they have come out with versions of Pro Tools that will work with some other hardware interfaces. So that is possible, but not all of them will work with Pro Tools. Pro Tools is quite finicky about the hardware that it chooses to use. So if you want to use Pro Tools and you already have an audio interface, be sure to look up the specs for your hardware interface and make sure that it's compatible with Pro Tools, and with the specific version of Pro Tools that you get, because not all of them work with all the different versions of Pro Tools. It's really bizarre and hard to keep track of. If you're not considering Pro Tools, then the odds are that your audio interface is going to work with your DAW. But before you buy any software, it would be good to look up the hardware that you have and make sure it's compatible with the software that you want to get.
It probably is, but it's always good to check before you buy anything. So that's number two. Number three is probably the biggest one, and that's price. This software can be expensive, and it can be cheap. There are DAWs available for free, and there are some available for $10,000, and everything in between. So that's what you're working with: between free and $10,000. Most of the applications are in the $300 to $500 range. I think Logic is about $200, and I think Live is in the $500 range. But they do offer an educational discount, so if you're a student somewhere, anywhere — which you are — you can get it for roughly half price. That takes it down to around $250–300. Pro Tools is the one that gets really expensive, because sometimes you have to buy the hardware with it, and they have big professional versions, and it just gets silly. So the price can vary wildly. But there are free applications: there's a program called Ardour that is great, and it's free. There are things like GarageBand that people use, which I wouldn't quite call a professional tool; GarageBand is quite limited in what you can do. But there's still a good amount you can do with it, and I believe it's free if you have a Mac — it just comes with your computer. It's fine for getting started. But if you really fall in love with GarageBand, I would encourage you to upgrade to Logic, which is an easy step up. More on that later. Okay, so price is the third big factor. The last factor that is maybe worth considering, maybe not — but people do consider it — is hipness. This is a weird thing, but there are certain genres where it's cool to be using certain software. There's not a great reason for it. It's just kind of the cool thing to do, I guess.
If you are making techno, bass music, things like that, then Ableton is probably the thing to consider. You can make all of those genres with any DAW, but the cool people are using Ableton, I guess. If you're doing big recording sessions, Pro Tools is the standard thing to have, although you can do that with any of the programs. If you're making big pounding dubstep with real gritty bass and stuff like that, then the cool thing to have is FL Studio. But again, there's no real logic to this. You can make all of these things with all the different programs; it's just what people happen to be using, for no really good reason. So if it's a concern for you that you'd be taken more seriously within a genre by having the right tools, then you can consider that. But you don't really need to, because it's not very real. I just thought I'd mention it. Okay, let's move on to my highly opinionated list of common DAWs.

21. A Highly Opinionated List of Common DAWs: Okay, so I'm going to go over this list. The cost of all of these changes fairly often, so look up the current cost before you make any real decisions; this is just what it is lately. And there are a couple of things I've left off this list just because I didn't think of them when I was making it, but I'm thinking of them now, so I'll talk about those in a second. We're going to go over this list, and then in the next video, I'm going to tell you, just for the money, here's what I would do. Okay, so first let's talk through this. I've listed a bunch of the different DAW applications here: a big summary of their strengths in my mind, the general cost, and some notes about them. Okay, so Ableton Live — I've already talked about it. Its strength is performance and production.
This performance thing means that if you see somebody DJing live, they might be using Ableton Live, because it's really the only one with performance built into it — the ability to use it as a live performance tool. But it's also really solid for production. The cost ranges between $99 and $699, because they have different versions: there's an Intro version, then what they call the Standard version, and then what they call the Suite. The Suite is $699; the Intro is $99. If you want to do professional work, you kind of have to have the Suite, I think. But you can always upgrade, and getting a smaller version and then upgrading to a bigger version is cheap, so that's worth considering. Okay, next on the list: Pro Tools. Its big strength is recording. I find editing and working in Pro Tools to produce music — to generate music — really cumbersome and somewhat frustrating. But for recording, it is kind of the industry standard right now; a big recording studio is probably going to be using Pro Tools. It also has various versions, so it runs between $699 and up to $2,000, and it usually needs its own hardware, which makes that price jump a lot. If you get one of the big Pro Tools HD systems, it's going to be up in that $10,000-or-more range. The other thing I'll add, based on some recent experience: the parent company of Pro Tools is Avid, A-V-I-D. Avid is the company that owns Pro Tools. I was recently doing some work with Pro Tools and some Pro Tools hardware, and I had to deal with their tech support, which I was warned against doing, because people had told me how utterly horrible their tech support is. But I had no choice, and I went to their tech support, and I can confirm it was one of the worst experiences of my life. So I really don't advise anyone to use Pro Tools anymore, because their tech support is so awful that I just don't think any human should be treated like that, to be honest.
So Pro Tools: not my favorite. Okay, moving on: Logic. The big strength of Logic, I think, is software instruments. Meaning, if you want to simulate the sound of an orchestra, Logic is really good at that; they're really good at making things sound real. You can do that in any of these programs, but it's really just built into Logic that you can cue up an orchestra that's going to sound pretty real. It can handle a lot of what we call virtual instruments all at the same time, so it's quite good for that. The cost is around $499, and I think I recently saw it down to $299, maybe even $199 — maybe it was a sale. Note: there's no PC version, because Apple owns it. Okay, Digital Performer is really good at MIDI sequencing. It's good at other stuff too; Digital Performer just is not very popular, and I don't know why. It's made by a company called Mark of the Unicorn (MOTU). It's a great program. I think it just didn't make it into any of those hipness categories, so it's not particularly fashionable, but it's a solid program. Ardour. Ardour is basically a Pro Tools clone: it's designed to look and act like Pro Tools. It's open source and free. That means Ardour can be a little tricky to set up, although it's gotten a lot better. But it's a full-featured program, and it's totally free. Check it out; you might like it. I know people — professional producers — who use Ardour, so it can be a great option. If something is open source, that usually means official tech support might be non-existent, but there's usually a really big community of users who are willing to help you if you have a problem. So check it out; it's worth considering. And I believe it's for Mac, Linux, and some other platforms. Cubase: similar category to Digital Performer. It's a great program; I don't have any problems with it at all.
It's not as hip as some of the others, but a lot of people are using Cubase and having great results, so check out Cubase if you like. Reason. Reason is a weird program. For a long time, Reason only did MIDI sequencing and synthesis, but it was really good at synthesis — and it still is. I think you can do audio sequencing now (sequencing means moving stuff around in time), but that's relatively new. So if you're really into synthesis, Reason is good for that. But it's not very popular, because for a long time it was basically only a synthesis tool and wasn't a full-featured DAW. Again, I think it is now, but it has kind of fallen to the bottom of the list, because for a long time you couldn't do full projects in it. Renoise is a weird one that I just thought I'd throw on there because I kind of like it. Weird program. It's really fun for making beats, and it's really cheap. I wrote that it's new and needs some time to mature; at this point I don't really think it's all that new anymore, but it's kind of neat. And GarageBand: it's good at introducing people to possibilities. It's free on a Mac, but there's no PC version, and it's not really a professional tool. It's great for introducing people to what you can do with music on a computer, but you can't really make professional-quality work with it, because it has a lot of training wheels on it, so to speak; it doesn't give you full control. Notably left off here is FL Studio. FL Studio is really popular with dubstep people and other people into EDM stuff. It's probably comparable to Ableton Live. I don't know what it costs, but it's on Mac and PC now, and it's a great program. It works a little differently, and I don't know it all that well. I have poked around with it a little, and what I see in the interface is a little confusing to me.
So I think it's different from some other programs, but they're all slightly different. It definitely has that hipness thing to it, especially if you're on a PC. And if you want to make dubstep or anything similar, then FL Studio might be your go-to thing, so it's definitely worth considering. Okay, so this is my highly opinionated list. Now let me just tell you what I think you should get if you don't have anything and you're just getting started.

22. For the Money...: Okay, so I'm going to break this down by the best free option, the best low-price option, the best mid-price option, and the best expensive option. That might be too many categories, but we'll see. The best free option is Ardour, I think. Now, a lot of people have asked me in the past about this program called Audacity. Audacity is a free program as well. You should get it, because it's free — why not? But Audacity isn't, to me, a full-featured DAW. Audacity is really good at a couple of things, but I really wouldn't want to make a full track in Audacity; I think that would be painful to do, and I don't think it's really designed for that. It's designed to do certain things, and we will look at Audacity in this class, but I wouldn't consider it a full-featured DAW. So hold on to Audacity for a little while; we'll talk about it more later. For a full-featured free DAW, I'm going to go with Ardour. There's another one that's popular right now; it's four letters and starts with an L — LMMS, I want to say. A lot of people are using that one and really liking it. I think it's PC only, so you might check that out. But in my experience, Ardour has been better. So best free: go with Ardour. If you don't have any money and you're thinking, "I just want to get in and start making stuff," try Ardour. Best inexpensive: by inexpensive, I'm thinking under $100.
For that — again, this is super opinionated — I'm going to go with Ableton Live Intro. That's their entry version. I think that's a really good way to get started: you can do a lot with it, and you can always upgrade to the full version from there. So, Ableton Live Intro: you can get it for $100, maybe under $100. And you can actually get a Lite version for free a lot of the time; it comes bundled with hardware and things like that — they give you a free license. So that's worth considering. Best mid-price: I'm thinking in the $200 to $300 range. I'm probably going to go with Logic on that, assuming the $199 price I saw recently is going to stick. If you're on a PC, maybe FL Studio is in the same range, so Logic or FL, depending on what the price of FL is. Okay, I just looked up the price of FL Studio. Right now it's comparable to Ableton: they have an intro version at $99, then a couple of middle versions, and then their full version at $737, is what I'm seeing. So in that mid-range, Logic or the $299 middle version of FL Studio might be a good option. And then the best if money is no object: what should you get? I'm going to go with Ableton Live. I'm not going to go with Pro Tools, just because I can't do it. Ableton Live Suite, or possibly the full version of Logic. But for me, Ableton Live Suite is the best one if money is no object. If you're into FL Studio, that full version at $737 is a good option as well. So that's what I would do.

23. The 4 Sections in Every DAW: Okay. So regardless of what DAW program you're using, they all kind of work the same. They all have different layouts and different places you have to click to do different things, but there are similarities in all the DAWs. In particular, they will all have four main areas.
You might have to open separate windows to get all those areas, or they might all be packed into one window; in Ableton Live here, they're all packed into one window. So those four areas are: number one, the timeline — all of these programs are going to have some kind of timeline; number two, the mixer; number three, the effects section; and number four, the transport. They're all going to have those four things somewhere. So what I want to do for the next couple of videos is go through each of those. My hope is that no matter what program you're using or looking at, you'll be able to follow along. I'm going to use Live, but you could be looking at GarageBand or Ardour or FL Studio and be able to follow along with this next section just fine, because we're not going to go into the details of how to use any of these programs specifically, but rather how each of these areas works to help you make or record music. Cool. So no matter what program you're working in, you should be able to follow this discussion of the four sections. Okay, let's dive in first with the timeline.

24. The Timeline: Okay, so the timeline is probably the biggest part of your program. For me, it's this big chunk of stuff here. What this lets me do is look at all my content. It might be audio stuff; it might be MIDI stuff. If I go down to the bottom — I think I have MIDI stuff; maybe I don't have any MIDI in this session. Ah, there are some notes there. That's a MIDI track, those little dots; that means you're looking at MIDI information. And these waveforms tell us we're looking at audio information. So you might have MIDI stuff, and you might have audio stuff. And the key thing here is that you can move it around. I've got this thing happening here; maybe I want to put it there. I can move it around. Maybe I just want to copy this part and put it over there — I can do that. I can adjust things and move them around within the timeline.
In a timeline, time always flows from left to right. So if I play this, you'll see what we call the playhead moving across the screen, and everything sounds when the playhead gets to it. Now, we also have tracks here. Each one of these horizontal lines — these groups of things — is called a track. Each track has its own settings that we can control, which we'll get to when we talk about the mixer. You can think of this as analogous to a musical score; that's really what it's designed to be like, right? In a score, each instrument has its own line, and we read from left to right. This works exactly the same way: every instrument has its own line, and we read from left to right — or in this case, the computer reads from left to right. Generally, we can make things bigger or smaller if we want to really get in there and see what's going on, or we can tuck them away and make them nice and small. You can usually look at your timeline either in terms of bars and beats, which in Ableton is at the top, or in minutes and seconds. If I zoom in here, you can see 25 means bar 25. 25.2 means bar 25, beat 2. 25.3 means bar 25, beat 3. 25.4 means bar 25, beat 4. And then we get to 26. For stuff in between: 25.4.3 means bar 25, beat 4, third sixteenth note. That timing notation works a little differently in each of the programs, but they all do something similar. You might also just see time in minutes and seconds; in Live, we have minutes and seconds at the bottom. So here we're at 49 seconds, 49.5 seconds, 50 seconds. In a lot of timelines, you will also be restricted to working on a grid. You can see these little blocks here: if I try to move this over, it's going to make me land on those blocks. I can easily turn that off by control-clicking and selecting Off, and then I can move this anywhere; I'm not restricted to the blocks. But we generally like to be on some kind of grid so that we know what's going on. But that's how the timeline works.
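To make that bar.beat.sixteenth numbering concrete, here's a small sketch of my own — not how any DAW actually computes it internally — that converts one of those positions into seconds, assuming the song is in 4/4 the whole way through:

```python
# Hypothetical sketch: convert a 1-based "bar.beat.sixteenth" timeline
# position (as shown in Ableton's ruler) into seconds, assuming 4/4 time.
def position_to_seconds(bar, beat, sixteenth, bpm):
    beat_length = 60.0 / bpm                    # one quarter note, in seconds
    beats_elapsed = (bar - 1) * 4 + (beat - 1)  # whole beats before this point
    sixteenths_elapsed = sixteenth - 1          # each sixteenth = 1/4 beat
    return (beats_elapsed + sixteenths_elapsed / 4.0) * beat_length

# Bar 25, beat 4, third sixteenth, at 120 BPM:
print(position_to_seconds(25, 4, 3, 120))  # 49.75
```

Notice that this lands right around the 49–50 second mark mentioned above, which is why the bars-and-beats ruler and the minutes-and-seconds ruler line up the way they do.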
We have content; we can move it around; we can cut, copy, and paste — I'll talk more about how to do that in a minute when we cover common key commands — and we can just work on our arrangement. That's where the majority of your work is going to happen: in the timeline, at least when you're producing music. When you're mixing music, you'll mostly be in the mixer section. So let's go on and talk about the mixer now.

25. The Mixer: The mixer section for me is over here. You might also have it on the bottom, and you might have to open it in a different window. So in your program, if you go up to the View menu and select Mixer, you can see the mixer. I have a few different ways I can look at the mixer in Live: I can look at it like this over here, or I can press the Tab key and look at it in a way that might look more like how it looks in other programs. I'm going to stay on this view, because most of you are probably seeing something that looks like this. Now, there are a few things in the mixer section that all mixers have. The first is the ability to control the volume of each track. So let's look at this one: I can turn it up; I can turn it down. I can control its volume. The other thing that is common in just about every mixer section is panning. Panning means the left-to-right balance. We're almost always working with two speakers, one on the left side and one on the right side. The reason we do that is because we have two ears, one on the left and one on the right, and we want to emulate that. So with this knob, which is called panning, we can decide how much of the sound goes to the left speaker and how much goes to the right speaker. When it's straight up, the sound is going equally to both speakers. So if I push this to the left, we're going to hear it move to the left side.
If I push it to the right, we're going to hear it move to the right side. Now, if that was backwards for you: the camera does a weird mirror-imaging thing, so I'm not sure if that's going the right direction on your screen, but to me, that's left and that's right. Let me demonstrate that by going over to the master channel, which is another thing all mixers will have: a master. This is the last stop; it's a composite where all of our tracks go. So this is everything. If I pan all the way left, it should now be entirely in your left speaker or headphone — nothing out of your right speaker, including my voice. Send it over to the right; now it's all the way right. If you didn't hear that panning, it could be because of the way this video is compressed; sometimes the panning doesn't survive video compression — they just get rid of it. It's weird. But you can see it in the meter here: you see two signals; now you only see the left signal; now you only see the right signal. So you can tell it's working. Another thing that is common in all mixer sections is a mute. Mute means: turn this track off. For me, it's this big yellow button in Live. So I can mute this one; now you can see it got grayed out, and we're not hearing it. I can mute a whole bunch of stuff — mute all the drums — and now no drums are playing. You can unmute them by turning them back on. We also have solo — that's the S for me. Solo means: mute everything except this track. So if I solo this, we're just going to hear this kick drum, right? I can see what's on it; turn it up. So solo mutes everything except what we've soloed. And then the last thing that almost all mixers have on each track is a record button. This means "arm to record"; it doesn't mean "start recording." What these always mean is: if I press record on one of these, it'll turn red.
Red is like the universal color for record. So it'll turn red, and then it'll wait for me to press the record button up here. When I do, it will start recording on the track I've armed. So we call this "arm to record," meaning: when I start recording, we're going to record onto that track. If I don't press that button on any track and I just hit record, there's no track actually recording. You'll find those controls in every mixer in any program. They might look a little different, but all of those things are going to be in all of the programs.

26. The Effects Section: Okay, next, there's going to be an effects section. You can put effects on any track. Effects are things you might be familiar with, like distortion. They're all going to look different, but you probably know what distortion is: it's going to add some fuzz to the sound. We also have things like reverb, even Auto-Tune. Those are all effects; they're just going to change the sound. Now — let me actually put one more thing on here to demonstrate something — when you're thinking about effects, it's important to remember the idea of signal flow. Signal flow means there's always a path the audio takes through your program. For me — and this is true in most programs — this is my effects section, and the audio comes in here and goes out here, flowing through these effects in this direction. That means this effect comes before that effect. And that's important. There are cases when you want the effects in a certain order, and you can usually just drag them around to reorder them. But it's important to remember that in this case, what I have is distortion first and then reverb, so the reverb is going to be applied to the distorted signal. That should be okay, but maybe I want it the other way around.
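To illustrate why that order matters, here's a toy sketch — these are crude stand-in functions I made up, not real DSP — where each "effect" transforms a list of samples and a chain applies them left to right, the same way audio flows through the effects section:

```python
# Toy illustration of signal flow: each "effect" is a function on a list of
# samples, and a chain applies the effects in order, like a DAW's device rack.
def distortion(samples):
    # crude hard clip at +/-0.5, standing in for distortion
    return [max(-0.5, min(0.5, s)) for s in samples]

def reverb(samples):
    # crude "echo": mix in a delayed, quieter copy, standing in for reverb
    return [s + 0.5 * (samples[i - 2] if i >= 2 else 0.0)
            for i, s in enumerate(samples)]

def chain(samples, effects):
    for fx in effects:  # audio flows through the effects left to right
        samples = fx(samples)
    return samples

signal = [0.0, 1.0, 0.0, 0.0, 0.0]
print(chain(signal, [distortion, reverb]))  # the clipped signal gets an echo
print(chain(signal, [reverb, distortion]))  # the echo itself gets clipped
```

The two orders produce different output lists, which is exactly the point: swapping the devices in the rack changes the sound, even though the same two effects are involved.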
So we get reverb and then distortion, so that the reverb sound — the kind of echo sound — has distortion on it as well. Both ways could work, but it's a decision I should make. There are other cases where the effect of the ordering is going to be more extreme. So it's just one thing to think about. Another thing to think about with the effects section is that this is where you're going to put plugins. Some programs separate audio effects and plugins into two categories — that's how Ableton does it; sometimes they're all lumped together. But plugins are typically their own little program. You can think of a plugin as a whole separate program that runs inside of your DAW. They do different audio effects, or they might be synthesizers or other sound-making things. So these are going to look very different from the effects built into the program. The built-in effects are made by the maker of the program, but these — let's grab this Guitar Rig — when I load up this Guitar Rig, it pops open a new window; it's a whole separate program. But my signal, my audio, is still flowing in and out of that effect on either side. It is its own little program; I can close its window and it'll still work. Plugins are usually made by other companies. You can find thousands of plugins around the internet: some are cheap, some are free, some are expensive; it all depends on what you're looking for. We'll talk more about plugins later. But that's the effects section.

27. The Transport: The last chunk that all applications have is some kind of transport section. For me, the transport is this top part. The easiest way to spot the transport is that it's going to have your big play, stop, and record buttons on it. But there's some other important stuff here too. You might have your loop settings here; this is how I turn on a loop.
If I want a section of the song to just loop, this is where I do it. This is where I see the tempo of my song; this is where I see the meter. This is a metronome I can use if I'm recording. So the transport section is probably the easiest section to understand, but it's something you should not forget about; there are a lot of important things here, and they're all pretty specific to each DAW. This tells me where my loop starts and how long it is. These are overdubbing controls — what happens when I record over audio or MIDI that's already there. And then stop, play, and record. This is our current position. This means follow along: if I go back over to my timeline — and let me turn off the metronome — and press this follow button, the playhead stays still and the music scrolls underneath it. I generally like it; it's kind of relaxing to watch, actually. Anyway, don't forget about your transport section. It is important, but it's also relatively simple.

28. Nearly Universal Key Commands: Okay, when you're working in your DAW, there are a couple of key commands that are nearly universal — they're almost always the same in all programs. Now, I'm not a big fan of saying you should memorize thousands of key commands, like some people are. Though if you get really good at a DAW, you will eventually know a lot of its key commands, just because you'll find yourself doing the same thing a hundred times in a row, and then you'll ask, "Hey, is there a key command for that, just to save myself some work?" And there probably is. So key commands are good to know. I'm not a fan of saying you should sit down, study the list of key commands, and memorize all of them; that's not very useful to me. Memorize the ones you're going to use all the time. But these few that I'm about to tell you are the same in virtually every program I've found, and they're really useful to know. The first and most common is the spacebar.
The spacebar, in just about every application I've ever worked in, means play and stop. It's a toggle. So I'm currently stopped; I put my cursor somewhere and press the spacebar, and it starts playing. I press the spacebar again, and it stops. The spacebar always means play and stop. Another set of nearly universal key commands is anything you can do in a text editor. So if I select something, I can press Command-C — I'm on a Mac; if you're on a PC, it's Control-C — to copy it. I can click somewhere else and press Command-V to paste. I can press Command-D to duplicate, and Command-X to cut, if I just want to remove something. That is true in every program I've ever worked with. It's the same text-editor stuff, right? Copy, paste, cut, duplicate. Maybe duplicate isn't standard in text editors, but Command-D is almost always duplicate in a DAW. So file those away in the back of your head; those are probably the most common ones you're going to need, and they're really handy to have. There you go: nearly universal key commands.

29. Care and Feeding of Your DAW: So, a couple of things about working in a DAW that are just generally good practice. The first: keep your program updated. Whenever new updates come out, run them. If you're doing professional work and want to be working at a professional level, you should always be working with the latest tools. So keep your application up to date. When there's a major update — when it shifts by a whole number, like Ableton 10 to Ableton 11 — you might stop and think about it for a few days to decide when it's a good time to convert. But for little updates — things like Live 10.2 to 10.3 count as little updates — just run them, because they'll probably make your program more stable, and they might even give you some new features.
So always keep everything up to date. Now, for those big updates, I also recommend that you work on the latest version. But keep in mind that you usually can't go backwards. If you make something in Ableton Live 11, for example, you can no longer open it in Ableton Live 10. You can't go backwards, but you can always go forward: anything you made in Ableton Live 10, you'll be able to open in Ableton Live 11. That's true of all software, I think. But I do recommend being on the highest-numbered, latest version, whatever it's called. The second thing to keep in mind: if you want to be doing professional work, get the biggest version of the program. If you're going to start with something smaller — going back to FL Studio, say, one of those smaller versions — that's fine. But make it a goal to graduate up to the highest version. You'll get to a point where you think, "I can't do anything more with this," and then it's time to upgrade. The more you use it and the more you learn it, the more you'll become aware of the limitations that the lighter, smaller, cheaper version has. So always make it a goal to get to the biggest version as time and money — mostly money — allow. With that said, let's talk about having a cracked version of the software. Now, I know a lot of people are totally fine with working on cracked software. If you're not familiar, that means you've basically downloaded the software off some website somewhere and didn't legally buy it. You know, I'm not a warrior about this; I don't really care about stuff like that. You do what you gotta do. However, let me just seed this into your head: once you make something with that software and you're making money, then buy that software. Cool? Can we come to an agreement on that? And — I probably shouldn't say this, but I'm going to — that's especially true if you're working with one of the smaller companies, right?
Ableton, for example, is there's no parent company to Ableton, able to make Ableton, and that's how they employ a bunch people. And they're all such good people. So by the software. But if you buy a copy of logic and you know, give apple another a 100 dollars, is it going to change anyone's life? Probably not. So maybe consider that. But there's another downside and that's that crack software often has problems, right? Like if you're into using this kind of software, you've probably encountered problems with it. The biggest problem is that you usually can't update it. And so you're, you're locked into whatever you have. And it might have some glitches in it because of the cracking process, that process of removing the license verification stuff. So I'll just reiterate, you do what you gotta do. I'd rather you were making music than not making music. So do what you gotta do to get the software to make music. But if you ever make something and you sell it, you sell a track. The minute you make $1. With music you've made, promise me, you'll get rid of that crack copy by illegal copy. Or before. Always better to have the full legit version so that you can update it. You can get support, you can get extensions to it. I mean, if you have a crack version of Ableton and you go to Ableton and say I have a cracked version of lives light, can I upgrade the sweet they're going to say no. But if you have a legal version and he tried to upgrade to sweet, then yeah, it's cheap to do. So okay, enough on that. Do what you gotta do. But think about the little people working at the company. Okay, let's move on. 30. What is the Grid?: Okay, up next, let's talk about how we organize sound in our DAW application. Now throughout this whole section, I'm going to be using Ableton Live, but you're welcome to use whatever you want. Everything that we talk about in this section. 
And actually, probably from here on out until the end of the class, unless I specifically point it out, everything is true in just about every professional DAW, and even non-professional DAWs, even something like GarageBand. This is probably all still true there. So you can use whatever software you want and follow along; you'll be just fine. So we talked about in the last section the four different areas. We're going to focus in here on the timeline, and we're gonna talk about the grid. Every program works on kind of a grid system. So let me just throw something in here. I'm just going to grab a little clip. Let me find a drumbeat here. Throw that onto an audio track. How about right there? Okay. Now you can see that we have these little rectangles here, right? These might look different in different software, but see how I kind of snap to them? I can't get in between those two things. Well, I actually can; we'll talk about that in just a minute. But when I talk about the grid, this is what we're talking about, and it's the way that our software snaps onto the grid. Okay, so that snap-to-grid thing means that we're going to stay locked into these spots here. Now, this is generally good. If you're making any kind of beat-based music, you want to be locked into the grid like that, because that's going to keep everything nice and tight, and it's going to sound good as long as you start it on downbeats, basically. More on that in a minute. If you're not making beat-based music, you don't want to be on a grid, right? If you're making more abstract music, maybe ambient music, anything like that, you want the freedom to put things anywhere. So you would turn off the grid. This is going to be different in every program, but for Ableton, I'm going to Control-click anywhere on the grid, get this menu, and then go to Off. Okay, now you can see those grid lines are still there, but they're dotted lines. And I can just put stuff wherever I want.
And I can zoom way in and really put things exactly where I want. Now, look at this. This is one of the biggest mistakes people make when they're just getting started out: they think they're on the grid. If they're not snapping to the grid, they might do something like this, where it sure looks like I'm on the grid right there. Right? But if I zoom way in, I am not on the grid. So that is going to cause problems if I'm doing beat-based things. So always zoom way into your grid and make sure you're right on it, even if you're not snapping to it. If you want to be on the grid, get right up on there, zoom in as far as you can, and make sure you're right on that line. Then I'm going to zoom out, and I'm there. Okay, so let's talk about what this grid actually is. First let's talk about the horizontal element of the grid, and then we'll talk about the vertical element. 31. Horizontal = Time: Okay, so the first thing I'm gonna do here is turn my grid back on. So I'm going to go to this adaptive grid, medium. Actually, let's go smaller than that. Now we're back to where we were. So we're going to talk about the horizontal element here. Going across this way, what are we looking at? First, let's talk about that thing I just did. If I go here, I have two choices for my grid: adaptive grid and fixed grid. You probably have this in your software too. So if I say fixed grid and I say one bar, what that means is we're going to see every bar; each one of these blocks is now one bar. If I zoom way in, we're looking at one bar from dark line to dark line. I'm seeing beats here, but it's going to snap me to every bar, right? It's only going to let me do things on the bar. Okay? So if I want this drumbeat to start on beat two, I can't do it. I can only start on this bar or this bar. Okay, so it's a little prohibitive. So let's say half note. Okay, now we're slicing the bar in half.
Now I can start on beat three of the bar, which is halfway through. So if we look here... well, let's go all the way back to the beginning so that it's a little more obvious. There we go. Okay, here's bar one. Okay, here's bar two, right? So there's a halfway point. That's why we said half: because I've sliced the bar in half. Let's say quarter. Now I've sliced the bar into quarters, and I can start anything on a quarter note. Now, importantly, no matter how far I zoom in here, I'm only going to be able to put things on the quarter note. If I drop something right here, it doesn't even move. It's either there, or there, or there. I can't get anywhere in between, because that's what my grid is doing, no matter how far I'm zoomed in. So I can make it smaller and smaller. But if I go to this adaptive grid setting, what this means is that what I'm going to snap to depends on how far I'm zoomed in. So let's go to narrow. Okay, so now I'm looking at thirty-second notes. Let's zoom out a little bit. Okay, now I'm looking at sixteenth notes, so I can snap to sixteenth notes; each one of these is a sixteenth note. Let's zoom out a little bit more. Okay, now I'm all the way back to looking at half notes again. So I can get on a half note, I can get on another half note, but I can't get anywhere in between. But if I wanted to get more in between these two spots, I can zoom in just a tiny bit. Now I'm looking at quarter notes. Okay. If I wanted to get in between these two quarter notes, I could zoom in a little bit more. There we go. Now I'm looking at eighth notes. Okay? If I wanted to get in between these two spots, I zoom in a little bit more. There we go. Now I'm looking at sixteenth notes. If I wanted to get in between these two, zoom in a little bit more. Now I'm looking at thirty-second notes. And I can keep going and going and going. Now I'm looking at 2048th notes. Okay, so incredibly, incredibly small.
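If you're curious what snapping actually does under the hood, it boils down to rounding a position to the nearest grid division. Here's a tiny sketch in Python; the function name is my own invention for illustration, not anything from Ableton:

```python
def snap_to_grid(position_beats, division_beats):
    """Round a clip position (in beats) to the nearest grid line."""
    return round(position_beats / division_beats) * division_beats

# Fixed grid at one bar of 4/4 (division = 4 beats): a clip dropped at
# beat 5.8 snaps back to beat 4. With a quarter-note grid it snaps to 6.
print(snap_to_grid(5.8, 4))   # prints 4.0
print(snap_to_grid(5.8, 1))   # prints 6
```

The adaptive grid is the same idea, except the software picks `division_beats` for you based on how far you're zoomed in.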
So in this adaptive mode, your grid depends on how far you're zoomed in. Now, not all programs do this next thing, but in Ableton you can always tell how far you're zoomed in by looking in the lower right corner. Right there it says 8th: I'm looking at eighth notes now, and each of these boxes is an eighth note. Zoom in a little farther, now I'm looking at sixteenth notes; farther, thirty-second notes; and on I go. Okay, here's 16384th notes. Pretty quick. Cool. So that's an important element of the grid. For my purposes, I almost always keep this on adaptive grid, narrow. Now, another important element in the horizontal line of our grid: if we have a clip that's beat-based, right, like this audio file, it's a beat. Here's what's happening. All right, so the first spot of the clip right here is on a downbeat. And if I set that right, then the rest of the clip is going to be in time. Okay, here's another downbeat, because it's a whole note... a whole number, I should say. And it's right on it, right? It's lined up. So if everything lines up right, our drum loop here is going to be perfectly on the beat, assuming it's recorded or adjusted to fit the beat that we have. We'll talk more about that later; we can stretch something to fit our tempo. Ableton does that automatically for you. Other programs do it automatically too; there are some settings behind it, and we'll talk more about that later. But it's important to know that if my downbeat is lined up right, the rest of the beat will be lined up right. Now, in a minute we're gonna go into making a beat from scratch, in which case we'll just have smaller things like this. Like, here's a kick. Here's the snare. And we put a snare again and again, and then our kick again. And with these, we just need to manually line them up. So we'll talk more about that in a minute. Last thing about this horizontal layer is the numbering.
If you look up here... I think we talked about this already, but just a reminder. If you see a whole number, so just the number 2, that means we are at bar two, beat one. If you see a number, a dot, and another number, that means, in this case, bar 2, beat 2. Okay, here's bar 2 beat 3, bar 2 beat 4, and then we're gonna go to bar three, because there are four beats in a bar. Now if I zoom in a little farther, we're going to see more numbers in between there. So now we have bar 2, beat 1, third sixteenth note. Okay, there are 4 sixteenth notes possible per beat, so 1, 2, 3, 4; that makes the third sixteenth note the halfway point between the beats. So the way Ableton does it is bar dot beat dot sixteenth note. Other programs might do something different; you might see a weirder number at the end. But almost all of them do bar, dot, beat, and then either a sixteenth note or some other timing mechanism for the last number. You can also look at this in terms of time. If we go down here to the bottom, we see time: three seconds, 3.5 seconds, four seconds, 4.5 seconds. The seconds, you might think, "Oh, that's what I want to see, because that's more familiar to me." It's actually much less useful, because what I really care about is where things are on the grid. The grid is the most important thing here, not so much the timing. Okay? We'll work more with this once we start building a beat, which we'll do in just a couple of videos. But first let's talk about the vertical axis. 32. Vertical = Tracks: Okay, on the vertical axis, we have tracks. Each one of these is a track. So we've already talked about the two kinds of tracks we can have here: we have audio tracks, which are these two, and we have MIDI tracks, which are these two. When it comes to the grid, there are basically a couple of things you're going to want to do when you're working on a project. First, try to group things together.
These are both drums, so I'm going to keep them side by side. You can always rearrange things by just dragging them. So if I wanted this to be in between those two, I can just drag this track above there, and now it's there. That doesn't change anything; it just keeps things tidy. The next one is: name your tracks. So if I go here, for me I'm going to press Command-R to rename this. I'm going to call this... well, let's see. I have kick and snare on this track, so I'm actually going to break this out into two different tracks. I'm going to call that one kick. Then I'm going to duplicate it, delete my kicks from this one, and delete my snares from that one. Now I'm going to call this one snare. When we're working with individual drum sounds, or actually any individual sounds, I like to keep them on separate tracks as much as possible. Sometimes you just can't do that. But in a case like this, I only have kicks on this track, so let's call that kick. Now I have only snares on this track, so let's call that snares. This makes it a lot easier when you get into the mixing phase. I can do things to my kick without affecting my snares if they're on separate tracks. So for example, if I turn the volume of this kick way down, that's fine; I can do that if I have it set up like this. But if the snares are also on that track, now I've turned down the volume of my kick and my snares. That's less convenient. This way I have separate volume adjustments for my kick and my snares, not to mention all of the effects I might put on there. If I put some effects on this kick, I might not want those on the snare. So any individual sounds, try to put on separate tracks. It'll just keep things more organized. It might mean you end up with tons and tons of tracks, but that's okay; that's how big sessions work. This one has kicks and snares and hi-hats, because this is a whole loop.
Okay, so I'm going to call this one drum loop, because it's got a whole loop on it. That's going to tell me this is a lot of different stuff. Cool. So: name your tracks, and keep them grouped close together by similar sounds, right? If I had a synthesizer here and a synthesizer here, I'd keep those close together, just so that my synths are together. Okay, so none of that will really affect the sound, where these are ordered. But it just makes it easier to keep track of things as you start to build a whole song using this stuff. 33. How DAWs Handle Meter: Okay, let's go back to the horizontal element a little bit. I want to point out one kind of weird thing about how these programs work when it comes to meter. So first let's talk about what a meter is. All music is in a time signature. If you don't know what time signatures are, I'm not going to go into a huge amount of detail about how different time signatures work in this video. But let's just focus on the difference between a time signature like 4/4 and a time signature like 3/4. Okay? So in 4/4, we have four beats in a measure, and the beat is a quarter note. That's the bottom number. It's a four, so we're going to call it a quarter note, and there are four of those in a measure. So for example, this beat is in 4/4. I'm going to solo it, so we only hear this beat, and let's count to four while we listen to it. You'll notice that it basically starts over every four beats: one, two, three, four, etc. Okay, so that's in 4/4. Now, we could have a beat that's in 3/4, and that would be the same thing except it would line up every three beats. It'd be like 1-2-3, 1-2-3, 1-2-3. Got it? Okay. We can set the time signature of our session up here in the transport area. Right now it's set to 4/4. What I want to point out here is: if I change this to 3/4, now my session is in 3/4. And now let's listen to this loop. It's still in 4/4, even though I changed the session meter.
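What changes is just the labeling on the grid. Here's a toy sketch of that idea (my own illustration, not how any DAW implements it): the same absolute beat position gets a different Ableton-style bar.beat label depending on the meter.

```python
def bar_beat_label(absolute_beat, beats_per_bar):
    """Turn a 0-based absolute beat count into a 1-based bar.beat label."""
    bar = absolute_beat // beats_per_bar + 1
    beat = absolute_beat % beats_per_bar + 1
    return f"{bar}.{beat}"

# The audio at absolute beat 3 doesn't move, but its label changes:
print(bar_beat_label(3, 4))   # in 4/4 -> "1.4"
print(bar_beat_label(3, 3))   # in 3/4 -> "2.1"
```

Same moment in time, two different grid labels. That's all a meter change does to existing content.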
The session time signature doesn't really change our content. What it changes is the grid. Okay, so let's put this drum loop on beat 1. So our drum loop now starts on beat 1. Now you'll see that it goes 1, then 1.2, then 1.3: bar 1 beat 2, bar 1 beat 3, then bar 2. So we're only showing three beats per bar on the grid, which is correct, because I've changed it to 3/4. However, our drum loop hasn't changed. The downbeat of the drum loop is still up here. This is beat 4, right? But beat 4 is now listed on our grid as a beat 1. So we have the wrong meter for our drum loop: our drum loop is in 4/4, and our session meter is in 3/4. That's fine; you might choose to do that. But what it means is that our drum loop isn't going to line up on our grid very well, right? Because our grid is in three and our drum loop is in four. So what I want you to get out of this video, if you're confused, is this: changing the meter of our session does not change the content in it. It really only changes the grid, and where things line up on the grid. See, this kick is at bar 2, beat 2. I'm gonna change it back to 4/4 now, and it just moved to a different beat number, because the grid started counting at the beginning and shifted everything over. Now, did it change the way it sounds? No. It changed how the grid is arranged, but it did not change how it sounds. Important thing to understand. Moving on. 34. Vocabulary: Downbeats, Upbeats, and Offbeats: Okay, we're going to start moving towards making a beat, and we're going to make a beat using individual hits like these. Before we do that, though, let's talk about a little bit of vocabulary. So we have here... let me do this. Okay? So this bar here is showing us one measure, one bar: four beats. So, a little vocabulary about building a beat. There are really three elements to a beat: downbeats, upbeats, and offbeats. Downbeats: there's only one downbeat per bar, and it is here. It is the beginning of the bar.
That is the downbeat. The next downbeat in time doesn't happen until the beginning of the next bar. Okay? Those are our downbeats. The upbeat is halfway through, right there, which is going to be beat three: halfway between two downbeats. So if our downbeats are here and here, our upbeat is right here on beat 3. There is only one upbeat per bar, because it's the halfway point. Unless we're in double time, which we'll talk more about later. And then the third thing is our offbeats. So, our offbeats (let me make a new track here)... our offbeats are going to be in between every beat. There are four offbeats per bar. So here's beat 1, here's beat 2, and halfway between them is the third sixteenth note. Those are our offbeats, because they're off the beat. There are four of those in every bar. So: downbeats, upbeats, and offbeats. Four offbeats in every bar, one upbeat, and one downbeat. These terms are going to be important, because we're going to build a little drum loop and we're going to need all of them. All right, next, let's talk about the elements that go into making a beat. 35. Elements of the Beat: Kick, Snare, Hi Hats: Okay, what sounds go into a beat? We're going to start by making what we call the world's most simple drumbeat. This is going to be no frills. Now, there can be anything in a drumbeat, you know. You can use spoons clanking together. You can use hitting on your desk. Any percussive sound like that. One of the most famous producers right now, deadmau5, talks about using farts to take the place of snares. You could do that if you wanted to. We're not gonna do that here, though. We're going to use the three most basic things to build our first beat. And the three most basic things that any beat needs are a kick, a snare, and a hi-hat. Those three things. So I basically have kicks, snares, and hi-hats here, but I'm going to erase them, because all three of these things I pulled from this loop.
So let's get rid of those. I'm going to un-solo this loop, and then I'm going to mute it, because I don't want to hear it. Now I'm going to search through my library of stuff and find some sounds. So I'm going to search for a kick. Ooh, cool, I'm gonna go with that one. So I'm going to put a kick on my timeline; I'm going to put that on my kick track. Okay. Now, you probably don't have a big library of sounds like this. This is a great opportunity to head over to a website like the one I mentioned before. I love this website. You can find millions of things there for totally free, royalty-free download all day long. Okay, now for a snare. As typically happens when I'm searching through, like, hundreds of snares, one of the first ones I heard was the one that I want. That one. Ultimately, it's got a little bit of a clap to it, only because that's kinda what I'm feeling like right now. Okay, so here's our snare. Now for a hi-hat; let's search for a hat. One thing you'll notice here is that a lot of these hats are little loops, and I don't want a loop right now. I just want a single hit. We sometimes call these one-shots, like just one shot of a hi-hat hit. That's a one-shot. That's what I'm looking for, although I don't like that one. Here: nice, nice and thin. We'll go with that. Okay, so here are my three sounds: kick, snare, and... that's not a snare. Let's rename that to hi-hat. Okay. Now the next thing I'm gonna do is shorten the sounds. You see how there's a lot of empty sound at the end of these? I want to shorten these to be roughly a quarter note long. So I can grab the ends of the clip and push it in, or I can just highlight here and delete all this extra stuff. If it's still ringing, I can draw a little fade-out on it just to make sure it's not still making sound. Okay. The hi-hat I might want even shorter, but we'll see once we start building our beat. So those are the three most basic sounds that we need for a beat.
We can add in all kinds of stuff later, but this'll get us off the ground. Okay, so let's go in and start talking about the elements. 36. Building the Worlds Most Basic Beat: Okay, so here's the formula we're going to work with in order to make the most basic beat. Now, before I tell you the formula: if you're not into beats and you don't have any interest in making beats, that's okay. Just use this as a way to get to know the DAW. So just roll with me here for a few minutes, and then once you make this, you'll kinda understand how things work on the timeline. So just go with it for a second. The formula for the most basic beat is: kick on the downbeat, snare on the upbeat, hi-hats on the offbeats. Okay? So the first thing I'm going to want to do is make sure my samples, my one-shots, start right where I think they start. So if I look here, this one's got a little bit of space right there, like that. So I'm going to tighten that up and then slide that over. This one has a little bit of space; I think that's probably fine, because we're really zoomed in here. So I think that one's okay. My kick is just a really crazy waveform, as kicks sometimes are, and we're going to leave it how it is. That'll be fine. Okay. So now I'm going to zoom out, and let's get rid of these old things. Make sure I'm looking at one bar. This is one bar; so this is beat 2, beat 3, beat 4. Okay, make sure you're at the right resolution. It's really easy to be zoomed way in here and be like, "Cool, here's my beat," and then realize that your beat is hyper fast, right? That's not interesting. We gotta make sure we're zoomed out so that we're getting the whole beat, the whole bar. Now, once we build this beat, you'll be able to adjust it to do different styles. So one of the cool things is that we're going to make this beat, and then just by nudging something around, you fall into a different genre entirely.
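If it helps to see the formula laid out on paper before we place anything, here's the one-bar pattern written as a 16-step grid in plain Python (just an illustration; positions are sixteenth notes, numbered 0 to 15):

```python
def step_row(hits, steps=16):
    """Render a set of sixteenth-note positions as a step-sequencer row."""
    return "".join("x" if i in hits else "." for i in range(steps))

kick  = step_row({0})               # downbeat: beat 1
snare = step_row({8})               # upbeat: beat 3, halfway through the bar
hats  = step_row({2, 6, 10, 14})    # offbeats: in between every beat

print(kick)    # x...............
print(snare)   # ........x.......
print(hats)    # ..x...x...x...x.
```

Reading the three rows top to bottom is exactly what we're about to build on the timeline.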
Beats are very genre-specific, so we can just move the kick drum over an eighth note, and suddenly you've got, like, drum and bass (crank up the tempo, but still). Okay, so let's start. Let's do each element one at a time, starting with getting our kick in the right spot. 37. Placing the Kick: Okay, so remember the formula: kick goes on downbeats. So our kick is on a downbeat; it's right there. Let's solo our kick, so we're only going to hear the kick, and we're going to turn this loop on. So we're gonna loop one bar over and over. My loop is turned on, and this is the area that's going to be looped. And here we go: 3, 4, 1, 2, 3, 4. Okay? If I want to double-check that I'm right, I can always turn on my metronome. And here: 1, 2, 3, 4. Cool. So our kick is on beat 1. That's our downbeat. That's where we want it. And we could jazz it up a little bit by putting our kick also on the upbeat, on beat three. But hold on to that for now. 38. Placing the Snare: All right, the snare goes on the upbeat, here. Okay, so beat 3, halfway through. Now let's listen to our kick and snare at the same time. So I'm going to Command-click solo here, so that we're soloing both things. I could also just mute everything but these two; let's do that. So now my hi-hat and my drum loop are muted. Let's go back to the beginning. Okay, so far, so good. Now, I mentioned earlier the idea of double time. What we could do, and what we will do in a minute, is take this beat and double everything, to make it so that everything happens within the first two beats. We'll talk more about that in just a minute. But for now, let's just move on to our hi-hats. 39. Placing the Hi Hats: All right, our hi-hats are gonna go on the upbeats... or sorry, our hats are gonna go on the offbeats. Now, I can't exactly see my offbeats here, so I have to zoom in a little bit more so I can see my offbeat. It's right there.
Okay, now I'm going to copy, and click right here, paste; click right here, paste; click right here, and paste. Okay, now we have the whole thing. So let's hear it. Oops, I've got to unmute my hi-hat and turn off my metronome. Cool. Now, the last thing we wanna do with this is double it up, like I was just talking about. So to double it up, I'm going to leave my hi-hats right where they are. I'm going to move my snare to be not halfway through the bar, but halfway between beats 1 and 3. So that puts it on beat 2. I'm also going to put it halfway between beat 3 and the next downbeat, which puts it on beat 4. Okay, now I'm gonna take my kick and put it on beat one and also on beat three. Now let's hear it. Okay, it's starting to come together. There's a bit more we can do with our hi-hats and still be in this framework of the most basic beat. So let's go to that next. 40. Having Fun with Hi Hats: Now, the hi-hats are generally the most forgiving, meaning you can put them really kind of all over the place and make it work. But one thing we can do is take all of our hi-hats, if we want to, and in addition to where they are now, also put them on the beats. So I'm going to copy and paste these onto the beats and basically double them up. So now they're on every beat and every offbeat. Let's hear that. Okay, now it's starting to sound like a beat, right? You can also just kinda have fun with your hats a little bit, if you want to; just add a few around there for fun. If you zoom in a little farther, you can make some faster ones, like that. That gets you into that almost trap style that's really popular right now, with really frantic hi-hats. You just do this a whole bunch and you'll get that really frantic hi-hat sound. But let's go back to this. Cool. So now we have our basic beat. The next thing I want to do with it is make it so that I can easily loop this beat.
It's not as easy as looping this other clip, right? Because that one is a loop. So how do I turn this into a loop, something I can just copy and paste over and over and over? It's not there yet, because if I just copy this and paste it, it can go anywhere. There's one step I can do to make this an easily loopable beat, and that is called consolidating. So let's do that in the next step, to finish off our very basic beat. 41. Looping and Consolidating: Okay, in order to make this a loopable beat, what I wanna do is take each track and turn it into a one-bar clip. Right now, this clip is only a sixteenth note long. But what I wanna do is have this clip be the whole bar. So I'm going to select both of these clips, and I'm going to select the empty space after them, because I need to select the whole bar, including the empty space. Okay, now I'm gonna do something called consolidate. Different programs might call this different things: they might call it merge, they might call it render in place. But in Ableton we call it consolidate, and Command-J is the key command to do it. Like I said, I don't memorize all key commands, but this one happens to be my initial, so I remember it. So I will press Command-J. It's going to take a second to think, and then it's going to make our clip one long clip. Now, this is great, because now if I zoom out and I just hit Command-D to duplicate, I can duplicate this all day long and it's going to stay perfectly in time on our grid. Look at that. Let's go to the downbeat. Here's bar 8. It's perfectly right there, right? So having that full one-bar clip is a nice thing to do. Let's do that with our snare. And don't forget, we need this extra empty space to make it a full bar long. So, Command-J. There it is. Duplicate that out, and it's going to be perfectly in time. Same thing with our hi-hat, not forgetting the empty space: Command-J. Duplicate that out. Cool.
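Conceptually, consolidating is just padding each clip with silence out to a full bar, so that copies tile cleanly on the grid. A toy sketch of that idea (my own simplification; real consolidation renders actual audio, this just shows why the empty space matters):

```python
def consolidate(clip, bar_len):
    """Pad a short clip with silence so it occupies exactly one bar."""
    return clip + [0.0] * (bar_len - len(clip))

def duplicate(clip, times):
    """Tile a consolidated one-bar clip end to end; it stays on the grid."""
    return clip * times

hit = [0.9, 0.4, 0.1]                 # a toy one-shot: three samples of sound
bar = consolidate(hit, bar_len=8)     # the hit plus silence = one full "bar"
two = duplicate(bar, 2)               # the second hit lands exactly on bar 2
```

Without the silence, each copy would butt up against the previous hit and drift off the grid; with it, every duplicate starts on a downbeat.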
All right, let's hear it one more time. Let's start from a random spot. Still perfectly in time. Here's our metronome. Lovely. Now, one thing that's really fun to do is take your beat and play it against another loop, so you've kind of augmented this loop with your beat. Let's hear what that sounds like. I might play around with that a little bit, but I think I'm happier just sticking with our beat. It's simple, but it's nice. Crank the tempo up a little bit, and we've made something great. We can start adding layers to it and playing around with our beat. 42. Audio is Finicky.: Okay, in this next section, we're gonna kinda deep dive into some of the principles of audio. Now, working with audio is finicky. It's not finicky in the sense that it's delicate and can fall apart; once it's recorded, you've got it, and that's good. The thing about audio is that there's just a ton of data in it. If we look here at this waveform (and we'll go over how to read these waveforms in a minute), you can see there are just so many points of data. And if we zoom way, way, way in, you can see the actual points that are there. All of these points represent some number in this file. So it's a lot of data, and each one of those points can be manipulated, right? When you record something, that's really only the starting point of the sound. There's a lot you can do after you've recorded a sound to make it sound better. There are things you can do to the volume, things you can do to the frequencies, things you can do to time. You can really manipulate audio a lot, because there's just so much data in it. There's so much to work with. So, more on that when we get into effects processing and things like that. For now, let's talk about what we're actually looking at when we're looking at a waveform like this. 43. Looking at Waveforms: Okay, so there are two ways that we look at recorded sound.
The main way, the way we're looking at it most of the time, is as a waveform like this. Now, what we're looking at here is, first, a stereo waveform. That means there are two audio signals. Let me just separate this. There we go. So here's one signal, that'll be our left signal, and here's our right signal. Most of the time when we're dealing with music, it's a stereo track, meaning it's got two signals, one for the left side and one for the right side, because we have a left and a right ear. But when you're dealing with individual samples, you might only be looking at one file, or a mono signal. So let's treat this as a mono signal; I'm going to get rid of the other one. Okay, so now we have a mono signal. Now this is going to come out the left side, but I can change it to come out both sides by changing the panning to the center. You remember the term panning from back when we talked about how to use different programs. So now, what are we looking at with this big squiggly line? What we're looking at is amplitude on the vertical axis. So volume (amplitude and volume are basically the same thing for our purposes) on the vertical axis, and time on the horizontal axis. So as it goes forward in time, we see the different volumes. Let's just zoom in. Okay, so this goes up and down. Now, you'll notice that the center is 0. So the way we represent volume here is a little weird. The center is 0, meaning if the signal is sitting right on that line, there's no volume. If it goes under that line, that means volume, and if it goes over that line, it also means volume. So you can see here that we have 0, 0.5, and 1, and under it we have 0, negative 0.5, and negative 1. We'll talk more about those in just a second. But basically, the thing we need to understand here is that what this is representing is what our speaker is doing. So imagine a speaker, right? A speaker can push out.
And if it pushes out, it has to come back. So if it's standing still, that means our signal is right on the line, right on 0. If the signal goes above the line, so it has a positive value, that's our speaker pushing out. If it has a negative value, that's our speaker coming back in. So speakers are always doing this. Now, that would suggest that most of the time it's symmetrical: it goes up the same amount that it goes down, so the speaker is always going in and out the same amount. But it's not actually always going in and out the same amount. If we zoom in, we can see there are little spots where it doesn't. See, look right here: it pushed out, came back to 0, dipped under 0 just a tiny bit, and then pushed out again. So that's kinda indicative of maybe a distorted sound or something like that. It's not bad; all audio files do this. They generally go back and forth like this, but even that isn't perfectly symmetrical. I should have grabbed a more normal sound; this is kind of a goofy, weird little sound. But what we're seeing is volume, above or below, and then time. The amount that it goes above this line, or the amount that it goes below this line, is how loud it is. And then we scroll through time on this axis. The other way that we look at sound, much more rarely, is as a spectrogram. This is a spectrogram. You may have seen something that looks like this in the past. What we're looking at here is frequency over time. Okay? So we have the frequencies that we're hearing, scrolling across in time. Anything that's got a color means we're hearing those frequencies. So in this one, you know, we're hearing a bunch of stuff up here, a little bit of a void in the middle, and then something really hard down here that looks to me like maybe a snare hit or something. And we typically represent volume with color when we're looking at a spectrogram. So this is louder than this.
And this yellow and white is the loudest. Sometimes you encounter these on what we call spectral effects — effects that really dig into the pitch and frequency content of a sound. But this is actually pretty hard for a computer to do, because it has to figure out all the pitches and all the frequencies that are being used in a sound, and that sometimes takes a little bit of time. It definitely takes computing power. So more common is to see a waveform, which shows volume over time, because a computer can figure that out on the fly, like immediately. So this is typically how we look at sound. 44. Sine Waves: Okay, let's talk about this 0, 1, and negative 1 issue again, but from a different perspective. So I have here what looks like a big block of sound. And if I zoom way in, you'll see that it's not a big block of sound — it is a perfect sine wave. A sine wave is our most simple sound, our most basic sound. It sounds like this. Okay? Most basic sound. It's perfectly going up and down, crossing over the 0 line perfectly, because this was mathematically created — I did not record this. So I want to show you how a sine relates to this 0, 1, negative 1 business. If you remember back to your math classes, a sine — S-I-N, which is what this is based on — is basically a circle. And this waveform is a circle. It doesn't look like a circle, but this is what happens to a circle when you have to distribute it out over time, right? Circles and time don't line up very well, so what we have to do is kind of take it and twist it out like that. And this is how we do it. So let me just show it to you as a circle. Okay, so here's a circle. Let's call this line through its middle axis 0, because we're going to put a 0 on both sides. This line, we're going to say, is 1. And this line, we're going to say, is negative 1 — the other end of the same line.
So if I had to distribute the circle out over time, I would start at — let's start anywhere, but let's start here. Okay, so I'm going to say this point is 0. Then I'm going to go to 1. Then I'm going to go to 0 again. Then I'm going to go to negative 1. And then I'm going to go to 0 again. That's the path that we take around the circle. If I go look at my waveform, what I'm doing here is 0, then negative 1 — I'm going backwards in this case. Let's go to right here. Okay, so 0 to 1 — and this is a little shorter one, but we'll come back to that later. Let's say this is 1, 0, negative 1, 0. So it's the same thing through here. This would be called one phase of the sine wave, or one cycle of the sine wave: it goes from 0 to 1 to 0 to negative 1 to 0. So that makes a circle, and we call that a sine wave. Now, circles and sine waves are good. They are pure sounds, they are clean, they are simple. Now, there's another thing that this 1 and negative 1 represent here, and that is our peak volume. We cannot go above 1. 1 is our maximum volume. Okay, so when we talk about volume in audio programs, most of the time we're talking about 1 as our peak. So our volume is always going to be a decimal: 0.9 would be pretty loud, right? Because that's near 1. If we hit 1, depending on the software, that can be okay. But if we go past 1, we overload the system and we generally get distortion out of it. We'll talk more about that in just a minute. In fact, let's talk more about that right now — but let's go to a new video to do it. 45. Clipping: Okay, now let's talk about what happens if we go over 1 with our volume. This results in something called clipping. Clipping is a term you're going to hear all the time when we talk about audio. A sound might be clipped, the waveform might be clipped — those are all terms that we use. And this is what it means.
Here I have a little tool that's going to generate a sine wave for me. I can set the frequency, and the amplitude — the volume. I'm going to try to generate a sine wave that goes over 1. See, it wants a value of 0 to 1. Let's see if it'll let me say 1.2. No, it's not going to let me. Okay, I'm just going to put it at 1. Now if I zoom in, you'll see my sine wave is going all the way up to 1, hitting right on the top. Okay. Let's see if I can boost it to go over. I'm going to boost this by a little bit and say Allow Clipping. And I'm going to do it. Wonderful. Okay, now my signal is clipped. It's going to be a little buzzy. There's a little more buzz to that sound because it's clipped. So let me show you what it's doing. The reason we call it clipped is that we can't go over 1. It's not that the signal is still going up and curving around above the top and we just can't see it — that's not what's happening. What's happening is the signal goes up, then it's flat right there, and then it goes down. It's getting a little bit of a haircut, right? Someone just went along and cut off all the tops and the bottoms of my waveforms. So that data — the arch at the top — has now been lost, because my volume is too high. It is clipped. The tops of those waveforms are gone, and when they disappear, we usually can't get that data back. And any time you have a jagged edge in a waveform — which we now do, because there's almost a right angle right there, right where it goes up, then flat, then down; not quite a right angle, but pretty close — a jagged edge generally means distortion in a waveform. So let me zoom back in. Let's undo it. Okay, here's our perfect waveform. Now I'm going to redo the amplify on it and clip it. And here's that same waveform, but clipped. Hear all that extra buzziness? That is called clipping.
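To make the clipping idea concrete, here's a minimal Python sketch (not the on-screen tool, just an illustration) that generates a sine wave, boosts it past 1, and clamps it back into the legal range — exactly the "haircut" described above:

```python
import math

def make_sine(freq_hz, sample_rate=44100, length=100):
    """Generate `length` samples of a sine wave with peak amplitude 1.0."""
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(length)]

def amplify_and_clip(samples, gain):
    """Boost the signal, then clip anything outside the legal -1..1 range."""
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

signal = make_sine(440)                  # a 440 Hz sine at full volume
boosted = amplify_and_clip(signal, 1.5)  # boost past 1.0: tops get flattened

print(max(signal))   # just under 1.0: a smooth, rounded peak
print(max(boosted))  # exactly 1.0: the peak has been clipped flat
```

Every sample that would have landed above 1 (or below negative 1) gets snapped to the ceiling, which is what produces that flat-topped, buzzy waveform.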
And you really want to avoid it. 46. File Formats: Okay, let's talk really quickly about file formats for audio. There are probably three formats that you're familiar with, one more than the others. One is the MP3 — you're probably familiar with an MP3 file. The other two main audio formats are WAV and AIFF. There are a bunch of other audio formats that you can see here, and there are actually even more than this, but the main three that we work with the most are AIFF, WAV, and MP3. Now, here's the big difference. AIFF and WAV are uncompressed files, meaning that if you save something as a WAV file, it is everything. It's going to be a much bigger file, but it's going to be full quality — all the data that we know about that's in that file is going to go into a WAV file. An MP3 is a compressed file format, which means — to simplify — that when it saves as an MP3 file, it's going to throw out some information. There's going to be some information that it thinks we probably don't need, and it's going to get rid of it to make the file smaller. That's why MP3s are cool for sharing on the internet: they're smaller. But MP3s are not good for making full audio tracks out of, because they're missing information. If you start working on a project by importing an MP3, you're kind of already starting at a disadvantage. So when we're working with professional-quality audio, we want WAV or AIFF. WAV is much more common these days, and even though it says Microsoft here and AIFF says Apple, both systems use both. They're both uncompressed file formats, so they're both great. MP3s are still cool, but what an MP3 is good for is after you've made a whole project: then you save it as an MP3 so that you can send it to your spouse or your buddies. That's what MP3s are good for. Or you can upload it to a streaming service or something like that.
But while you're working on it, everything stays as an AIFF or a WAV file. Now, what I'm talking about here is individual audio files. Let me show you. For example, here's my sample library — just a ton of audio samples. Let's go into, I don't know, anything. Okay. Now you'll see that the majority of these are WAV files. So if I'm working with a sample, it's got to be a WAV file or an AIFF file — it's got to be an uncompressed audio file. I'm not going to use a sample that's an MP3 file; it's just not going to sound as good. So that's what I'm talking about: the individual samples that you're going to be working with are WAV files. The session that you save — when you save a whole session in your software — that's going to be something proprietary, named after the program. So I'm looking at a program called Audacity right now, and Audacity saves its projects as AUP files, I think. That will contain all of my uncompressed audio files. So Ableton files, FL Studio files, GarageBand files — those are all fine, because they have the uncompressed audio in them. But the audio that you start from and that we work with, for professional-quality audio, should always be an uncompressed format: WAV or AIFF. Cool. 47. The Sample Rate: Okay, when it comes to an audio file, there are two numbers that you always see attached to it: the sampling rate and the bit rate. Now, there are kind of standards for what those numbers are, and we'll talk about that more in just a minute. First, let's talk about what those numbers mean, and let's start with the sampling rate. The sampling rate is a number that tells us how many samples there are per second.
If we remember back to when we were first talking about what digital audio is, we said that we take a whole bunch of little snapshots of the sound, breaking it up into little slices, and each one of those is called a sample. There's a weird terminology thing here: we can talk about samples in terms of, like, a kick sample or a snare sample, and that's different from what we mean when we're talking about the sampling rate. Here, what we're talking about is those little snapshots of sound, and how many of them happen per second. It's a little different from the sample that we talk about when we're using a sample to make music. These samples look like this. Okay, I'm going to zoom in on our audio file — zoom in, keep zooming in, keep zooming in — there they are. Each one of these little dots is a sample. So our sampling rate tells us how many of those there are per second. Now, it's not unlike video, right? When you watch video, you might be looking at something that's around 30 frames per second. What that means is that there are about 30 still images — just pictures — flashing in front of your eyes each second, and you perceive it as normal motion. This video you're watching now is probably exactly that: roughly 30 images per second. We just flash those still images in front of your eyes, and your brain fills in the gaps and makes it look like fluid motion. But it's really just a long series of still images flashing really fast. Now, in video it takes around 30 of those per second for you to perceive it as motion. In audio, it takes quite a few more samples per second for us to get up to a full-quality audio file — specifically, about 44,100 of them per second. That's how many of these we have: 44,100 per second. That's a ton, right? That's a lot.
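As a quick sanity check on how big that number really is, here's a back-of-the-envelope calculation — the three-minute stereo track is just a hypothetical example:

```python
sample_rate = 44100       # samples per second (44.1K, the standard)
seconds = 3 * 60          # a hypothetical three-minute song
channels = 2              # stereo: one signal for left, one for right
bytes_per_sample = 2      # 16-bit audio = 2 bytes per sample

total_samples = sample_rate * seconds * channels
size_bytes = total_samples * bytes_per_sample

print(total_samples)           # 15,876,000 individual snapshots of sound
print(size_bytes / 1_000_000)  # roughly 31.8 MB of raw, uncompressed audio
```

That's nearly sixteen million little dots for one short song, which is also why uncompressed WAV files are so much bigger than MP3s.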
So you can see that my sampling rate is 44,100. Now, we sometimes abbreviate this 44.1K — 44.1K is shorthand for 44,100. So that is a standard sampling rate. Now, it's not the only sampling rate. We can go higher than that and we can go lower than that. If I look here, it's going to give me the option to change it to a whole bunch of different sampling rates. I can go up to 48,000, to 88,200 — I can go all the way up to 384,000 samples per second if I wanted to. So why would someone want to, if 44,100 is kind of the standard? (And that term "standard" changes a little bit depending on what you're doing, but more or less, let's call it the standard.) Why would I want to go higher or lower? If lower is going to degrade the sound and make it sound less good — and that's more or less true — wouldn't I want to go all the way to the top and always make it as high as possible, to make the best-sounding music? Not necessarily, because a higher sampling rate doesn't necessarily make a recording sound better. It actually means you can represent more frequencies. This is a weird kind of mathematical thing that happens in digital audio, and it's something that's really often misunderstood about sampling rates. So I often see people recording at 88,200 samples per second, and that's fine — you can do that. But what they think they're doing is recording at higher definition, when they're not really; what they're doing is recording higher frequencies. So let me go into a little bit of the math on this in the next video, and we'll talk about something called the Nyquist theorem. 48. The Nyquist Theorem: Okay, so we're going to talk here about the Nyquist theorem. What this tells us is what the sampling rate is actually doing. The Nyquist theorem states that the sampling rate must be twice as high as the highest frequency you want to represent. That's the Nyquist theorem.
So if we want to represent a frequency — a specific sound — we need a sampling rate that's at least twice as high as that sound. Okay, here's why. The secret to this is in the fact that there is space in between the samples, and the playback has to fill in some of that missing information. So let's say this is our waveform. Pretend it's a normal, smooth waveform, although it doesn't really matter for this example. Now let's say my sampling rate is moving along this fast — let me zoom in just a little bit here. Okay, I'm taking samples at these points. That's all good. So I took a sample here, and then here. And remember that the speed of this waveform determines the pitch. Okay? So this waveform is moving along at just the right speed where we're going to have to fill in the gaps in between these samples. And it's going to do it like this — let me get rid of that arrow — it's going to put together a waveform that looks something like this. And it's going to be pretty accurate to what the waveform actually is, right? Because that missing information isn't very much. Okay, so let's step back a little bit here, and let's say that waveform is now moving much faster. So our sampling rate is only going to catch it here, here, here, here, here, and here, let's say. Now we have much less information. So when we put together the missing information, it's going to be less good, right? This is what was actually happening, so we're a little bit inaccurate, because the waveform is moving so fast — but we're still more or less there. Okay, now let's say the waveform is moving even faster, and now we're getting data at, let's say, these points. Okay, now we put together this. Oops, let me make it better.
Okay. Now we have an inaccurate pitch, because we've basically put together a different waveform moving at a different speed. That's wrong — we're going to hear the wrong frequency, because we don't have enough samples for how fast the waveform is moving. (This, by the way, is called aliasing.) And when a waveform is moving too fast, that means it's a high pitch, because the speed of the waveform is what tells us the frequency. Okay? Follow? So when a frequency is too high, we don't get enough samples to fill in the gaps. What we need is at least two samples per cycle of the waveform. That is the minimum number of samples we can have and still accurately put together what's happening, because if we fill in those blanks, we're going to get this right — and that is still the correct pitch. Cool. So, to summarize: if the frequency is too high, we don't get enough samples to represent what it is. What that means is that we need to figure out the highest pitch we can represent for a given sampling rate. That's what this tells us here: the sampling rate must be twice as high as the highest frequency we want to represent. So what frequencies do we want to represent? Let's go to a new video and talk about the frequencies that we need to capture. 49. What Frequencies Do We Really Need?: Okay, so what frequencies do we really need to capture? Well — and I should preface this by saying this is different for every person. Everybody is different, and there are a lot of factors that play into your hearing. If you are older and have played in a whole bunch of rock bands, like this guy, the upper range of your hearing is going to be a lot lower. But assuming you have perfect hearing, you probably hear, on the low end, down to about 20 hertz. Think of the lowest sound you can imagine — very, very low. That's probably about 20 hertz. And on the top, it's about 20,000 hertz.
Now, some people can hear up to 22,000 — 22,000 hertz is probably the absolute max, maybe a little bit over that. Some people can only hear up to about 17,000 or 18,000. I tested my hearing once, and at this point in my life — because of playing in a lot of rock bands and damage to my ears — it caps out at about 16,000 hertz. But that's roughly the range of human hearing: 20 hertz to 20,000 hertz. Now, interestingly, different animals hear things in different ranges. Dogs can hear sounds much higher than that, and other animals have different hearing. But we're not making recordings for dogs; we're making recordings for humans. So this is what we care about. Now, we sometimes abbreviate this to 20 Hz to 20 kHz. Remember that K here means 1,000 — so 20,000 hertz — and hertz is how we measure frequency. Okay, so we can hear 20 hertz to 20,000 hertz. Now, according to the Nyquist theorem, if we want to represent 20,000 hertz, we need a sampling rate that is twice as high as the highest frequency we want to represent. So if we want to represent 20,000 hertz, we need a sampling rate of at least double that: 40,000. But remember what I said: some people can hear higher, up to 22,000. Okay, so that gets us up to 44,000. And maybe we give a little bit of room on the top and say we want to record up to 22,050 hertz — that's the highest frequency we're going to be able to record. And to do that, we need to record at 44,100 samples per second (SPS). That is in accordance with the Nyquist theorem. So if the highest frequency we can record is 22,050 hertz — which is a little bit higher than pretty much anybody can hear — then we need a sampling rate of 44,100 samples per second, according to the Nyquist theorem.
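The arithmetic above can be sketched in a couple of lines. This is just the Nyquist relationship restated as code, not anything from a real audio library:

```python
def nyquist_limit(sample_rate):
    """Highest frequency a given sample rate can faithfully represent."""
    return sample_rate / 2

def required_sample_rate(highest_freq):
    """Minimum sample rate needed to capture a given frequency (Nyquist)."""
    return highest_freq * 2

print(nyquist_limit(44100))         # 22050.0 Hz -- just above human hearing
print(required_sample_rate(20000))  # 40000 -- bare minimum to capture 20 kHz
```

So the familiar 44,100 figure falls straight out of the theorem: double the top of the (slightly padded) range of human hearing.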
That gets us two samples per cycle of a waveform moving that fast. So that's why we're at 44,100 samples per second. So, back to our other question: what happens if we increase our sampling rate? It doesn't necessarily mean that we're recording at a higher fidelity. It means that we can record higher frequencies. Let's duplicate this, and let's get rid of all of this. If we want to record up to a frequency of 44,100 hertz — twice as high as anyone can hear — then we need to record at 88,200 samples per second. Now, why would we want to do that? That's twice as high as anyone can hear. Why would we record all of that data way up high? The reason is that there are some theories — with some evidence behind them — that while we don't consciously hear frequencies way up there, they still influence the tone in the range that we can hear. For example, if you hear a whole orchestra, there are a lot of upper harmonics, way higher than we can hear, that really add a color to the sounds that we can hear. So recording all of that information influences the tone of what we do hear. So if I was going to record a punk band, I'm not going to record them at 88,200 samples per second, right? That's just a waste of data. I might record them at 48,000 samples per second, only because that's kind of becoming a standard, and that'll get me some of those upper frequencies of the distortion and stuff like that. But if I'm going to record an orchestra, I'm going to record them at 88,200 samples per second, because I want to get all of that extra stuff — maybe even higher, maybe twice as high as that. So if you care about those upper frequencies, or you're recording a very delicate sound, something like that, then yeah, crank up your sampling rate and you'll be able to get those super-high frequencies that we can't consciously hear but that influence what we do hear. But if you don't care about that, record at 44,100, the standard. 50.
The Bit Rate: Okay, the second number that we have to deal with here is the bit rate — strictly speaking, the bit depth. You can think of it as the fidelity of each sample. So if we look at our samples here, imagine that there's a grid on top of them, and we can only plot a sample on that grid. In fact, here's a grid. In this case, let's say we're working with a very low bit depth, so from 0 to 1 there are only a handful of steps possible. So if I want this sample to move to here, it can't be anywhere in between these two lines — it has to be on one of these lines. Okay? And that is actually going to impact the quality of our sample a whole lot. So let's say this is a low sound, so it's moving slower, and our sample rate catches it, say, here — oops — here, here, here, here. I'm just kind of plotting this out. So it goes there. Okay? Now, what you see here — like these four in a row — is because what I actually want to record is in between two lines, but I can only grab a line, because I only have a few steps to work with. Now, that's not normally how these things work. We normally work with a lot more bits than that. So let's give ourselves more fidelity here. Okay? Now I have a lot more places I can plot those samples, and the more places I can plot them, the more accurate it is, right? Let's duplicate it again to get even more fidelity. Okay? Now I have that many more steps, however many lines that is. And the more spots I can plot it on the grid, the more accurate of a sound it is. So, even more than the sampling rate, the bit depth makes the sound more accurate. We normally record things at 16-bit, but you can increase it and you can decrease it. And in fact, if you want to make something sound like that old-school video game sound, the way you do it is you decrease the bit rate. You lower the bit rate.
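That snapping-to-a-grid idea is essentially what a bit crusher does. Here's a minimal sketch of quantizing a single sample value to a given bit depth — the function name and the test values are made up for illustration:

```python
def quantize(sample, bits):
    """Snap a -1..1 sample value onto the grid a given bit depth allows."""
    levels = 2 ** bits   # e.g. 16-bit -> 65,536 possible values
    step = 2.0 / levels  # spacing between grid lines from -1 to 1
    return round(sample / step) * step

print(quantize(0.123456, 16))  # barely moves: the 16-bit grid is very fine
print(quantize(0.123456, 4))   # 0.125 -- snaps hard: only 16 levels, lo-fi
```

At 16 bits the grid is so fine you can't hear the snapping; at 4 bits every sample lands audibly off target, which is where that crunchy video-game character comes from.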
And it's going to sound like an old-school video game, because those sounds were generated at a much lower bit depth. So that's what the bit rate does. You can think of it as this vertical grid that we plot samples on. Cool. If you want something to sound lo-fi — low fidelity — don't decrease the sample rate; decrease the bit rate. 51. Standards for Sample Rate and Bit Rate: Okay, so let's go back to what we were talking about with standards. What are the standard numbers for these? 44,100 is a standard sampling rate — we used to call that CD-quality audio. If you were going to make a compact disc, an old-school CD, it had to be at 44,100; that's what CD players need to read for a sampling rate. And it had to be 16-bit. And that's pretty good. On a DVD — a physical DVD — it had to be 48,000 samples per second, and I think 24-bit. With digital files, there isn't as big of a need for standards. MP3s work a little bit differently. But generally, when I'm recording, I'm working at 48,000 samples per second and 32-bit. If I look at this file, for example, this is at 44,100. And if I go to export it as a WAV file, you can see here I get a few different options for the bit depth: I can go 16, I can go 24, 32. Then I can go to these floats, which — without getting into the math of a float — basically means a lot higher than plain 32. It's kind of like a multiplier, so 32-bit float is going to be a lot higher, and 64-bit float a lot higher than that. These U-Law and A-Law things — I don't really understand what those are, and some of these other options are much more obscure. So I'm either going to leave this at 16 or convert it to 32 when I export it. So the standards aren't as important in digital files.
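To make those standard numbers concrete, here's a small sketch that writes one second of a sine tone as a 44,100 Hz, 16-bit mono WAV file using Python's built-in wave module — the file name and the 440 Hz tone are just examples:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # the "CD quality" standard sampling rate
BIT_DEPTH = 16       # 16-bit -> 2 bytes per sample

# One second of a 440 Hz sine wave, scaled to the 16-bit integer range.
samples = [
    int(32767 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)
]

with wave.open("tone.wav", "wb") as f:  # "tone.wav" is just an example name
    f.setnchannels(1)                   # mono
    f.setsampwidth(BIT_DEPTH // 8)      # bytes per sample
    f.setframerate(SAMPLE_RATE)         # samples per second
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```

Every uncompressed WAV carries exactly these three numbers in its header — channels, bit depth, and sample rate — which is why any DAW can open it and know what it's looking at.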
We will talk later, I think, about the right settings for exporting something as an MP3 file for the different platforms. But you should always look that up: if we were submitting to Apple Music or Spotify, they'll have standards they want their audio files at. It'll be an MP3 with certain encoding — a certain sample rate and bit rate; there are a couple of other things that go into an MP3 also. But as a general rule, 44,100 is good, and 48,000 is also good — slightly better. Beyond that, it's just recording those extra frequencies way up high that you won't really be able to hear. But you can record them if you want to. And for bit depth: 16 is good, 32 is great. Under 16, you're going to start to get that lo-fi sound — though you don't really get that video game sound until you get down to about 8-bit or lower. There are effects you can use called bit crushers that simulate lower bit rates and get you that Atari-game kind of sound, if that's what you want to do. You can look at that later. Okay, so that's kind of where we sit with standards for digital audio. 52. The Dawn of Electronic Instruments: Okay, let's move on and talk about MIDI. Now, before we get into MIDI, we need to jump backwards a little bit, and I want us to pick up where we left off when we talked about the history of digital audio. Where we left off there was Max Mathews, right? Max was working at Bell Labs; he went in there and made the computer make some sounds, and then we kind of developed digital audio from there. We've covered that. But when we're talking about the history of MIDI, we need to go all the way back to the first electronic instruments — that is, instruments that were producing sound electronically. Now, those instruments themselves are not MIDI; MIDI didn't come until later. So I'm going to jump back a little bit before the invention of MIDI, okay? Just to help us get into how MIDI came to be.
Because MIDI itself — and we'll talk more about this later — is a method for control. MIDI is control messages for sound-making devices, and some non-sound-making devices, actually; more on that later. So first, let's talk about the devices that needed controlling, and then we'll talk about how MIDI came to be, to provide them with some control. So as people started to build circuits — once we started to play around with circuits as a society and figured out that we could build a circuit that made sound — it wasn't too far off for people to start trying to develop musical instruments that were electronic. Okay? Now, the first real successful example of this we get in 1929 — or actually, that's when the patent was submitted, so it was probably developed in 1928 or so, maybe even earlier. And this instrument is called the theremin. Maybe you've heard of the theremin; it gets used in old-timey scary movies all the time for that eerie ghost sound. It's a pretty weird sound. But let's go to a new video and talk about the inventor of this instrument, Leon Theremin, and how this instrument works. 53. The Theremin: Now, as the story goes, Leon Theremin was a Russian inventor who moved to the US around 1920 or so. And he had an idea to develop alarm systems. I've heard this told in different ways, and I've read different accounts of it — maybe he was trying to develop an alarm system, and maybe he wasn't. But he had developed this kind of circuit that basically used the electrical capacitance of the human body, and it adjusted volume as you got closer to it. That's what it did. So as an alarm system, when someone got close to it, it could go off and make some sound, right? That would make sense.
But eventually, people started saying, "I can control this thing and actually make it do some interesting stuff." And then we get a musical instrument. So his instrument was called the theremin, and it is widely considered the first electronic music instrument. You can still get them. Some companies, like Moog — which we'll talk more about in just a minute — still make them. There are some modern ones; there are the old-school kind, which are very valuable. You can even get a kit to make one yourself, which I've done once, and it was actually really fun, so worth doing. So the way a theremin works — you can kind of see this image of one on the screen here — is that it's a box with two antennas coming out of it, one going straight up and one kind of curved out to the side. The way it works is you don't actually touch the thing. You never touch it; you just get close to it with your hands. One hand controls the volume of the note, and the other hand controls the pitch. Okay, so with the antenna that's going straight up and down, the closer you get to it, the higher the pitch goes. With the antenna that's horizontal, the closer you get to it, the quieter the note gets. So you have to find your pitch in mid-air, which is actually really tricky to do. It's really hard, but there are some tricks you can use to get good at it. And then you control your dynamics with your left hand. So this instrument became kind of popular around the 1930s or so. It was rumored that this was going to replace the piano — that's what people were saying at the time. Meaning, if you were an upstanding citizen, you had a piano in your house and somebody learned how to play it, and the idea was that this was going to be the new thing: people were going to get rid of their pianos and buy theremins instead. That never happened — thank God. I love the theremin, but it's an acquired taste. Let's listen to a little bit of it, and then I want to tell you about
kind of what happened to the guy, Leon Theremin. So here's a little excerpt — someone playing "Over the Rainbow" on a theremin. Okay. Wow. 54. What Happened to Leon?: Okay, so here we have Leon Theremin, the inventor of the theremin. So the theremin got quite popular, if you can imagine. And there were actually a whole bunch of different versions made of it. They even made one catering to dancers: really big ones, the size of a whole room. Dancers would choreograph themselves to dance within it, and their dance would generate sound. Pretty crazy idea. There was even a virtuoso theremin player — there have since been a few virtuoso theremin players, but the best known is a woman named Clara Rockmore. You can find videos of her playing; I think she died in the late 1990s, so there are videos of her performing. And she plays all kinds of classical music really beautifully. She was friends with Theremin when she was young — there are some rumors about maybe they were involved; whole other story. But speaking of who he was involved with, let me tell you really quickly that after he rose to fame for creating the theremin, he disappeared. And it was just a big mystery for about 30 years where he went. Nobody knew — he just disappeared. Now, most people, I think, thought that perhaps he had been murdered, because he married a dancer, an African-American dancer, which — if you imagine the 1930s; he was in New York, but still, the 1930s — there were people who had a problem with that. And so a lot of people thought he was murdered and thrown into a river or something. But we now know what actually happened to him: he was kidnapped by the Russian government, and they brought him back to Russia, because at the time, Russia wanted their scientists back, and he was a well-known scientist. He was a doctor — Dr. Leon Theremin.
And they brought him back and put him in a lab working on something, we still don't really know what. He worked in that lab for about 30 years until someone discovered him, he was released in some way, and he was brought back to the US. There's a fascinating movie about this called Theremin: An Electronic Odyssey, where they actually document bringing him back to the US and meeting up with Clara Rockmore one last time. This all happened around 1990. He got an honorary doctorate from Stanford, and he was recognized for his great contribution to music, having developed the first electronic instrument. But it was some 30 years later, because he had essentially been locked in a lab in Russia working on some secret project. So if this is interesting to you, I highly encourage you to watch that movie, Theremin: An Electronic Odyssey. It's a brilliant movie. Fascinating, fascinating stuff. Theremin, both the person and the instrument, ended up inspiring a whole generation of people to make other instruments. One of those people was a very young guy at the time named Bob Moog. Bob Moog was fascinated by the theremin. He built little theremins on his own in his garage when he was a kid, figured out how to make them, and he made some. And he kept developing that until he started building more complicated instruments. So next in our lineage of electronic instruments comes Bob Moog and the Moog synthesizer. 55. The Moog: So Bob Moog was an inventor, I suppose we'd say, a musician, many things, an entrepreneur really. And he founded the Moog synthesizer company, which is still in business. What he did was take the idea of a circuit that can make sound, like Theremin was playing with, and add the ability to make it modular. What that means is that we can take a bunch of those circuits and connect them together, and we can control them with a keyboard, something that looked like a musical keyboard.
So this is Bob Moog in his younger years, and you can see he's in front of one of his synthesizers. You can see all these cables connecting things together; that's the modular element of it. And he's got a keyboard. He also has here, I believe, a ribbon strip, which was another way of controlling these kinds of synthesizers. We don't really use ribbon strips anymore. It works kind of like a theremin, in that you run your finger along it and the pitch goes up and down. They're not as popular anymore. But controlling a synthesizer with something we were already familiar with, like a piano keyboard, was really monumental. There were a few people doing similar things at the time, but the thing Moog is really best known for is developing these big, complex synthesizers. Here's an example of the big Moog synthesizer. Now, this is a re-creation, a new version of the original Moog synthesizer, but it's the same sound, same basic idea. [clip] I have not played with a fixed filter bank in so many decades, so very, very thrilling. So now we'll go ahead and patch up some more complex things. One of the first things you want to do is hear three of the oscillators going at the same time. It's quite amazing. So we'll just plug in three of these beasts and take the sawtooth out. [end clip] So you can see here that it's a lot more like a musical instrument, right? This is when people started to say, wait a minute, maybe this is a real musical instrument, one that deserves to be studied in the same way that the violin deserves to be studied. That was a pretty controversial attitude at the time, and I don't think a lot of people took it very seriously. We're at about the early 1960s here. Some pop acts started using it, and you can hear this kind of synthesis in a lot of the popular music of the day.
But in terms of being taken seriously as an instrument worth really studying, that didn't happen until one person came along and became the world's first virtuoso of the Moog synthesizer, in the same way that we saw Clara Rockmore become the first virtuoso of the theremin. That helped legitimize the instrument, because people could see that somebody could study it and become virtuosic, really good at it. The same thing happened with the Moog when Wendy Carlos came along and showed us how the Moog synthesizer could hold up to the standards of traditional classical instruments. Let's go to a new video and talk about Wendy Carlos and her contribution to electronic music at this point in history. 56. Wendy Carlos: Wendy Carlos released an album in the late 60s, with follow-ups into the early 70s, called Switched-On Bach. The idea was that she played the music of Bach on a Moog synthesizer. This is really fascinating, because it showed people that something as pure and complex as Bach could be interpreted on a synthesizer, which I think is really the moment people started taking synthesizers seriously. So here's a little bit of Carlos, back in 1989, explaining how she made Switched-On Bach, and then I'll play you a little bit of Switched-On Bach itself. [clip] Synthesis works by first generating these harsh, bright sounds made up of many pure tones played together. We pass these bright waves into a filter, which in this case acts literally like the tone controls on a hi-fi set, removing or boosting portions of the sound. We can make it sound very dull, quite pure, or very bright, and you can do this dynamically in time, so a sort of progressive swell will make it open up. The Moog was revolutionary, but as it generated many tones together with little control, its sounds were crude.
Soon it was overtaken by a second revolution, controlling individual tones by computer, but we'll get to that in the next part. [end clip] For now, let's hear a little bit of the Switched-On Bach music. [excerpt plays] 57. What Happened in 1981: Okay, so we've got people like Bob Moog making synthesizers, making a bunch of different kinds. And there are other people making synthesizers too. There's a company called Buchla doing really fascinating things, and even some of the big companies like Yamaha and Roland are getting into the game by this point and making really cool synthesizers. So there are a bunch of different synthesizers on the market come 1981. And we have computers. Computers have been around for a long time, going all the way back to Max Mathews, so we know about computers. But your average person didn't have a computer. In 1981, though, IBM releases its Personal Computer. The Macintosh doesn't come along until 1984, but in 1981 we really start to get a picture of the personal computer, something that every person could buy and have in their home. Not these huge mainframe computers; things that people can own. At some point, people are saying, wait a minute, we've got these cool synthesizers that make sound, and we've got these cool computers. Is there any way we can connect these two things together? So also around 1981, there was a convention, and I believe it really was an actual convention where all these people got together, to figure out how best to make computers talk to synthesizers. That was the idea. A number of people were involved. Roland was involved. A guy named Dave Smith was primarily involved. Dave Smith was the designer of a synthesizer called the Prophet-5, and the Prophet-5 was one of the first synthesizers able to use what would become MIDI technology, which means it was able to talk to a computer and the computer was able to talk to it.
So it was a two-way connection between the computer and the synthesizer. And Dave Smith says, hey, everybody, meaning computer manufacturers and synthesizer manufacturers, let's find a way to make it so that all synthesizers can talk to all computers. Because with the Prophet-5, Dave Smith had gotten it to talk to one specific kind of computer: his synthesizer could talk to his computer. But he had this idea that all computers should be able to talk to all synthesizers. And if that was going to be true, some kind of standardization had to be developed. So these manufacturers got together and decided what the standard was going to look like, and they decided to give it a name: MIDI, which stood for, and still stands for, Musical Instrument Digital Interface. It's a way for the computer to talk to the instrument, the synthesizer, and for the synthesizer to talk back to the computer. Cool. So this all came together in 1981. Also in 1981, digital audio takes a step forward, because we get the first compact discs being manufactured. Compact discs wouldn't really come to market for a couple more years, but the invention and the technology were there. So, 1981: the first personal computers, MIDI standardized as a term and a protocol, and compact discs on their way to market. A big year for music, 1981. 58. The MIDI 1.0 Spec is Born: So by about 1983, most computers and synthesizers supported MIDI, meaning you could connect them. Now, this led to one problem, and it's a problem we still have, actually. That problem is that a great many different hardware manufacturers, computer and synthesizer makers alike, adopted the MIDI standard. The MIDI standard created back in 1981 was called MIDI version 1.0. That's what it was called.
And it was a hardware specification, and so many people latched onto it that it was near impossible to update later. And therefore, the version of MIDI we are on now is still MIDI 1.0. Now, there have been some changes, especially in the last year or two. There have been big developments with a sort of HD MIDI, sometimes called MPE, and there are different names for it, so there have been updates to the way MIDI works, but they're really add-ons. The main protocol, as it was invented back in 1981, is still what we use. So MIDI is not real high-tech. It's actually fairly low-tech once you get to know what's inside it, which we'll talk about soon. When they came together to build it, they wanted to achieve a few things with MIDI. First, they wanted a common language and syntax: keyboards, drum machines, and computers all able to talk to each other regardless of the platform, regardless of what kind of computer or what kind of synthesizer. Second, they wanted a way to create music with fewer people involved, meaning a few performers could play a whole band, so multiple instruments could be played by one person, kind of like how we saw Wendy Carlos play all the orchestra parts. And the third thing was portability: reducing the amount of gear needed to make this kind of music. So what we got was a system where the computer controls the synthesizer. All the sound is in the synthesizer, and the computer sends relatively simple messages telling the synthesizer what to do. It says: play that note, play that note, play that note. And the synthesizer then plays the notes. The computer doesn't actually have any sounds in it, and it isn't capable of making any sounds at all, at least not through MIDI. So when you look at a MIDI cable, which we'll do shortly, know that there is no sound going through that cable. None. No sound. There are only control messages.
Only messages telling the other end of the cable what to do and how to do it: play this note, play it that loud; play this note, play it that loud. Because that's what MIDI is, just messages telling one thing what to do and how to do it. 59. MIDI Instruments Today: So if we skip ahead about 40 years, MIDI instruments have come a long way. We really use the term generally to talk about any kind of keyboard or controller that's talking back and forth with the computer. It could be a synthesizer or it could just be a controller. You can see that a lot of MIDI instruments look like pianos, right? They have a keyboard that looks like a piano. But they don't all, and they don't have to. There's no real reason for it other than that most people know how a piano works; that's why they tend to look like pianos. But look at something like this: this is called a wind controller. That's a MIDI instrument that kind of looks like a clarinet. I've got this one, some kind of guitar-shaped thing. This one looks like a hockey puck, and it's basically a MIDI instrument. You've got these big ones that look like old Moog synthesizers, these that have a couple of keys and a bunch of knobs, this one that's just pads, this crazy thing, and that crazy thing, right? There's no reason it needs to look like a keyboard. There are tons of different MIDI controllers now, and you can really find ones that emulate anything you already know how to play. So if you know how to play the clarinet, don't buy a MIDI keyboard; buy a MIDI clarinet. You can totally do that. There's a MIDI version of pretty much any instrument you can imagine. Let's talk just a little bit about one of my favorites, MIDI guitars. 60. Other Uses of MIDI: Okay, so remember a few videos ago, I said that what MIDI really is, is a way for one thing to control another thing, right?
So it's either a computer controlling a synthesizer, one of these kinds of instruments controlling the computer, or one of these instruments controlling a synthesizer. It's a way for two things to talk to each other. Now, that idea of control goes beyond musical instruments. Yes, it was designed to work with musical instruments; that's why it's called Musical Instrument Digital Interface. But that whole concept of "play this note, play it that loud," and that's more or less all it says, has been used in other ways too. For example, you can find lights that are controlled through MIDI. If you have a whole lighting grid, the way you might in a theater with a ton of different lights, it's easy to have a controller that says: turn that light on this bright, now turn that light on this bright, now turn that light off. That's MIDI. That's the same protocol. So a lot of lighting rigs use MIDI. A really weird and fascinating thing you can do, if you want a really trippy experience, is find your way into a big old theater that has a nice lighting grid like that. Go up into the control room. They'll have something that looks like a mixer, and that's a MIDI device controlling the lights. You can unplug that, plug in your keyboard or your MIDI guitar, and play the lights. You can totally do it, and it's really fun to play weird stuff and watch the lights go crazy, because it's the same protocol. There's other stuff that uses the MIDI specification, the MIDI language I should say. Robotics, for example: if you build a robot, you want to say move this arm this much, move this finger this much. Some early robotics used MIDI to tell the little motors all over the place how much to move and where, because it's really a simple thing. So, food for thought.
Just because of the way it was designed, MIDI has turned out to be useful for a variety of things. So now that we know where it comes from, let's talk about how to use MIDI in music production. Back to making music, up next. 61. MIDI is a Protocol: Okay, so we've talked about where MIDI comes from, but let's talk about what it actually is and how we actually use it. MIDI, as I said earlier, stands for Musical Instrument Digital Interface. It is a protocol, an event protocol really. That means it's a little language, a language that lets one thing tell another thing when to do something and a little bit of how to do it. So it says: make this sound, and then some parameters, perhaps, like make this sound this loud or that loud. The way we see it on the screen is with these little dots. Each one of these little dots says, turn on this note. In fact, it's a little more specific than that. Let me zoom way in here. What it really says is: right here, the computer is going to send a message that says, turn on this note, and turn it on this loud. That loudness value is the velocity. Let's make that a little bigger. Velocity is always written as 0 to 127: 0 means off, and 127 is the loudest we can go. That's our range of volume, or velocity. So: turn this note on at a velocity of, let's see, this one's about 80. Then, at the end of the note, the computer sends another MIDI message, and that message says, turn that note off. But this is where MIDI is a little funny. It doesn't literally say turn that note off. What it sends at this point is another note-on, even though the note is already on, but with a velocity of 0. That's how you turn a note off in MIDI lingo.
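The note-on/note-off scheme just described can be sketched in a few lines of code. This is just an illustration of the idea, not a real MIDI library: each message is a little packet of note number plus velocity, and a velocity of 0 stands in for "note off."

```python
# A minimal sketch of the note-on / note-off scheme described above.
# Illustrative pseudo-MIDI only: each message is a note number plus a
# velocity (0-127), and velocity 0 means "turn that note off".

def note_on(note, velocity):
    """Build a note-on message. Velocity must be 0-127."""
    assert 0 <= velocity <= 127
    return {"note": note, "velocity": velocity}

def note_off(note):
    # Per the convention above, "off" is just a note-on with velocity 0.
    return note_on(note, 0)

# Each dot in the clip editor boils down to a pair of messages like this:
messages = [
    note_on(60, 80),   # start middle C at velocity 80
    note_off(60),      # ...and later end it (velocity 0)
]

for msg in messages:
    state = "off" if msg["velocity"] == 0 else "on"
    print(f"note {msg['note']} -> {state} (velocity {msg['velocity']})")
```

Notice there's nothing in those messages about what the note should sound like; that's entirely up to whatever instrument receives them.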
This isn't really something you need to keep track of; the computer mostly hides it from all of us. But that's how MIDI messages work: they say, turn a note on at this velocity, and then at the end of the note, turn that note on with a velocity of 0. That's how you turn a note off. It's a little strange, but it works. This is also why one of the drawbacks of MIDI is something called stuck notes. If you have a ton of notes going at once, say something like this, you might get a stuck note, meaning we play through this and one of the notes never turns off; it just stays on and hangs there. Let's try it. This might be a little painful-sounding. That was extremely fast, and we don't have a great sound for this, so let's pick a simpler sound. Still a little too fast; it's just going to play these as a chord, so let's stretch this out a little. Well, maybe not. Let's adjust our sound a little here. Not getting one; Ableton is pretty good about avoiding it, but sometimes it happens. When you get a stuck note, what's happening is that the computer is losing track. So here we have: play note F6 at a velocity of 80, and then later, play note F6 at a velocity of 0. But before that velocity-0 message came, we had: play F#6 at a velocity of 80, then G6 at a velocity of 80, then G#6, et cetera, all these notes, and then all the note-off messages. If the synthesizer misses one of those note-off messages, because there's just too much data flying really fast and it can get confused, then you get what's called a stuck note, meaning one note in this whole bit might just sound like it's going on forever.
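That stuck-note failure is easy to simulate. Here's a toy receiver for the kind of messages we've been describing; it's illustrative only (a real synth reads bytes from a MIDI port), but the bookkeeping is the same idea: track which notes are sounding, and whatever never gets its velocity-0 message keeps ringing.

```python
# A toy MIDI receiver, to show how a "stuck note" happens.
# Messages are simplified (note_number, velocity) pairs; velocity 0 = off.

def play(messages):
    """Track which notes are sounding after processing a message stream."""
    sounding = set()
    for note, velocity in messages:
        if velocity == 0:
            sounding.discard(note)   # note-off
        else:
            sounding.add(note)       # note-on
    return sounding                  # whatever is left is still ringing

# A healthy stream: every note-on gets a matching velocity-0 message.
ok = [(60, 80), (64, 80), (60, 0), (64, 0)]
print(play(ok))        # set() -> nothing left ringing

# Same idea, but one note-off got lost in the flood of data:
dropped = [(60, 80), (64, 80), (60, 0)]   # (64, 0) never arrived
print(play(dropped))   # {64} -> a stuck note, ringing forever
```

This is also why re-hitting the stuck key works: playing it again sends a fresh note-on and note-off, and that note-off finally clears it.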
You can usually fix it by turning the instrument on and off again, muting and unmuting the track, or just hitting that note again. Because if the note is stuck and just keeps ringing, playing it again gives you a fresh note-on and note-off, and that note-off shuts it up. So stuck notes can happen because of that. If you've ever played a synthesizer, stopped playing, and heard a note ringing out of nowhere: find the note that's ringing and play it again. That stops it. It's called a stuck note, and it's really just too much data flying through the protocol at once and the computer missing one of the note-off messages. It doesn't happen very often anymore in programs like Ableton Live, because they're pretty smart, but sometimes it does. So the main thing to remember: MIDI is a protocol. It's just a language that tells things to turn on and off at different points. 62. MIDI is Not Audio: Okay, I've said this before, but I'm going to say it again, because it is a super important concept when it comes to MIDI: MIDI is not audio. There is no sound in MIDI. These notes by themselves don't make any sound. The reason we hear sound is that we have an instrument on this track. If I do something like this: let's go to this track. Here's the little MIDI clip we're playing with right now. Let's make a new MIDI track, put this MIDI clip on it, and then solo it, so we only hear this track. Okay, so now here's all that MIDI information. Let's zoom out a little. There's my crazy chord. Now let's hear it. Nothing. We hear nothing. But MIDI is still working. Everything is working correctly. All those messages are being sent, but they're being sent to the instrument on this track, and there isn't one.
So these notes are saying: turn this note on, turn that note on, turn this note off, turn that note off. But nobody's listening, right? There's no synthesizer to listen. If I take something like Operator, which is a synthesizer, and put it on the track, then when the clip says note on, note off, Operator says, I know how to play that note, and it plays those notes. Cool. So MIDI by itself does not make any sound at all. It only tells something else how to make the sounds. 63. MIDI Channels: Okay, another thing MIDI has built into it is multiple channels. There are 16 channels in MIDI, and you can think of each channel as kind of a highway for the different messages. You can see those here. In a case like this track, the input says All Channels. That means it's going to listen for MIDI information on every channel. So if I play my keyboard, I don't need to worry about what channel my keyboard is sending on; if the track is record-armed, it's just going to listen to input from all the channels. That's typically the best way to set it up for a MIDI input. But on the output, let's look at this one. This one doesn't have an instrument on it at all, so I have some options for the output. With no instrument, it defaults to No Output, meaning it's not sending the MIDI data anywhere. But I could send it somewhere else. Let's send it to, for example, the Strings track; that's up here. Here it is: Strings, which has an instrument on it that can take in multiple channels. Once I select Strings, it asks which channel, and I have 16 options. That means I could have a track of MIDI data here, and a whole bunch more tracks of MIDI data. Oops, I made a bunch of audio tracks.
Let me just do it this way: duplicate, duplicate, duplicate. Okay, so here I could say this one goes to Strings on channel 1, this one to Strings on channel 2, this one to Strings on channel 3, and this one to Strings on channel 4. All four of those MIDI tracks are sending to the Strings track, but on different channels. So why would I do that? The reason is that I might have an instrument set up on that track with a different sound on each channel, and I can do that. It would be basically the same as putting different sounds on each track, but there are times when it's better to do it this way, sending all the MIDI data to one track. It's pretty rare. The main reason is to save processing power if your computer is running slow: on that Strings track I'd have one synth loaded with different sounds on each channel, so just one synth. If I put different sounds on each of these four tracks instead, I'd have four synths loaded. So it can save a little processing power to do it that way. For the most part, it's not worth doing in Live, because Live handles all of this just fine. But you can play around with different channels if you want. Typically, if you've got a MIDI keyboard coming in on all channels and a sound on the MIDI track, you don't need to worry about any of the channel stuff. But know that that's how channels work: you have a maximum of 16, and you can play around with assigning them to different things if you like. 64. Anatomy of a MIDI Message: Okay, so let's look at what's in a MIDI message. All MIDI messages are formatted pretty similarly. We're going to simplify the actual data a bit, but this is basically what it looks like.
A note-on message looks like this: it's a little packet with three numbers in it. In this simplified picture, the three numbers are the note number, the velocity, and the channel. So in this message we have note number 60, which is middle C. Every note has a number, which is an interesting thing to think about, because it means MIDI doesn't know the difference between accidentals. For example, note number 61 is C-sharp, but D-flat is also note number 61. MIDI doesn't care about enharmonic spellings or any of that stuff; it just cares about which key it is. So we have note number 60, a velocity of 127, which is the top, as loud as we can get, and MIDI channel 1. That's what a note-on message looks like. A note-off message looks exactly the same: 60, 0, 1. Same layout: note number, velocity, channel. It says play note number 60 at a velocity of 0, on channel 1; that's how we turn it off. Cool. Now, there are other kinds of messages that get sent in MIDI, in particular control messages. Control messages work a little differently, but the data is roughly the same; they actually look pretty similar. Something tells the computer that this is a control message rather than a note message, but the guts of it, the numbers inside, are about the same. The first number is the controller number, the second is the value, and the third is the channel. A controller is something like the mod wheel on your keyboard, or any kind of control that isn't going to send a note: sliders, faders, knobs, that kind of stuff. Foot pedals are a good one, any kind of sustain pedal, something like that. So let's use the sustain pedal as an example.
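As a side note for the curious: in the real MIDI 1.0 wire format, the channel isn't a separate third number; it's packed into the first byte (the status byte) along with the message type, and the two data bytes carry the note or controller number and the velocity or value. Here's a sketch of how the simplified three-number picture maps onto actual bytes; the status values and the sustain pedal's controller number (64) come from the published MIDI 1.0 specification.

```python
# How the simplified (number, value, channel) picture maps onto real
# MIDI 1.0 bytes. The message type and the channel share the first
# ("status") byte; the two data bytes are each limited to 0-127.

NOTE_ON = 0x90          # status nibble for note-on
CONTROL_CHANGE = 0xB0   # status nibble for a control change (CC)

def note_on_bytes(note, velocity, channel=1):
    # Channels are numbered 1-16 for humans, but 0-15 on the wire.
    return bytes([NOTE_ON | (channel - 1), note, velocity])

def control_change_bytes(controller, value, channel=1):
    return bytes([CONTROL_CHANGE | (channel - 1), controller, value])

# Middle C (note 60), full velocity, channel 1:
print(note_on_bytes(60, 127).hex(" "))        # 90 3c 7f
# The matching "off": same note, velocity 0:
print(note_on_bytes(60, 0).hex(" "))          # 90 3c 00
# Sustain pedal fully down: controller 64, value 127:
print(control_change_bytes(64, 127).hex(" ")) # b0 40 7f
```

You'll never need to assemble bytes like this in a DAW, but it shows why MIDI is so lightweight: a whole note event is just three bytes.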
So each possible controller is assigned a number. The sustain pedal's number is standardized (it's controller 64, as it happens), so computers know that your keyboard's sustain pedal is always going to come in on the same controller number; it's always the same. So: sustain pedal at a value of 127 means it's at its top value, pedal all the way down. And it's sent on a channel too, say channel 1. Then we can send a value of 0 to release it, although you don't have to turn controller messages off the way you do notes; you can just leave a controller sitting at 127 if you want. So controller messages work a little differently, but they're the same basic kind of thing. There are other kinds of messages too. Sometimes you'll see SysEx messages; SysEx is short for System Exclusive, which is a fancy way of saying proprietary to whatever weird keyboard you have. If you have a keyboard that does something unusual, some contraption on it that no other keyboards have, then the manufacturer needs a way to send that into the computer, and the way they do that is through a System Exclusive message. It just means there's no standard number for the thing being sent, the way our sustain pedal has a number assigned to it that's always the same. System Exclusive messages are unique to different keyboards, and they exist so that a computer knows to do something with them. So that's basically how MIDI messages look if you pop open the hood and dissect them. 65. Note-On Errors: Let's look at one more oddity about MIDI, and that's the way we start and stop notes. We've talked about what goes into the protocol, but let me show you a weird example of the consequences of doing it that way. Here I have one long note.
If I play this note, cool, and you can hear it with strings, because I still have this track routed to Strings. Actually, just while we're here: if I hit play, you'll see there's no instrument on this track, but we're routing to the Strings track, which is here, and you'll see volume coming up there. That instrument is playing because I'm routing through that channel. Anyway, that's not what I wanted to show you. What I want to show you is this: remember that while the note is playing, nothing happens in the MIDI stream, right? The messages say turn this note on, and then, later, turn it off. So what happens if I start playback in the middle of the note? Where'd it go? It's gone. It never got a message to turn on. That's a weird thing about MIDI. Let me explain that again. If playback doesn't hit the start of this note, we're not going to hear it, because the note-on message never gets sent. If I start playing right here, in the middle, the note-on never came, and we won't hear that note. This is a weird thing about MIDI and it frustrates a lot of people: if you're working with a lot of long notes and you don't start playback at the beginning, you might not hear all of those notes. Now, there's a solution to this. A lot of programs have built in something called chasing notes, or note chasing; they each call it something slightly different, and I'll show you how to do it in Ableton Live in just a second. But first I want you to understand why this happens. There's no note-on message, so playback just runs. And when we get to the note-off message, it doesn't matter, because the note-off just says play this note at a velocity of 0, and it doesn't care that there was never a note-on. So I have to hit the beginning of the note to get the note-on message. Okay, but what if I stop the track before the note-off message comes? Technically, that note is still ringing.
We should still be hearing it. Now, any decent piece of software knows that when you stop the track, it should fire all the note-off messages, sending everything at a velocity of 0 just to make sure nothing stays ringing. But that's something the software does for us; otherwise, that note would still be ringing, because it never got its note-off message. The computer handles that part: it knows to force note-offs when we stop playing. Okay, let's go to a new video and talk about how we set up chasing MIDI notes. 66. Chase MIDI Notes: Okay, so your software can force a note-on message if you start playback in the middle of a MIDI note. Most software can; this is actually a relatively new thing for software to be able to do, but a lot of applications do it now. Live, I believe, does it by default now; in fact, to make the previous video I actually had to go in and turn it off. So I'm going to go into the Options menu and select Chase MIDI Notes. What that does is: if I start playing right here, it says, we're in the middle of a MIDI note, looks back to find what note we're on, and forces a note-on message right where we start playing. So now we hear the MIDI note even if we start in the middle, no matter where we start, and everything is fine. So, Chase MIDI Notes, sometimes called note chasing: make sure that's on. As a general rule, leave it on all the time. There's never really a need to turn it off, unless you're filming a demo about one of the weird quirks of MIDI notes. Assuming you're not doing that, leave it on all the time. 67. Advantages of MIDI: Okay, so what are the big advantages of using MIDI? There are two reasons MIDI is really powerful when making music, and here they are. The first is the option to change the sound. Think about it like this.
If you record a guitar, what you've recorded is the sound of that guitar. You can add effects and things like that, but at the base of it, you have the recording of the instrument that you made, and that is your main tone. With MIDI, we just have note-on and note-off messages, right? So I've got this little chord progression here. And that's cool. But what if I want it to sound like something really weird? I can just change the instrument. Change it again. I can keep changing it all day long. I don't have to re-record anything. So that makes it really powerful. The other thing is tempo. I can adjust the tempo without any change in the audio quality. Now, I think we've already talked about warping and being able to stretch and shrink audio recordings. That works, but every time we do that, we degrade the audio at least a little bit. If you do any really extreme changes of tempo with an audio file and it gets stretched way out, you're going to get glitches and things like that. With MIDI, you don't get those. If this was really short, let me just take it and nudge it all together. It's still just note-on and note-off messages, so this didn't change the quality of the audio. Same thing if I take this whole thing and stretch it out to be super long. Well, that was a bad example. Let me just take one chord and stretch it out to be super long. It's still just turn the note on, turn the note off. So we didn't actually stretch any audio at all, and our audio quality is totally unaffected. So it's immune to being degraded by tempo changes. That's why we really like working with MIDI when we're producing: we can make stuff and then decide later if we want to change the sound, change the notes, which is another thing I haven't talked about yet, and change the tempo later. 68. 69 AdjustingNotes: Let's talk about changing notes. Here's a session I recorded of a whole jazz band, the Augsburg jazz band.
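The reason tempo changes are lossless for MIDI is worth making concrete: the notes live on the beat grid, and only playback converts beats into seconds. Change the tempo, and only that conversion changes; the note data itself is untouched, so there's nothing to degrade. A toy illustration of the math (my own simplification, not any DAW's code):

```python
def beat_to_seconds(beat, bpm):
    """Convert a beat-grid position to wall-clock time at a given tempo.
    MIDI notes are stored in beats; only playback does this conversion."""
    return beat * 60.0 / bpm

# The clip data (a note starting at beat 4) never changes with tempo;
# only where it lands in time does.
note_start_beat = 4.0
print(beat_to_seconds(note_start_beat, 120))  # 2.0 seconds at 120 BPM
print(beat_to_seconds(note_start_beat, 60))   # 4.0 seconds at 60 BPM
```

Audio, by contrast, is a fixed stream of samples, so changing its tempo means resampling or time-stretching the actual signal, and that's where the artifacts come from.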
And you know, there's a ton of tracks here: strings, brass, rhythm, percussion. But when I recorded the piano, I recorded it two ways. I recorded the audio out of the piano, which was a MIDI keyboard, so I recorded the sound of the thing, but at the same time I also recorded the MIDI, right? So we have the sound of it and the MIDI notes. And why would I do that? Because I could bring it back to the studio, look at what the pianist played, and it would give me some flexibility. So let me just show you something here, if I go right here. Okay? That's what the pianist played. And I happen to know this chord has a wrong note in it. I can just fix it, and we can plop it right back into the session, and everything's fixed. So you can do some really nice and subtle editing by recording the MIDI of an instrument. You can fix notes, fix rhythms. I can say this chord came just a hair early, let's nudge it over, and really make it a perfect performance. It was actually better before. So there you go. 69. 70 WhatAreMIDIEffects: Okay, so let's talk about MIDI effects. MIDI effects are wildly different than audio effects, okay? With an audio effect, what we're doing is taking an audio signal, which includes amplitude (volume), rhythm of some kind, and frequency content, right? But with MIDI effects, we don't have most of that. All we really have is data, right? We're going to put MIDI effects into the data stream so we can mess with the data. We can't change the sound, we can't change the frequency content, because that comes later, right? Remember that the MIDI signal, the MIDI clip or the MIDI keyboard or whatever, is just generating these note-on and note-off messages, right? And then those go to a synthesizer, okay? After the synthesizer, we can put audio effects on it. But before the synthesizer, we can only put MIDI effects on it. And so MIDI effects are really limited.
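Because the performance is just data, "fixing a wrong note" is literally changing one number while the timing and dynamics of the playing stay intact. A sketch with an invented note format (the chord and pitches are hypothetical, not from the actual session):

```python
def fix_pitch(notes, wrong_pitch, right_pitch):
    """Replace every occurrence of one pitch with another, leaving the
    timing and velocity (i.e., the performance) untouched.

    `notes` is a list of (start_beat, duration, pitch, velocity) tuples.
    """
    return [(start, dur, right_pitch if pitch == wrong_pitch else pitch, vel)
            for start, dur, pitch, vel in notes]

# Say the player hit E-flat (63) instead of E (64) in a C major chord:
chord = [(0.0, 2.0, 60, 90), (0.0, 2.0, 63, 85), (0.0, 2.0, 67, 88)]
fixed = fix_pitch(chord, wrong_pitch=63, right_pitch=64)
print(fixed)  # the exact same timing and velocities, with 63 replaced by 64
```

This is what the piano-roll editor is doing for you when you drag a note: only the pitch number changes.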
There's not a lot we can do with MIDI effects, but there are a few things that are really important and really handy. So in this section, I want to go into those effects. We will look at adding audio effects to a MIDI instrument soon; by the end of this section, we're going to talk about that. But remember that MIDI effects come before the instrument. In fact, let me just explain that a little bit better. Here is a MIDI clip. Okay, neat. Here are the notes. If I look at the instrument, okay, I can go to MIDI effects. And if I put a MIDI effect on this instrument, it's going to come before the instrument. If I put it on this track, it's going to come before the instrument. Remember these little dots: these say there's MIDI data here, and these lines say audio data. Okay, there's audio and MIDI. So here it's still MIDI. This converts it to audio, because it is an instrument. So any MIDI effect I put on something has to come before the instrument. Audio effects need to come after the instrument, okay, because that's when we're already audio and we can start playing around with the audio signal. But before we turn it into audio, we're on MIDI effects. So let's go into some MIDI effects. Now, I'm not going to go through all of the MIDI effects here in Ableton Live, because some are really unique to Ableton and really specific. But I do want to go through a lot of the common ones that you'll find in different audio programs. 70. 71 Arpeggiator: Okay, the first thing we're going to start with is probably the most popular MIDI effect and the most common to find, and that is an arpeggiator. So if you know the term arpeggio, then you know what this does. In music, if I write a chord for piano, you play all the notes at the same time. But if I say arpeggiate that chord, it means play that chord, but one note at a time, like a harp. That's where the term comes from. So an arpeggiator is going to do that for us.
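That ordering rule (MIDI effects before the instrument, audio effects after) can be sketched as a pipeline. Everything here is a toy stand-in (the "synth" just produces labeled strings instead of real audio), but the shape of the signal chain is the point:

```python
def transpose_up_octave(notes):
    """A MIDI effect: operates on note data (start, duration, pitch)."""
    return [(start, dur, pitch + 12) for start, dur, pitch in notes]

def toy_synth(notes):
    """The instrument: the point where note data becomes 'audio'.
    Here, just a labeled string per note instead of real samples."""
    return ["sine@{}".format(pitch) for _, _, pitch in notes]

def toy_gain(samples):
    """An audio effect: operates on the rendered signal, not the notes."""
    return [s + "*0.5" for s in samples]

notes = [(0.0, 1.0, 60)]
# MIDI effect BEFORE the instrument, audio effect AFTER:
audio = toy_gain(toy_synth(transpose_up_octave(notes)))
print(audio)  # ['sine@72*0.5']
```

Trying to run the functions in the other order wouldn't even type-check conceptually: `toy_gain` has no notes to look at, and `transpose_up_octave` has no pitches once the signal is audio. That's the whole reason DAWs enforce the ordering.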
So let's make some chords here. Let's see, I have an F. Let's just make this nice and simple: a big F major chord. Let's make it one bar long. Let's maybe add another chord. Oh, what's a good chord that goes with F? C will be easy. Okay, there's C. That didn't sound right. F... C, E, G. There we go. And let's do something weird. Let's do, I don't know, an E-flat chord, that would be kind of fun: E-flat, G, B-flat. Let's take this one up an octave, and then let's take this back down. Okay. So I'm going to get rid of that arpeggiator, and here is what our MIDI clip sounds like. Okay, lovely. Now let's put an arpeggiator on it. So an arpeggiator says, play these notes one at a time. That means we have a lot of control over what we can do with the arpeggiator: we can tell it how fast to play, what direction to go, whether we want it to go up and then down, or just up forever. So right out of the box, this is what it sounds like now. Okay, cool. So we can set the style: we can say up, down, up-down, down then up, et cetera. We can say totally random, which is something I rather enjoy. So here's random. Okay? Neat. Speed is this Rate right here. You can see Rate is at an eighth note right now. That means it's going to play our notes one eighth note at a time. So I can speed this up to, say, 16th notes, for example. That's going to be twice as fast, but it's not going to play through the clip twice as fast. It's going to generate notes from the notes that we're giving it while still keeping the chord changes in the right spot. Okay, cool. Now, if I want to get rid of those repeated notes, I can say Random Other. That means always play a different note from the one you're currently playing. Okay? And then I've got some other options here that I can play with. But the main things you're always looking for in an arpeggiator are the style, or direction, and the speed.
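Those two controls, style and rate, are really all an arpeggiator is. A rough sketch of the idea (my own simplified version; a real arpeggiator also tracks chord changes and note lengths):

```python
import random

def arpeggiate(chord, steps, style="up", rate=0.25, seed=None):
    """Turn a held chord into a sequence of (start_beat, pitch) single notes.

    `chord` is a list of pitches, low to high.
    `rate` is the step length in beats (0.25 = a sixteenth note).
    Styles: 'up', 'down', 'updown', 'random'.
    """
    rng = random.Random(seed)
    if style == "up":
        pick = lambda i: chord[i % len(chord)]
    elif style == "down":
        pick = lambda i: chord[-1 - i % len(chord)]
    elif style == "updown":
        cycle = chord + chord[-2:0:-1]   # up, then back down without repeating the ends
        pick = lambda i: cycle[i % len(cycle)]
    else:                                # 'random'
        pick = lambda i: rng.choice(chord)
    return [(i * rate, pick(i)) for i in range(steps)]

f_major = [53, 57, 60]                   # F, A, C
print(arpeggiate(f_major, 6, style="up", rate=0.5))
# [(0.0, 53), (0.5, 57), (1.0, 60), (1.5, 53), (2.0, 57), (2.5, 60)]
```

Doubling the rate (0.25 instead of 0.5) generates twice as many notes from the same held chord, which is exactly why speeding it up doesn't play through the clip faster.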
Now, one trick I like to do with arpeggiators, which is actually really fun, is set it to some kind of random like this and then duplicate it. So now I've got two, and they're both random. What that means is that most of the time we're going to get a harmony, right? Because we've got, in most of these, four possible notes; two of them are going to happen, and every now and then they'll hit the same one at the same time, but not very often. So it'll create this nice kind of moving harmony. Then, if you really want to reinforce it some more, duplicate it again. But on this one, I'm going to take it down to eighth notes and put a different sound on it. Let's go to instruments and do something, some kind of pad or something. That's cool. Sure. Okay, so now we'll have this one moving much slower, adding a kind of big warm sound, and then these going really fast. Really easy: one chord progression, duplicated the track twice, added an arpeggiator, played around with it. Super simple. I'm just going to go nuts with it and add one more, and get rid of the arpeggiator altogether. Now this one is going to play the chords as they are, like that. Okay, now let's hear it. Maybe if I want this one to stand out a little bit, I could select all and shift it up an octave, two octaves. Now this one will be up really high. Neat. So arpeggiators are great. If you've got a chord progression and you're just trying to add life to it, throw an arpeggiator on it, and it'll create a lot of motion. 71. 72 Chord: Okay, the next one is a Chord MIDI effect. This is going to generate chords from a single note. So we wouldn't want to put this on this clip, because it's already chords. So let's get rid of stuff here. Let's actually just see if we can find... this one's actually kind of tricky to use in a case like this, but we're going to try. So let's see. Let's try to look for a little melody in here. Let's do that. Okay, so here's my little melody. Not much.
So with this Chord effect, what we can do, and I'm going to put it on here so we can see, is just add notes to the single note that it has. It's got a note, and we tell it how many semitones above that to add. So let's add four. Actually, I think we start with the first shift at 0, so that's our original note; we'll leave that there. And four will be a major third, seven will be a fifth. Now we'll hear a major chord for every note in this one. Let's just solo it. Right? So there's our chord. What was a single note is now a chord. Okay, that's cool. However, this can be really dangerous, because remember, we're dealing with notes here, and it doesn't know or care what key we're in. So it's building a major chord off this note, which is a C, and that's not actually the chord that's playing here. This one, I think, was a C chord. Yeah, the second one is a C chord, so that's going to sound right. This is going to make an A-sharp chord, or a B-flat chord, which is right. But this one is an F chord, so it's building a C chord on an F chord. Anyway, I don't want to get too much into music theory, but doing this is actually kind of dangerous. Let's hear it anyway. You're going to hear some definitely dissonant things on this first and last chord, but this is what it sounds like. I turned everything else down just a little bit. Yeah, you see, there's some weird stuff in there. One thing the Chord effect is actually really good for, and it works all the time: if you have a melody, go in here, leave your original note, go to this one and add 12, just add an octave. That's always going to sound good. It's good for that. If you want to get fancy with it, add another octave by adding 24. Now it's going to be really screaming. Let's just play that. So we've got three octaves of that melody now. So that's a good way to use Chord: just adding octaves. But otherwise, just remember that if you add notes with it, those are chromatic.
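That chromatic behavior is easy to see in a sketch: the shifts are plain semitone arithmetic on the pitch number, which is exactly why the effect has no idea what key you're in. (The event format below is my own made-up representation for illustration.)

```python
def chord_effect(melody, shifts):
    """For each single note, emit the original plus copies shifted by the
    given semitone offsets. Purely chromatic: pitch + shift, nothing more.

    `melody` is a list of (start_beat, duration, pitch) tuples.
    """
    return [(start, dur, pitch + shift)
            for start, dur, pitch in melody
            for shift in (0, *shifts)]

melody = [(0.0, 1.0, 60), (1.0, 1.0, 65)]   # C, then F

# Shifts of +4 and +7 stack a major triad on EVERY note, in or out of key:
print(chord_effect(melody, (4, 7)))
# The always-safe version: +12 just doubles each note an octave up.
print(chord_effect(melody, (12,)))
```

With `(4, 7)` the F at pitch 65 gets 69 and 72 stacked on it, a major triad on F regardless of whether the underlying harmony wants one; with `(12,)` every added note is the same pitch class, so it can never clash.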
They don't know what key you're in. So it can be easy to create some really dissonant stuff by accident. 72. 73 NoteEcho: Okay, next is Note Echo. Now, this is basically a delay, but a little bit different, because it's a MIDI delay. So I'm going to take this one and add Note Echo to it, and we're going to look at it. Okay, so I still have my arpeggiator on here. So this is just our chords, but the arpeggiator is happening first, and then this Note Echo, or note delay. That means the arpeggiated notes are going to get delayed. If I switch the order, it's going to be much less interesting, because if I put this first, the chords are going to come into the Note Echo. And that might be interesting depending on how I have it set up, but it probably won't be very interesting, because those are just held chords, and a delay like this is more effective if you have something with rhythm. Then those would go into the arpeggiator and get arpeggiated. But if I go this way, first the chord gets arpeggiated, so a lot of rhythm gets generated, and then we can start playing around with the delay. So let me just solo this one. With the delay time, all programs work differently in how they show delay. What Ableton is showing me here is a number of 16th notes: 1, 2, 3, 4, 5, 6, 8, and 16. So four is going to be a whole quarter note, eight is going to be two quarter notes, and 16 is going to be a whole bar. If you want something a little more rhythmic to happen, a little more syncopated, choose an odd number, like three or five, or even one, actually. So let's do one. Feedback is how many times it's going to come back at you. If you imagine a ping-pong ball dropping: it hits once, that's a delay with no feedback. It hits once and then again and again and again and again, yeah, that's a lot of feedback. Okay, so let's put it up about halfway.
Okay, here's what we have now. So we're hearing a bunch of different notes, right? The reason is, our arpeggiator is on 16th notes and we're delaying by a 16th note. So we're hearing the original, and then a 16th note, and then that same 16th note again and a new 16th note, that same 16th note again, and then the previous 16th note and another 16th note, and they're building up into chords. So I'm going to turn my feedback down quite a bit. Okay. Because we are delaying by 16th notes and our arpeggiator is based on 16th notes, it's just kind of piling up. So let's change this to quarter notes. Now let's hear it. It's actually kind of neat with that syncopation. Let's hear it in the context of our whole little jam here, and put a new sound on it. Let's hear it now. Let's take it up an octave. Okay, now let's try it. Okay, I like that. Like weird bells going off. Anyway, that's note delay. 73. 74 NoteLength: Okay, let's duplicate this one and add Note Length. Okay? So with this one, I'm going to take my arpeggiator back up to a 16th note and get rid of the note delay on this track, so I have just Note Length. Now, what Note Length is going to do is a little deceptive. It's going to stretch out our notes. It's not going to change when a note starts, but it is going to change when the note ends. In other words, it's going to delay the note-off message, right? So it's going to let those notes hang on for a lot longer. You can see the trigger is the note-on; so the note-on triggers a delay of the note-offs, basically. So I have this arpeggiator going in 16th notes, and it sounds like this. Okay, now let's turn on the Note Length. Okay, let's crank it up. See, those note-off messages aren't coming until later, and it's almost like holding the pedal down on a piano. I can crank it back down to kind of tighten it back up, which in this case is going to basically undo what we did. Let's see how that works in our goofy little track. It's cool.
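Both of these effects are really just timestamp arithmetic on the note messages. Here's a rough sketch of each, using an invented (start, duration, pitch, velocity) note format; real devices like Ableton's have more parameters, but this is the core of what they do:

```python
def note_echo(notes, delay_beats, repeats, velocity_decay=0.5):
    """A MIDI delay: re-emit each note `repeats` times, each copy landing
    `delay_beats` later and quieter (the 'feedback')."""
    out = list(notes)
    for start, dur, pitch, vel in notes:
        for n in range(1, repeats + 1):
            out.append((start + n * delay_beats, dur, pitch,
                        int(vel * velocity_decay ** n)))
    return sorted(out)

def note_length(notes, stretch=2.0):
    """Note Length's secret: keep every note-on where it is and push the
    note-off later, i.e., scale only the duration."""
    return [(start, dur * stretch, pitch, vel)
            for start, dur, pitch, vel in notes]

hit = [(0.0, 0.25, 60, 100)]                       # one 16th-note C
print(note_echo(hit, delay_beats=0.25, repeats=2))  # original + two quieter echoes
print(note_length(hit, stretch=4.0))                # same start, 4x the duration
```

Notice that `note_echo` creates new note-on messages (more notes), while `note_length` only moves the note-offs: the start times in its output are identical to the input, which is why it feels like holding down a piano's sustain pedal rather than like a delay.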
I like that; it kind of fades out in it. So Note Length: the secret of Note Length is that it's really just delaying your note-off messages. 74. 75 Pitch: All right, let's duplicate this track one more time and add Pitch to it. This is going to be just a straight-up transposition. You'll be able to find these in j