Machine Learning Guide with Hands-On Examples | Kerem Aydin | Skillshare


Kerem Aydin, Software Developer, Unity Developer, Gam

15 Lessons (2h 7m)
    • 1. What We Will Learn? (1:53)
    • 2. Usable Environments (5:30)
    • 3. AI, Machine Learning and Deep Learning (4:54)
    • 4. History of Machine Learning (6:53)
    • 5. Turing Test and Turing Machine (12:13)
    • 6. Machine Learning Workflow (9:47)
    • 7. Machine Learning Models (21:51)
    • 8. Gathering Data (4:55)
    • 9. Data Pre-Processing (5:35)
    • 10. Choosing The Right Algorithm and Model (7:49)
    • 11. Training and Testing the Model (5:17)
    • 12. Evaluation (6:53)
    • 13. Neural Network (8:17)
    • 14. Amazon Face Rekognition (16:23)
    • 15. Clarifai (9:02)

About This Class

Hi there,
Have you ever wondered what's behind the machine learning hype? In this non-technical course, you'll learn everything you've been too afraid to ask about machine learning.
There’s no coding required. Hands-on exercises will help you get past the jargon and learn how this exciting technology powers everything from self-driving cars to your personal Amazon shopping suggestions.
How does machine learning work, when can you use it, and what is the difference between AI and machine learning? They’re all covered.
Gain skills in this hugely in-demand and influential field, and discover why machine learning is for everyone!

In this course, you will learn how to use Amazon Rekognition.

Amazon Rekognition makes it easy to add image and video analysis to your applications. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content.
Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.

Hope you will like the course.

Transcripts

1. What We Will Learn?: Hello dear friends. So welcome to the machine learning guide. In this course, we're going to learn what machine learning is and how it works. This course is very well suited for everybody who's interested in the concept of machine learning. So in this course we're going to learn the basics of machine learning, the machine learning workflow, models and algorithms, and the neural network concept. So first of all, we're going to learn which environments can be used for developing machine learning projects. We'll start at the very beginning of the machine learning story with a little bit of the history of AI and the Turing Test. So during the course, we're going to learn what artificial intelligence, machine learning, and deep learning are. What's the history of machine learning? What is the logic of machine learning? That means understanding the machine learning workflow, machine learning models and algorithms, gathering data, data preprocessing, choosing the right algorithm and model, training and testing the model, and evaluation. In this course, we're going to cover two different examples: Amazon face recognition and Clarifai demographics. So that's our introduction. And in the next video, we're going to explore the environments that we can use to develop machine learning projects. Alright, so I hope to see you in the next video. Till then, have a great day. 2. Usable Environments: Hello dear friends. So in this video we're gonna talk about Jupyter Notebook and Google Colab. Jupyter is a loose acronym meaning Julia, Python, and R. So these programming languages were the first target languages of the Jupyter application. But these days, the notebook technology also supports many other languages. Jupyter Notebook is one of the most successful tools that we can use to keep our notes and calculations together. It's frequently used in repeatable research and data science. All right, so let's see how to use it.
So when we first open Anaconda Navigator, we'll choose Jupyter Notebook and click Launch. And Jupyter will open up a webpage. So now we'll create a folder and change the name to Python Lessons. First click New in the top right corner and choose Folder. Now again, rename our new folder here. Open up our new folder and click New > Python 3. Now it opens a blank page. So this page is our editor, and we can write our code or notes. So why don't I write an example, Hello World, and press Shift and Enter at the same time. And that runs the code. So the result is shown in the output box. Let's do another example, three plus four, and run it. And the result is, wait for it, seven. So as you can see, we can also do mathematical operations in Jupyter Notebook. Alright. So we'll have a look at Google Colab. So Colaboratory, or Colab for short, is a product from Google Research. Colab allows anybody to write and execute arbitrary Python code through the browser and is especially well suited to machine learning, data analysis, and education. More technically, Colab is a hosted Jupyter Notebook service that requires no setup to use while providing free access to computing resources, including GPUs. Technically, Jupyter and Colab are basically the same thing, but there are some differences between them. So Jupyter is, of course, the open-source project on which Colab is based. Colab allows you to use and share Jupyter notebooks with others without having to download, install, or run anything. Colab notebooks are stored in Google Drive or can be loaded from GitHub. So let's have a look and see how to use it. So first up, we'll just open up our browser and type in Colab. Click on the URL. Now on this page, if you've got a Google account, you've got to log in. If you don't, you're going to need to create one. So then after we log in, just click the File tab at the top of the page and select New Notebook. Our notebook is ready to use. So actually, we use Colab just like Jupyter.
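The two cells from the demo above, written out as plain Python (the same lines run in a Jupyter or Colab cell when you press Shift+Enter):

```python
# Cell 1 from the demo: print a greeting
print("Hello World")

# Cell 2: notebooks also work as a calculator;
# in a notebook, the value of the last expression is echoed automatically
result = 3 + 4
print(result)  # 7
```

The only notebook-specific behavior here is the automatic echo of the last expression; as a plain script, you need the explicit print.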
So let's give it a shot. Let's do our opening example, Hello World. And again, you can use Shift and Enter at the same time, and that runs the code. And the result is shown here. So let's do another example, three plus four. See how it does complex mathematics? There's your result. Now, here's a very useful feature: you can change runtime types. With this feature we can accelerate our processes. So when you start to develop machine learning projects, you're going to realize that speed is really important, and with the help of this feature, we can increase our speed. We're not gonna get into that any deeper, because that's it for now. We went over some of the basic things about Jupyter Notebook and Google Colab. So now I think we're ready to start on these lessons in machine learning. All right, so I'll see you in the next lesson. Until then, have a nice day. 3. AI, Machine Learning and Deep Learning: Hello dear friends. So in this lesson we will start our course on AI, machine learning, and deep learning with Python. So let's go ahead and get started. So if we want to learn the concept of deep learning, we really should know some important terms like artificial intelligence and machine learning. In the past few years, and certainly now, artificial intelligence has become an incredibly popular subject, and it's virtually inescapable in our lives today. We have seen and will continue to see many, many, many articles about AI and machine learning. And are they accurate? Well, you tell me. If you work in IT or in a related sector, you see these terms cropping up every day, and certainly more than a handful of times. So these are the days that we're seeing self-driving cars in traffic, or we talk with chatbots online and we don't even know it. Shopping sites use virtual assistants. It's pretty amazing, right? So then our first question really is: what is AI? So of course, in order to answer this question, I mean, really answer it.
We need to go back in time a little. As you know, the world was a cloud of gas and dust that, well, wait a second. We don't need to go back that far. I'm just kidding. But my point is that everything starts with a question. A few decades ago, the pioneers of computer science asked a question: what if computers could think like humans? And then, of course, the journey began. Even today, we're still looking for the answer to this question. That's why it might seem like an elementary question, but I think it's the very starting question. Now there are tons of different questions and theories and works and articles; everybody's trying to get their very own definitive definition of artificial intelligence. Now of course, we've made some progress, but we've still got a long way to go. So in my general estimation, AI is the effort to automate intellectual tasks normally performed by humans. And when we dive deeper into this definition, we figure out that AI is actually intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals, right? So it's a demonstrated intelligence. So the pioneers, they tried to develop AI, and they turned to code. In fact, early chess programs are a really good example of this. In the beginning, the programmers coded these chess programs with hard-coded rules, but it didn't actually qualify as AI. And for a while, these pioneers thought that if a sufficiently comprehensive set of rules was created, artificial intelligence could eventually be achieved. Now, this approach became known as symbolic AI. It was very popular, especially in the 1980s. Symbolic AI achieved something like playing chess, but obviously that's not enough, because humanity as a whole needs a solution for more complex problems, such as image classification, speech recognition, and language translation.
See, symbolic AI may be really good at playing chess, but when it comes to image classification or more complex stuff like languages, well, it was totally useless. And now enter the era of machine learning. We have arrived, my friends. So for now, I'm just gonna save the story and the history of machine learning for the next lesson. But be prepared for a breathtaking story. Alright, until then, well, have a great day. And I hope to see you in the next lesson. 4. History of Machine Learning: Hello my dear friends. So in this lesson, we will learn the journey of the machine learning process. So why don't we get started? So earlier in the last lesson I said, you know what, everything has got to have been started by an idea. Maybe that idea was brought about by a question. Which came first, the idea or the question? Well, in any event: what if machines started thinking like humans? Hmm? Alright, so way back in the 19th century, a very famous inventor invented a machine. The inventor's name was Charles Babbage. The machine's name was the Analytical Engine. So this machine's main purpose was computing certain mathematical calculations. Now, it was a visionary invention, and of course, it was way ahead of its time. A decade later, another inventor realized this machine wasn't originating anything, right? It was only doing everything that the inventors knew how to order it to perform. And this inventor's name was Lady Ada Lovelace. Now this was an incredible milestone in machine learning, and this observation also started to form the basis of the Turing Test concept. Now we have another inventor who enters the scene. His name, of course, was Alan Turing, the inventor of the Turing machine. Now, basically the purpose of the Turing machine is to check whether it is logically possible to say that a machine can think.
Basically, it tests a machine's ability to exhibit intelligent behavior that is roughly equivalent to, or even indistinguishable from, that of a human being. Turing suggested that a human evaluator would judge natural-language conversations between a human and a machine designed to generate human-like responses. So this is a very famous phenomenon in the AI world. I'll tell you the full story about the Turing machine and the Turing test, but that'll wait till the next lesson; for now, let me get back to this story. So machine learning arises from the question: can a computer learn on its own how to perform a particular task? Going beyond what we know how to order it to perform, can a computer actually surprise us? Can a computer learn those rules automatically by looking at data, rather than having programmers reach in and manually create the data-processing rules? So this question opens up a door to a new programming paradigm. As we discussed in our last lesson, the classical programming and AI approach was based on humans inputting rules, and data to be processed according to those rules, with predictable answers as the outcome. But as you know, that's not AI. In machine learning, humans input the data, as well as the answers expected from the data, and the machine creates the rules. These rules can then be applied to new data to generate original answers. And hopefully you can see this on the chart. It is the main idea of the machine learning process. All right, so in other words, in classical programming, the rules and the data are the input and the answers are basically the outcome, the output. Well, in machine learning, data and answers are the input, and we get the rules as the output. So machine learning systems are trained rather than explicitly programmed. A system is presented with many examples of a task, and it finds the statistical structure that allows it to finally find rules to automate the task.
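The two paradigms on the chart (rules + data → answers, versus data + answers → rules) can be sketched in a few lines of Python. This is a minimal illustration with made-up numbers, not a real training algorithm: the "rule" is a single weight w estimated by least squares.

```python
# Classical programming: a human writes the rule, the program applies it.
def classical_rule(x):
    return 2 * x  # hard-coded by a programmer

# Machine learning: the program is given data and answers,
# and estimates the rule (here a single weight w) from them.
data = [1, 2, 3, 4]
answers = [2, 4, 6, 8]  # made-up labeled examples

# Least-squares estimate of w in "answer = w * x"
w = sum(x * y for x, y in zip(data, answers)) / sum(x * x for x in data)

# Both approaches can now answer for new, unseen data
print(classical_rule(10))  # 20, from the hand-written rule
print(w * 10)              # 20.0, from the learned rule
```

The point is only the direction of the arrows: in the first case the rule came from a person; in the second, the same rule was recovered from examples.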
So these days, some developers try to do just that: give an enormous amount of data and answers to the AI, and then try to generate an answer. And there have been some really incredible developments in the last few years, right? We've got some promising results. For example, if you want to automate the task of tagging your vacation pictures, you could provide a machine learning system with many examples of images already tagged by humans, and the system would learn statistical rules to associate certain pictures with specific tags. "You can't get rid of doubt by replacing it with trust." Hm, that's pretty deep. But what do you think about this aphorism? How about the fact that it was written by an AI? So a developer used AI and coded a website. It is designed in such a way that a new aphorism will appear in front of you every time the webpage is refreshed. Now, that's pretty cool. So what would you say about an AI that writes code for you? Is that possible? Well, imagine: you could just say the operation that you want to do, and the AI will write the code that you need. Now, wouldn't that be something fascinating? And what would you do if I said it was already done? Well, sure. Today, AI can write a novel, just mimicking the greatest writers, or even write a poem by imitating famous poets. Now, I know, I know, you'd think it's a joke, but these days, nothing is impossible. So aren't you thrilled? Because I am. And if you're ready, we are going to start on our journey. And I hope that you're going to be there in our next lesson. Until then, have a nice day. 5. Turing Test and Turing Machine: Hello dear friends. So in this lesson we will examine the Turing machine and the Turing test. Yep, it's a very popular phenomenon in AI, so you've got to know it. Well, all right, let's get started. So as you remember, in our previous lessons we've talked about Alan Turing, the inventor of the Turing machine and the Turing Test. So now we'll examine the machine and the test.
So automata and algorithm analysis, which constitute an important part of computer science, are also one of the cornerstones underlying studies in linguistics. A Turing machine is simply a machine that consists of a head and a tape. Now, the operations that can be done on the machine can be listed as: write, read, fast-forward the tape, and rewind the tape. The whole theory here is based on these four simple operations. Languages and operations are classified according to whether a job can be done using only the operations above, or whether a language can be reduced to these four simple operations, and to what extent. This classification is shown in the above Venn diagram. And at the same time, you see the languages that are level one, type 1 in the Chomsky hierarchy. These can be accepted by the Turing machine, and they include all type 2 and type 3 languages, that is, context-free languages and regular languages. In addition, a Turing machine can also process words of the form a^n b^n c^n, which context-free languages cannot process; they cannot produce or parse them. All right, so let's move on. So Turing machines are academically defined as what is on your screen, but now I'll break it down for you. The parts of the machine, indicated here by the letter M, are listed here. The symbol Q represents a finite set of states; in other words, it is the set of states the machine can take during processing. This symbol indicates the alphabet that includes all letters found in the language; for example, if you're using binary numbers, it is accepted as 0 and 1. And here, this symbol indicates the input set to be given to the machine. So it would be correct to say that sigma is a subset of gamma, as the input set cannot contain any symbols other than the letters in the language. Alright? This symbol holds the transitions in a language that the machine will use during its operation. This symbol denotes the blank spaces on the tape.
So in other words, this symbol is read when there is no information on the tape. The symbol q0 holds the initial state of the machine, and hence q0 must be in Q. The symbol F holds the final states of the machine, and F must again be a subset of Q. So just by using these symbols, we can build an example Turing machine, like this. For example, let's take the regular expression a*, which describes a simple language, build a Turing machine for it, and see if our machine accepts three a's, in the form "aaa", given to us. So to be clear, let's define our machine as follows. And if we are to interpret this machine: q0 and q1 are given as the value of Q, so our machine will have two states. "a" and "x" are given as the value of gamma; in other words, the symbols used in our machine consist of a and x. "a" is given as the value of sigma; in other words, only the input a is acceptable to the machine. Two transitions are given as the value of delta: (q0, a) → (a, R, q0) and (q0, x) → (x, L, q1), where R is winding right and L is winding left. And as you can see, delta holds the transitions between the states in Q. "x" is given as the blank value; from this, it's understood that the symbol x is actually the empty symbol, and it is the value read when there is no value on the tape. With q0, the state of the machine in its initial state is specified. The value q1 is given as the F value, so when our machine reaches the q1 state, it ends, or halts. And if it does come to this state, it will accept the input read up to that point. Now it's also possible to show this definition visually, like this. Alright, so here's a, yeah, great-looking Turing machine. I will take no artistic credit. But let's examine our machine's sample run and the tape status step by step. So in the first step, let's assume that there are three a's on our tape. And let's go step by step to see whether our machine will accept this value of three a's. And what we want to be able to do is build a machine that accepts three a's as a value.
So in this case, we do have the expected value on the tape, and then we wonder whether it will be accepted or not. The value that our machine's head reads is the symbol a. So according to the transition design of our machine, we start in q0, and when an a comes up, we have to wind the tape to the right and stay in state q0. Now, in the next case, the value our head reads is the second letter a on the tape. And in this case, the machine is designed to wind the tape to the right and remain in the q0 state again. Now in the third case, the value read by our head is again the symbol a, and similar to the previous two cases, we wind the tape to the right as a result of reading the symbol a while in the q0 state, and remain in the q0 state. Now, in step four, the value that we read from the tape is the space symbol x. For this value, our machine is designed to go to the q1 state, and we order the tape to wind left. Now, the last state of the machine, the q1 state, was designed as the acceptance and end state of the machine, the F set in the machine design. So our work ends here, and therefore, we accept the three a's as input. Now I know it's a lot all at once if you haven't yet encountered, or if it's been a while since you've encountered, the Turing machine, but this is basically the working principle of the Turing machine. So let's just move ahead to the Turing test, then. So the concept of the Turing test, well, it forms the basis of artificial intelligence. As you may remember, this is a test that was created by Alan Turing, and it basically describes the ability of a computer to behave like a human. Will it, or won't it? So if one of the goals of artificial intelligence studies is to make a computer that works like a human, then how can a computer work like a human, right? It's philosophical and mathematical all at once. So Turing explains this with a simple test.
So let's say there is a human test subject looking for answers to questions. Now there are two computers behind a wall. One of them has a person at the keyboard. The other one has software programmed to answer questions. So then the test subject inputs a question and gets an answer back. It's then up to the test subject to decide which one of the computers has responded to the question. Is it the one with a human typing the answer back? Or does the answer come from the computer that is software-driven? So let me show you: in this figure there is a test taker on the left side of the wall, and this person is connected to the two computers. And remember, one behind the wall is a computer from which a real person replies, and the other behind the wall is a computer that generates answers via a software program. So how does it play out? Well, the test taker should be able to figure out, from the questions that they write and the answers they receive, which computer is a real person and which one is software. Now, the thing is, there's no limit to the questions that can be asked here. In fact, any question can be asked. For example, the computer could definitely get the square root of the number 4,096, and probably return the answer a lot faster than most humans that I know. And conversely, it could take the software longer, if ever, to develop an appropriate response to a deep but simple question such as "What's up today?" So interestingly enough, this question breaks the world down into two decidedly different types of people. We all of a sudden have those who think that this test will never be passed, and we've got an equally passionate group of people who think that, yes, software will be able to pass this test one day. So which one are you? Maybe you'll be the one that comes up with the software that finally passes the Turing test. All right, my friends, so that's it. This is the video where we talked about the Turing test as well as the Turing machine.
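The a* machine walked through above, with its two transitions (q0, a) → (a, R, q0) and (q0, x) → (x, L, q1), is small enough to simulate in a few lines. Here's a minimal sketch in Python; the function name and the tape representation are my own, but the state set, blank symbol, and transition table are the ones from the lesson:

```python
# Simulator for the a* Turing machine from the lesson:
# states q0 (start) and q1 (accept), blank symbol "x",
# delta: (q0, "a") -> write "a", move Right, stay in q0
#        (q0, "x") -> write "x", move Left,  halt in q1

def accepts(tape_input):
    delta = {
        ("q0", "a"): ("a", +1, "q0"),  # read an a: keep winding right
        ("q0", "x"): ("x", -1, "q1"),  # read blank: wind left and accept
    }
    tape = list(tape_input)
    state, head = "q0", 0
    while state != "q1":
        # any cell outside the written input reads as the blank symbol
        symbol = tape[head] if 0 <= head < len(tape) else "x"
        if (state, symbol) not in delta:
            return False  # no transition defined: the machine rejects
        write, move, state = delta[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = write
        head += move
    return True

print(accepts("aaa"))  # True: the three a's from the walkthrough
print(accepts("ab"))   # False: "b" is not in the input alphabet
```

Note that the empty string is also accepted, which matches the language a* (zero or more a's).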
I certainly hope that it either brought back some fond memories or that you understand it a little bit better now. So we're going to move on in our next lesson. Until I see you then, have a great day. 6. Machine Learning Workflow: Hello dear friends. So in this lesson, we're going to talk about the machine learning workflow. So let's get started. Now, like I was saying before, machine learning's basic purpose is to make computers learn from the data that you give them. Your code provides an algorithm that adapts to examples of intended behavior, rather than spelling out every action that the computer must take. So how do you do that? Well, let's have a look. So in this diagram you can see the machine learning workflow. When we use machine learning to solve our problems, we need to follow these steps. Now, of course, our first step is evaluating the problem. So when we start out, we need to think about how to solve the problem by way of machine learning. And then you can ask yourself these questions: Have we analyzed the problem that we need to solve? Why do we need the information that we are trying to extract from the model? Is machine learning even the best approach to solve this problem? So yeah, the machine learning approach will come in very handy if we have a large data set for our problem. And unfortunately there is no exact amount, so you're just going to have to estimate how much data is enough. But each feature that's included in the model increases the number of samples, things like data records, needed to properly train the model. So in addition to all that, we'll need to divide our data set into three separate groups: training, validation, and testing. We should look for an easy and concrete way to frame our problem: how can we measure the success of the model that we've created? So one of the biggest challenges that we'll face when creating a machine learning model is knowing when the model development phase is over.
It may be tempting to drag out the model development process by making continuous improvements. So before starting the process, we've got to determine the definition of success that we need, as clearly as we can. And once we've done that, we can go to the next step. So this step is all about gathering data. The process of gathering data depends on the type of project that we desire to make. If we want to make an ML project that uses real-time data, we can build an IoT system that uses data from different sensors. Second, the data we collect comes from various sources, such as files, databases, sensors, and many other sources like that. But the collected data can't be used directly for performing the analysis process, because there might be missing data, a lot of missing data, extremely large values, and unorganized text data or noisy data. So in order to be able to solve this problem, data preparation has to be prioritized. Now, we could also use some free datasets, which can be found anywhere you search on the internet. Kaggle and the UCI Machine Learning Repository are repositories that are used, for the most part, for making machine learning models. Kaggle is one of the most visited websites that's used for practicing machine learning algorithms. So go there and spend some time looking around. Now, our next step is data preprocessing. Alright, so what do we mean by data preprocessing? Data preprocessing is the process of cleaning the raw data; that is, the data that's collected in the real world is converted to a clean data set. So that means whenever the data is gathered from different sources, it's collected in a raw format, and this data isn't necessarily usable in that form for our analysis. Therefore, certain steps are executed to convert the data into a smaller, clean dataset. This part of the process is called data preprocessing. So why do we need it? Well, as you know, data preprocessing is cleaning raw data so that it can be used to train the model.
So we definitely need this data preprocessing step to achieve the best results from the applied model in machine learning and deep learning projects. Now, we're gonna get into that a whole lot deeper in one of our next lessons. The next step is researching the model that's going to be best for the type of data. So as you may guess, in a machine learning project, our main goal is training the best-performing model possible using the preprocessed data, developing our model using established machine learning techniques or by defining new operations and approaches. So here, this diagram shows us the machine learning process with categories. We're going to get into these a lot deeper in our upcoming lessons. So let's get to the next step. And the next step is training and testing the model on data. Now, for a typical training run, we'll initially split the data into three different sections, which are training data, validation data, and testing data. Alright, so we train the classifier using the training data set, tune the parameters using the validation set, and then test the performance of the classifier on an unseen test data set. An important point to be aware of here is that during training, a classifier can only access the training and validation sets. That means that the test data set has got to be separated and not used when you're training a classifier. The test set is only going to be available during the testing of the classifier. Rather than trying to explain it all in English, I'll show you a diagram, and that'll probably make more sense. Now check it out. The training set is the material through which the computer learns how to process information. Machine learning uses algorithms to perform the training part. It's a set of data that's used for learning, that is, to fit the parameters of the classifier. Next, the validation set.
So cross-validation is primarily used in applied machine learning to estimate the skill of a machine learning model on unseen data. The validation set is a set of unseen data, held out from the training data, that is used to tune the parameters of a classifier. And the test set: it's a set of unseen data used only to assess the performance of a fully specified classifier. So then, once the data is divided into the three given segments, we can start the training process. So it's a lot, but it's enough for now for this step. Alright, so we'll move on to the next one. The last step is evaluation. So model evaluation, of course, is an integral part of the model development process. It helps us find the best model that represents our data and shows how well the chosen model will work in the future. To improve the model, we might tune the hyperparameters of the model and try to improve the accuracy. And also, you've got to look at the confusion matrix to try to increase the number of true positives and true negatives. Alright, so that's it. In this lesson we talked about the machine learning workflow very briefly. It's just an overview, but it gets really complex. So in our next lesson, we're going to talk about machine learning models. Hope to see you there. Until then, have a nice day. 7. Machine Learning Models: Hello my friends, so glad you made it back. Now, what we've been talking about this whole time is the machine learning workflow. So in this lesson we're going to get into learning models. Alright, so let's get started. So hopefully you remember that we said that if we give data and answers at the same time to a computer, the computer can learn the rules, and then in the future it can generate original answers. So this approach is called machine learning, and the process is named learning, just like for you and me. Now, machine learning was created to solve problems in situations where analytical models are just not adequate, when equations and laws are not promising.
We use machine learning techniques to derive a model using training data. As you can see, we need data for machine learning. And if we get enough data and answers, then the AI can create the rules. So today on the big planet Earth, we've got some very big data. It's a catchphrase out there, right? Big data, big data, whatever. So obviously it is increasing every day, and we try to figure out how to use it. So looking at this as a problem, developers thought, well, we can use machine learning. And indeed they have achieved something, and every day they achieve something else, something new. So let's break it down a little bit. Basically, we try to develop machine learning algorithms to create programs that can learn from data and develop from experience without intervention. Now, these algorithm models can be categorized into two different categories: supervised learning models and unsupervised learning models. Sounds very complex; however, it won't be when we get into it. Let's have a look at our first candidate, the supervised learning model. So supervised machine learning creates a model that makes evidence-based predictions in the presence of uncertainty. A supervised learning algorithm takes a known set of input data and known responses to that data, then trains a model to generate reasonable predictions for the response to new data. One way to use supervised learning is if you have known data for the output you're trying to predict. For example, you may want to train a machine that will help you predict how long it will take you to get home from work. Okay, so here you start by creating a set of labeled data, and this will include weather conditions, time of day, as well as holidays. So all of these details go into our input. Then the output is the time it took to get back home that day. So we know instinctively that if it rains outside, it'll take longer to get home. But the machine needs data and statistics; it doesn't know instinct.
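To make the commute example concrete, here is a toy sketch in Python with made-up numbers. It uses the simplest possible "model", a one-nearest-neighbor lookup over labeled examples, just to show the pattern of labeled inputs producing predictions for new inputs (real projects would use richer features and a proper learning algorithm):

```python
# Toy labeled training data for the commute example:
# (rain in mm, observed commute in minutes) -- invented numbers.
training = [(0, 30), (2, 35), (5, 45), (10, 60)]

def predict_commute(rain_mm):
    """1-nearest-neighbor prediction: reuse the answer of the
    closest known example."""
    nearest = min(training, key=lambda pair: abs(pair[0] - rain_mm))
    return nearest[1]

print(predict_commute(4))  # 45: the closest labeled day had 5 mm of rain
print(predict_commute(0))  # 30: a dry day looks like the dry training day
```

Even this crude model captures the relationship the lesson describes: more rain maps to longer predicted commutes, purely because the labeled data says so.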
So let's see how we can develop a supervised learning model for this example that will help determine commute time. So the first thing we need to do is create a training set. Now this training set will include related factors such as the total commute time, weather, time of day, et cetera. And then based on this training set, our machine can see that there is a direct relationship between the amount of rain and the time it will take to get home. So then, as it detects that the more it rains, the longer it's going to take to get back home, it can also see the connection between when we leave work and when we will be on the road. So the closer we are to 06:00 PM, the longer it will take to get home. So our machine might just find some relationships with our tagged data. And then it begins to see how the rain affects the way people drive, and that more people travel at certain times of the day. I mean, this gets into really fascinating territory quickly, don't you think? Alright, so if you're ready for the next one, let's get into the supervised machine learning algorithms. So supervised learning uses classification and regression techniques in order to develop predictive models. So let's have a look at our first technique, which is classification. Classification is a technique of dividing our data into a desired number of different classes, where we can assign a tag to each class. For example, it predicts different responses such as whether an email is genuine or spam, or whether a tumor is cancerous or benign. So classification models categorize input data. If your data can be labeled, categorized, or classified into specific groups, then you can use classification. I'll give you another example: handwriting recognition apps. They'll use classification to recognize letters and numbers. Common algorithms for classification include support vector machines, or SVM,
decision trees, k-nearest neighbors, naive Bayes, discriminant analysis, logistic regression, and neural networks. So we can use classification techniques in speech recognition, handwriting recognition, biometric identification, document classification, et cetera, so on and so forth. So let's get into this support vector machine, or SVM. It's a discriminative classifier, formally defined by a separating hyperplane. So in other words, given labeled training data from the supervised learning, the algorithm outputs an optimal hyperplane which categorizes new examples. In two-dimensional space, this hyperplane is a line dividing a plane into two parts, where each class lies on either side. Support vector machines are mainly used to distinguish between two classes of data in the most convenient way. Hyperplanes are a great fit for this. Now today, SVMs are used in many classification problems, from face recognition systems to voice analysis. Some of the advantages: they are certainly effective in higher-dimensional spaces, and they're more effective when the number of dimensions is greater than the sample size. Only a subset of the training points, the support vectors, are used in the decision function; therefore, the memory that gets used is used more efficiently. And they are certainly versatile: many different kernel functions can be used for the decision function. All right, so let's move on from that. Very good. So our next technique is regression. Now it's usually used to predict a continuous value, so house size, price, et cetera. Estimating the price of a house given its characteristics is one very common example of regression. You can use regression techniques if you're working with a range of data or if the nature of your response is a real number, such as temperature or time to failure of a piece of equipment or something.
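Before we dig into the regression algorithms, here's a tiny sketch of the separating-hyperplane idea behind the SVM. A full SVM optimizer is beyond a transcript like this, so this sketch uses the perceptron, a simpler ancestor that also learns a separating line; a real SVM additionally maximizes the margin between the two classes. All the data points and settings here are invented for illustration.

```python
# Perceptron sketch of the separating-hyperplane idea (an SVM refines this
# by also maximizing the margin). The 2D points below are made up.

def train_perceptron(points, labels, lr=0.1, epochs=100):
    """Learn weights w and bias b so sign(w . x + b) matches the -1/+1 labels."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified point
                w[0] += lr * y * x1                   # nudge the line toward it
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, p):
    return 1 if w[0] * p[0] + w[1] * p[1] + b > 0 else -1

# Two linearly separable clusters in 2D.
points = [(1, 1), (2, 1), (1, 2), (6, 6), (7, 6), (6, 7)]
labels = [-1, -1, -1, 1, 1, 1]
w, b = train_perceptron(points, labels)
print([predict(w, b, p) for p in points] == labels)  # True
```

On linearly separable data like this, the perceptron is guaranteed to converge; the SVM's extra step is picking, among all the lines that separate the clusters, the one with the widest margin.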
Common regression algorithms will include linear models, nonlinear models, regularization, stepwise regression, decision trees, and neural networks. So at this point we can say that when training an artificial intelligence using supervised learning, we give it an input and tell it the expected output. So if the output produced by the artificial intelligence is wrong, it will readjust its calculations. So this process gets repeated over and over and over on the data set until the error rate of the artificial intelligence is minimized. Logistic regression is a statistical method that's used to analyze data with one or more independent variables in order to determine a result. And the result is measured by a binary variable. So that means that there are only two possible outcomes, right? So in logistic regression, the dependent variable contains data coded as binary: that is, only 1 (true, success, pregnant, et cetera) or 0 (false, failure, non-pregnant, et cetera). The purpose of logistic regression is to find the most suitable yet biologically plausible model to describe the relationship between a dichotomous characteristic, which is the dependent variable (the response or outcome variable), and a set of independent predictive or explanatory variables. So there's also linear regression. Linear correlation and simple linear regression are statistical methods that examine the linear relationship between two variables. So it's worth highlighting the following difference here: correlation shows how related two variables are, whereas linear regression involves creating an equation or model that allows one to estimate the value of one variable from the other, based on the relationship between the two variables. So in that way, linear regression is used to find the straight line or hyperplane best suited to a set of points.
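For simple linear regression, that best-suited straight line can actually be computed in closed form with the classic least-squares formulas. Here's a minimal sketch; the data points are invented so the fit comes out exact.

```python
# Simple linear regression: fit y = slope * x + intercept by least squares.
# slope = covariance(x, y) / variance(x); intercept = mean_y - slope * mean_x.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]  # these points lie exactly on y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

On noisy real-world data the points won't sit exactly on the line, of course; the same formulas then give the line that minimizes the squared vertical distances to the points.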
So in other words, linear regression establishes a relationship between the dependent variable y and one or more independent variables x, using the best-fit straight line, also known as the regression line. Now, tree-based learning algorithms are among the most used supervised learning algorithms. In general, they can be adapted to solve all of the problems, classification and regression, that we've dealt with. So methods such as decision trees, random forests, and gradient boosting are all widely used in all kinds of data science problems. Therefore, it's very important for data analysts to learn and use these algorithms. The decision tree algorithm is one of the data mining classification algorithms. So they have a predefined target variable, and by their nature, they offer a top-down strategy. So just quickly, a decision tree is a structure that's used to divide a data set containing a large number of records into smaller sets by applying a set of decision rules. In other words, it's a structure that's used to divide large amounts of records into very small groups of records by applying simple decision-making steps. Now, how the split occurs in decision tree algorithms is one of the factors affecting the accuracy of the tree. The division criteria for classification and regression problems are generally different. Decision trees use multiple algorithms to decide whether to split a node into two or more sub-nodes. Creating child nodes increases the homogeneity of the child nodes. So in other words, we can say that the purity of the node increases with respect to the target variable. So algorithm selection is based on the type of the target variable. Makes sense. Now, random forest is probably one of the most popular machine learning models out there, because it gives good results without hyperparameter estimation, and it's also applicable to both regression and classification problems.
So to understand the random forest, it's necessary to first understand decision trees, which, well, are the basic building block of this model. However, one of the biggest problems of decision trees, which are one of the traditional methods, is overlearning, or overfitting. So in order to solve this problem, the random forest model randomly selects tens or hundreds of different subsets from both the data set and the feature set and trains on them. So it's with this method that hundreds of decision trees are created, and each decision tree makes individual predictions. So then at the end of the day, if our problem is regression, we take the average of the estimates of the decision trees; if our problem is classification, we choose the prediction with the most votes. So a good example would be, let's say that you want to watch a good movie tonight and, well, there's a lot out there, you're confused. So if you call a friend, they'll ask you about the type of movie that you prefer, maybe by duration, year, actor or director, whether it's Hollywood or alternative or whatever. And if they make a prediction based on the movies that you have watched before, which is the training set, with various questions from their question set, then in this case your friend becomes a decision tree. So let's say that 20 of your friends each choose different questions from this set of questions and advise you based on your answers, and then you choose the most recommended movie: those friends together will be the random forest. Now, since training takes place on different datasets in the random forest model, the variance, in other words the overfitting, which is one of the biggest problems of decision trees, decreases. And in addition to that, we reduce the chance of an outlier dominating the sub-datasets that we create with the bootstrap method. Now another feature of the random forest is that it tells us how important the attributes are.
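The "ask 20 friends and go with the most recommended movie" step is just a majority vote over individual predictions, and that part is simple enough to sketch directly. The "friends" and their votes below are invented stand-ins for trained trees.

```python
# Majority vote over individual predictions, as a random forest does for
# classification. Each string stands in for one hypothetical tree's answer.
from collections import Counter

def majority_vote(predictions):
    """Return the most common prediction among the ensemble's votes."""
    return Counter(predictions).most_common(1)[0][0]

votes = ["comedy", "drama", "comedy"]  # three "friends" (trees) voting
print(majority_vote(votes))  # comedy
```

For regression, the forest would average the trees' numeric estimates instead of counting votes, exactly as described above.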
The importance of an attribute is related to how much that attribute contributes to the explanation of the variance in the dependent variable. So we can give the random forest algorithm x number of attributes and then ask it to select the most useful, which would be y. And then if we want to, we can use this information. Let's go ahead and move on to unsupervised learning algorithms. So at its most basic level, unsupervised learning is a machine learning technique where we don't need to supervise the model. Instead, we let the model run on its own to discover the information. Unsupervised learning algorithms enable us to perform far more complex processing tasks compared to supervised learning. In learning without a teacher, the system is not taught, right? It learns from the data. Unsupervised machine learning finds all kinds of unknown patterns in the data. So unsupervised methods will help us find properties that can be useful for categorization. And if you train an artificial intelligence using unsupervised learning, you allow the artificial intelligence to make a logical classification of the data. So a good example of unsupervised learning is an artificial intelligence that makes predictions for an e-commerce website. Because here it's not going to use a labeled input and output data set, so it hasn't been taught anything; instead, it's going to create its own classification using the input data. So it'll tell you which types of users are likely to buy which different products. So in unsupervised learning, we can use two different techniques. The first technique is clustering. Clustering is an important concept when it comes to unsupervised learning, and I'll tell you why. It mainly deals with finding a structure or pattern in a collection of uncategorized data. Clustering algorithms will process our data and find natural clusters or groups if they exist in the data. And we can also modify how many clusters our algorithms should identify.
And then it also allows us to adjust the granularity of these groups. And of course, there are different types of clustering that we can use as well. K-means clustering: k-means is an iterative clustering algorithm that helps you find the best clusters through each iteration. Initially, the desired number of clusters is selected, and in this clustering method, you need to cluster the data points into k groups. Now a larger k simply means smaller groups with more detail. Hierarchical clustering: this clusters your data points into supersets and subsets. So you can divide your customers, for instance, into younger and older ages, and then divide each of these groups into their own individual clusters. Probabilistic clustering: this will cluster your data points into clusters on a probabilistic scale. So obviously they're all very different in their usage types, and each of them will have its own pros and cons. So let's hit up our second technique. It's association. So association rule learning is unsupervised learning where the algorithm tries to learn without a teacher, as the data won't be labeled. An association rule is descriptive, not predictive, as a method, and it's generally used to discover interesting relationships that are hidden in large datasets. So the relationships are usually represented in the form of rules or frequent itemsets. Association rule mining is used to identify new and interesting insights between different objects in a set: for instance, frequent patterns in transactional data or any sort of relational database. They're commonly used for market basket analysis (which items are bought together?), customer clustering in retail (which stores people tend to visit together?), price bundling, assortment decisions, cross-selling, and a host of others. It certainly can be considered an advanced form of the what-if scenario, right? If this, then that. So that's it. That is your introduction, right?
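Before we wrap up, the k-means idea from above fits in a few lines: assign every point to its nearest centroid, move each centroid to the mean of its points, and repeat. The points are invented, and the starting centroids are fixed here for reproducibility, whereas real implementations usually pick them randomly.

```python
# Minimal k-means sketch: alternate between assigning points to the nearest
# centroid and moving each centroid to the mean of its assigned points.

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            # index of the nearest centroid (squared Euclidean distance)
            nearest = min(
                range(len(centroids)),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                            + (p[1] - centroids[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster ends up empty
                centroids[i] = (
                    sum(q[0] for q in members) / len(members),
                    sum(q[1] for q in members) / len(members),
                )
    return centroids, clusters

# Two obvious blobs of 2D points; k = 2 centroids.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centroids, clusters = kmeans(points, [(0.0, 0.0), (10.0, 10.0)])
print(centroids)
```

With these points, the centroids settle on the means of the two blobs. Notice that k is chosen by us up front, which is exactly the "modify how many clusters" knob mentioned above.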
So in this lesson we have learned what machine learning models actually are. So I hope you understood that. Go back and review if you're a little fuzzy on something. But I hope you're going to be there in our next lesson. So until then, have a great day. 8. Gathering Data: All right, hello dear friends. So as you may remember, in our last lesson we talked about machine learning models. So in this lesson we're going to talk about gathering data. Alright, so let's get into it. So machine learning algorithms require huge amounts of data to function. When dealing with millions or even billions of images or records, it's really hard to pinpoint what exactly makes an algorithm perform badly. So when compiling our data, it's not enough to gather vast reams of information, feed it into your model and expect really good results. The process needs to be much more finely tuned. So in general, it's best to follow a series of iterative stages until you're satisfied with the outcome. So the process basically runs like this: select your data distributions, split the data into data sets, train the model. Selecting data distributions in machine learning: so the first step requires us to think about who will be interacting with our model and the various data that it's going to be handling as a result. So yeah, in fact, gathering data, I think, is clearly the most important step in solving any supervised machine learning problem. See, because our text classifier can only be as good as the data set it's built from. So if you don't have a specific problem that you want to solve and you're just interested in exploring text classification in general, there are plenty of open source datasets available. Now, on the other hand, if you are tackling a specific problem, you're going to need to collect the necessary data. Many organizations provide public APIs for accessing their data, for example, the Twitter API or the New York Times API.
You may even be able to leverage these for the problem you are trying to solve. Now, some important things to keep in mind when collecting data. If you're using a public API, understand the limits of that API before using whatever it is that you find. For example, some APIs set a limit on the rate at which you can make queries. The more training examples you have, the better, and this will help your model generalize better. Make sure the number of samples for every class or topic is not overly imbalanced; that is to say, you should have a comparable number of samples in each class. Also make sure that your samples adequately cover the space of possible inputs, not just the common cases. So this can best be explained with an example that illustrates what happens when we don't take this into account. So imagine that you are building an image recognition model to automatically label furniture items for an online store. To train the model, you collect a bunch of images from various manufacturers' catalogues: professional shots that share common attributes such as distances and angles. However, in production, you let users upload their own, sometimes bad, images from their phones. So there's a good chance that these are going to be low quality: blurry, badly lit, framed improperly or at unusual angles. Certainly different from how a professional photographer would frame it. So the system might perform poorly, right? You see where we're going here, because the images used for training and production came from two completely different sources or distributions. Alright, so what do you think? That's pretty clear, I think. So I hope that you understand how important the data collection process is for machine learning projects. Yes, a lot of work has to go into it. So in this lesson, we looked into the data gathering step for machine learning projects. Hope you have fun with it, and I want to see you in the next lesson. Until then, have a great day. 9.
Data Pre-Processing: Hello my dear friends. So as you remember, in our last lesson we talked about the data gathering step, right? So in this lesson we're going to talk about data preprocessing. So let's get into it. When it comes to creating a machine learning model, data preprocessing is the first step, marking the initiation of the process. So typically real-world data is incomplete, inconsistent, and often inaccurate. And what I mean by that is that it will contain errors or outliers, and it will often lack specific attribute values or trends. So data being fed into a machine learning model needs to be transformed before it can be used for training. Now on one hand, machine learning models expect their inputs in a given format, which is very often different from the format in which you find the data. On the other hand, what models do is learn by evaluating a cost function. They do so by minimizing the function's error during training. So in mathematics, you might be familiar with the term optimization problem. Well, that's what we're faced with here. And certain characteristics of the data can affect how fast a computer will find the solution, the maximum or the minimum. So some examples of data cleaning techniques used during preprocessing include normalization, clipping, or binning. So any database is a collection of data objects, right? We can also call them data samples, events, observations, or records; whatever you call them, each one of them is described with the help of different characteristics. So in data science lingo, these are called attributes or features. So first of all, we'll need to have a good look at the database and perform a data quality assessment. A random collection of data often has irrelevant bits. Then quite often you might mix together datasets that use different data formats, and you get mismatches like integer versus float or UTF-8 versus ASCII.
Now when you aggregate data from different data sets, for example from five different arrays of data for voice recognition, fields that are present in one of them could be missing from the other arrays. So let's imagine that you have data collected from two independent sources, and as a result, the gender field has two different values for women, like woman and female. So to clean this data set, you're going to have to make sure that the same name is used as the descriptor within the data set. So in this case, it can be woman. And let's talk about outliers a little bit. For example, within 200 years of daily temperature observations for New York, there were several days with very low temperatures in summer. This is why outliers can be very dangerous: they strongly influence the output of machine learning models. So usually the researchers evaluate the outliers to identify whether each particular record is the result of an error in the data collection, or indeed a unique phenomenon which should be taken into consideration for data processing. Now you might also notice that some very important values are just totally missing. So these problems will arise due to human factors, programming errors or many other reasons, but they will affect the accuracy of the prediction. So before going any further with your database, you need to do this data cleaning. So now you know why we need to do this preprocessing of data. Because by this data preprocessing, we make our database more accurate: we eliminate the incorrect or missing values that are there as a result of human factors or bugs. We certainly boost consistency, because when there are inconsistencies or duplicates in the data, it will affect the accuracy of the results. We make the database more complete as well, since we can fill in the attributes that are missing when needed. And we smooth the data, and this way we make it easier to use and interpret. So that's it for this step. I hope it's clear; I'm sure that you see it is important.
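Two of the cleaning steps above, harmonizing the "female" versus "woman" descriptors and normalizing a numeric column, can be sketched in a few lines. The field names and values here are invented for illustration.

```python
# Sketch of two preprocessing steps: harmonizing a categorical field across
# merged sources, and min-max normalizing a numeric column into [0, 1].

records = [
    {"gender": "female", "temp": 10.0},
    {"gender": "woman",  "temp": 30.0},
    {"gender": "man",    "temp": 20.0},
]

# 1) Use a single descriptor for the same category across merged sources.
for r in records:
    if r["gender"] == "female":
        r["gender"] = "woman"

# 2) Min-max normalization: rescale "temp" so it spans exactly [0, 1].
temps = [r["temp"] for r in records]
lo, hi = min(temps), max(temps)
for r in records:
    r["temp"] = (r["temp"] - lo) / (hi - lo)

print(records)
```

After this, every record uses "woman" consistently, and the temperatures become 0.0, 1.0, and 0.5. In a real pipeline you'd also handle the outlier and missing-value checks described above before normalizing.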
So in this lesson, we've learned what data preprocessing is and why we need to do it. Alright, so play around with that, and I will see you in the next lesson. Until then, have a great day. 10. Choosing The Right Algorithm and Model: Hello dear friends. So as you remember, in our last lesson we talked about data preprocessing. Now in this lesson we're going to talk about choosing the right algorithm and model for our machine learning project. So let's go ahead and get started. So until now, we've done these steps one by one, right? First, we understood the problem. Second, we collected the data which we need to solve the problem. And third, we performed data preprocessing by eliminating missing or incorrect data in order to not receive errors in the results of our project. So now we're here, and in this step we're going to choose the right algorithm and the right model for solving whatever problem we're solving. So in this step, we will need to decide what we want, because there are several elements that affect the choice of a model. One approach is to set up a machine learning pipeline that compares the performance of each algorithm on the data set using a set of carefully selected evaluation criteria. Another approach is to use the same algorithm on different subgroups of the dataset. The best solution for this is to do it once, or have a service running that does this at intervals when new data is added. Alright, so we know the different algorithm types, we know how they differ, and we know how to use them. The question now becomes when to use each of these algorithms. So to answer this question, we need to consider four aspects of the problem that we're trying to solve: the data, the accuracy, the speed, and the features and parameters. So of course, knowing our data is the first and foremost step of deciding on an algorithm. So before we start thinking about the different algorithms, we need to familiarize ourselves with our data.
A simple way to do that is to visualize the data and try to find patterns within it yourself; try to observe its behavior and, most importantly of all, its size. So we need to know critical information about our data, because this will help us to make an initial decision on the algorithm that we implement. So the first piece of critical information is the size of the data. Some algorithms perform better with larger data than others. For example, on small training datasets, algorithms with high bias and low variance will work better than low-bias, high-variance classifiers. So for small training data, naive Bayes will perform better than KNN. Next, the characteristics of the data, and what I mean by that is how our data is formed. Is our data linear? Then maybe a linear model will fit it best, such as regression (linear and logistic) or SVM, the support vector machine. However, if our data is more complex, then we'll need an algorithm like random forest. Consider also the behavior of the data. Are our features sequential, like a chain? If the data is sequential, as when we try to forecast the weather or the stock market, then it would be best if we used an algorithm that matches that, such as Markov models or decision trees. And indeed, the type of data. So we can categorize our input or output data. If our input data is labeled, then use a supervised learning algorithm. If not, it's probably an unsupervised learning problem. On the other hand, if our output data is numeric, then use regression. But if it's a set of groups, then that's a clustering problem. Alright, so now that we have studied our data and analyzed its type, characteristics and size, we'll need to ask ourselves how much accuracy matters to the problem that we're trying to solve. So the accuracy of a model refers to its ability to predict an answer, for a given observation set, that is close to the correct response for that observation set. Sometimes getting an accurate answer isn't necessary for our target application.
If an approximation is good enough, we can cut our training and processing time significantly just by choosing an appropriate approximate model. Approximate methods can also help us avoid overfitting the data, for example by using linear regression on not-so-linear data. Now, often accuracy and speed stand on opposite sides, and we need to make some trade-offs between the two when deciding on an algorithm. Higher accuracy typically means more extended training and processing times. Algorithms like naive Bayes and linear and logistic regression are easy to understand and implement, and therefore they will have fast execution. More complex algorithms like SVM, neural networks and random forests will need a much longer time to process and train on data. So which is of more value to our project: accuracy or time? If it's time, going with a simpler algorithm will be better, while if accuracy is the most important thing, then choosing a more complex algorithm will work better for your project. So the parameters of our problem are numbers that will affect how the algorithm you choose behaves. Parameters are factors such as error tolerance, the number of iterations, or options between variants of how the algorithm behaves. The time needed to train and process your data is often related to how many parameters you have; the time required to process and train a model increases exponentially with the number of parameters. However, having many parameters typically indicates that an algorithm is more flexible. So in machine learning, or data science in general, a feature is a quantifiable variable of the problem we are trying to analyze, right? So having a large number of features can slow down some algorithms, making training time much longer than it needs to be. If our problem has many features, then using an algorithm such as SVM, which is well suited to applications with a high number of features, would be the best way to go.
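The "labeled or not, numeric output or groups" rules of thumb from above can be written down as one tiny helper. This function and its category names are hypothetical simplifications for illustration, not an exhaustive decision procedure.

```python
# The rules of thumb above as a hypothetical helper: labeled data means
# supervised learning; a numeric target means regression; otherwise
# classification; unlabeled data points toward clustering.

def suggest_family(labeled, numeric_output=None):
    if not labeled:
        return "unsupervised (e.g. clustering)"
    if numeric_output:
        return "supervised regression"
    return "supervised classification"

print(suggest_family(labeled=False))                      # unsupervised (e.g. clustering)
print(suggest_family(labeled=True, numeric_output=True))  # supervised regression
print(suggest_family(labeled=True, numeric_output=False)) # supervised classification
```

In practice you'd still weigh the other three aspects (size, accuracy needs, speed) before settling on a specific algorithm within the suggested family.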
So here, let's have a look at this diagram, and this could give you a few ideas about this step. Alright, so that's it, and I hope that is clear enough for you. So we've had a look at how to choose the right algorithm and model for our machine learning project. Study it well, and I hope to see you in our next lesson. Until then, have a great day. 11. Training and Testing the Model: Hello my dear friends. So glad to see you back. And I want you to remember our last lesson, because we talked about choosing the right algorithm and model for our machine learning project. So in this lesson we're going to talk about the step where we train and test our machine learning model. So let's get started. So far, we've done these steps one by one. First, we understood the problem. Second, we collected the data which we need to solve the problem. Then third, we performed data preprocessing by eliminating missing or incorrect data in order to not receive errors in the project. And then fourth, we chose the right algorithm and model for our machine learning project. So that means we are at the step where we're going to learn what training and testing the model is and how we break it down. So for training a model, we initially split the data into three different sections, which are training data, validation data, and testing data. We train a classifier using the training data set, tune the parameters using the validation set, and then we test the performance of our classifier on an unseen test data set. An important point to note here is that during training the classifier only has access to the training and/or the validation set. The test data set must not be used during training. You get that point, right? The test set will only be available during testing of the classifier. So the training set is the material through which the computer learns how to process information. Machine learning uses algorithms to perform the training part.
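That three-way split can be sketched in a few lines. The 60/20/20 proportions below are a common convention rather than a rule, and the data is just a placeholder list.

```python
# Sketch of a train/validation/test split: shuffle once, then slice.
# 60% train, 20% validation, 20% test is a common (not mandatory) choice.
import random

def split_dataset(data, seed=42):
    data = data[:]                        # copy so the caller's list is untouched
    random.Random(seed).shuffle(data)     # fixed seed keeps the split reproducible
    n = len(data)
    n_train = int(n * 0.6)
    n_val = int(n * 0.2)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(list(range(10)))
print(len(train), len(val), len(test))  # 6 2 2
```

Note that every record lands in exactly one of the three sets, which is the point made above: the test set never overlaps the data the classifier sees during training and tuning.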
A set of data is used for learning, and this fits the parameters of the classifier. Now, cross-validation is primarily used in applied machine learning in order to estimate the skill of a machine learning model on unseen data. The validation set is a set of unseen data, held out from the training data, used to tune the parameters of a classifier, while the test set is a set of unseen data used only to assess the performance of a fully specified classifier. So once the data is divided into the three given segments, we can then start the training process. Now in a data set, a training set is implemented to build up a model, while a test or validation set is built to validate the model. Data points in the training set are excluded from the test or validation set. Usually a data set is divided into a training set and a validation set (some people say test set instead) in each iteration, or divided into a training set, a validation set, and a test set in each iteration. So the model uses whichever of the algorithms we chose in the previous step. So once the model is trained, we can then use the same trained model to predict using the testing data, that is, the unseen data. So once this is done, we can develop a confusion matrix, which is hopefully not what you've been developing this whole course. Just kidding. The confusion matrix will tell us how well our model is trained. So a confusion matrix has four parameters, which are true positives, true negatives, false positives, and false negatives. Now we prefer that we get more values in the true negatives and true positives to get a more accurate model. The size of the confusion matrix, well, that'll completely depend on the number of classes. So here, true positives means these are the cases in which we predicted true and our predicted output is correct. True negatives means these are the cases in which we predicted false and our predicted output is correct.
False positives means these are the cases where we predicted true and our predicted output is false. And false negatives are the cases in which we predicted false and our predicted output is true. So that's it. I hope that's cleared some things up for you. We have learned the step that includes the training and testing of our machine learning model and what they mean. Alright, so I hope you liked that. Learn it well, my friends, and I'll see you in the next lesson. Until then, have a great day. 12. Evaluation: Hello, my dear friends. Are you ready for a little bit of an evaluation? I know you are, because you remember in our last lesson we talked about the step that includes training and testing and what they actually mean in machine learning. So that means in this lesson we're going to talk about the step that includes the evaluation of our machine learning model. So let's get started. All right, so these are the steps we've done so far. We've understood the problem. We collected the data. We performed the data preprocessing, eliminating missing or incorrect data so that we don't receive errors, or too many anyway. Then we chose the right algorithm and model for our machine learning project. And then in the fifth step, we split the data into a training set and a testing set. And that leads us here, the final step. So it's in this step that we're going to learn how to evaluate our model. So what do we mean by evaluating, and indeed, how do we evaluate our model? So evaluating our machine learning algorithm is an essential part of any project that you encounter. Your model may give you satisfying results when evaluated using one metric, say, accuracy score, but may give poor results when evaluated against other metrics such as logarithmic loss or any other such metric. Most of the time we will use classification accuracy to measure the performance of our model. However, that too is not really enough to truly judge our model.
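The four confusion-matrix counts from the last lesson, and the accuracy computed from them, can be tallied directly. The two label lists below are invented examples.

```python
# Tally the four confusion-matrix counts (TP, TN, FP, FN) from actual vs.
# predicted labels, then compute accuracy = (TP + TN) / total samples.

def confusion_counts(actual, predicted):
    tp = sum(a and p for a, p in zip(actual, predicted))          # predicted true, correctly
    tn = sum(not a and not p for a, p in zip(actual, predicted))  # predicted false, correctly
    fp = sum(not a and p for a, p in zip(actual, predicted))      # predicted true, wrongly
    fn = sum(a and not p for a, p in zip(actual, predicted))      # predicted false, wrongly
    return tp, tn, fp, fn

actual    = [True, True, False, False, True, False]
predicted = [True, False, False, True, True, False]
tp, tn, fp, fn = confusion_counts(actual, predicted)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(tp, tn, fp, fn)  # 2 2 1 1
print(accuracy)        # 4 correct out of 6
```

A real confusion matrix for more than two classes is a full table rather than four counts, but the same "diagonal over total" idea gives the accuracy.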
So yeah, there are some different evaluation metrics that we use for evaluating our models. The first one I'll tell you about is classification accuracy. Sounds good, doesn't it? Classification accuracy is what we usually mean when we use the term accuracy. It's the ratio of the number of correct predictions to the total number of input samples. And the formula is pretty simple. Accuracy equals the number of correct predictions divided by the total number of predictions made. So it works really well only if there are an equal number of samples belonging to each class. For example, consider that there are 98% samples of class A and 2% samples of class B in our training set. Then our model can easily get 98% training accuracy by simply predicting every training sample as belonging to class A. Now if the same model is tested on a test set with 60% samples of class A and 40% samples of class B, then the test accuracy would drop down to 60%. So yeah, classification accuracy is great, but it gives us a false sense of achieving a high accuracy. So the real problem arises when the cost of misclassification of the minority class samples is very high. So if we deal with a rare but fatal disease, for instance, the cost of failing to diagnose the disease of a sick person is much higher than the cost of sending a healthy person to have more tests. And the second one I mentioned is logarithmic loss. So logarithmic loss, or log loss, works by penalizing the false classifications. And it works very well for multi-class classification. So in working with log loss, the classifier must assign a probability to each class for all the samples. So suppose there are N samples belonging to M classes; then the log loss is calculated like this. So here, y_ij indicates whether a sample i belongs to class j or not, and p_ij indicates the probability of sample i belonging to class j. So log loss has no upper bound, and it exists in the range 0 through infinity.
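Again, no coding is required here, but for the curious, the log loss formula on the slide can be sketched in a few lines of Python. The one-hot labels and the probabilities below are made-up numbers, not from any real model.

```python
import math

def log_loss(y_true_onehot, y_prob):
    """Multi-class log loss: -(1/N) * sum over i, j of y_ij * log(p_ij)."""
    n = len(y_true_onehot)
    eps = 1e-15  # clip probabilities so log(0) never occurs
    total = 0.0
    for y_row, p_row in zip(y_true_onehot, y_prob):
        for y_ij, p_ij in zip(y_row, p_row):
            p = min(max(p_ij, eps), 1 - eps)
            total += y_ij * math.log(p)
    return -total / n

# Two samples, three classes. Each row of y is a one-hot true label;
# each row of p is the classifier's predicted probabilities.
y = [[1, 0, 0], [0, 1, 0]]
p = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
print(log_loss(y, p))  # ~0.29, fairly close to 0, so accuracy is decent
```

If the classifier had put, say, 0.99 on the correct classes, the log loss would drop toward 0; a confident wrong answer would send it shooting up, which is exactly the penalizing behavior described above.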
So log loss nearer to 0 indicates higher accuracy, whereas if the log loss is far away from 0, then of course it will indicate a lower accuracy. So in general, minimizing log loss gives greater accuracy for the classifier. Makes sense. So let's dig into the confusion matrix, which is the third one. So the confusion matrix, as the name may suggest, well, it gives us a matrix as output and describes the complete performance of the model. Now, I'm sure you remember in the last lesson we talked about the four terms: true positives, true negatives, false positives, and false negatives. It's by using these terms that we can calculate the accuracy with this formula: accuracy equals true positives plus true negatives divided by the total samples. The fourth one is mean absolute error. Mean absolute error is the average of the difference between the original values and the predicted values. So it gives us a measure of how far the predictions were from the actual output. However, it doesn't give us any idea of the direction of the error, and what I mean by that is whether we are under-predicting the data or over-predicting the data. But mathematically it's represented like this. All right, very good. So that's it. Now, of course, there are a few more different evaluation metrics, but these are enough for us to delve into for now. So we've examined the last step of our machine learning workflow. So I want to see you in the next lesson, and we'll have some more fun. Until then, have a great day. 13. Neural Network: Hello my dear friends. So in this lesson we're going to talk about what neural networks are. So let's go ahead and get started. So of course, the first question might be, what are neural networks? Neural networks are a model pretty much inspired by how our human brain, well, mammal brains, brains in general, work. Similar to neurons in the brain, mathematical neurons are also intuitively connected to each other.
So they'll take inputs, like dendrites, do some simple computation, and then produce outputs, like axons. So the best way to learn something like this is to build it. So let's start off with a simple neural network and we'll solve it by hand. This will give us an idea of how the computations flow throughout a neural network before we get too complicated. Now if you look at the figure above, most of the time you will see a neural network drawn in a similar way. And as simple-looking as it is, it gives you the right idea, but the picture hides a bit of the complexity. So let's expand it out. Cool. So now let's go over each node in our graph and see what it represents. So these nodes represent our inputs, our first and second features, x1 and x2, and they define a single example we feed into the neural network. And that's why we call it the input layer. So w1 and w2 represent our weight vectors. Now in some neural network literature, they're denoted with a theta symbol. Intuitively, these dictate how much influence each of the input features should have in computing the next node. Now if you're new to all this, just think of them as playing a similar role to the slope or gradient constant in a linear equation. Weights are the main values that our neural network has to learn. So initially we will set them to random values and then let the learning algorithm of our neural network decide the best weights that will result in the correct output. So this node represents a linear function. It simply takes all the inputs coming into it and creates a linear equation, or combination, out of them. So this node takes the input and passes it through the following function, called the sigmoid function. Because of its S-shaped curve, it's also known as the logistic function. The sigmoid is one of the many activation functions that are used in neural networks. The job of an activation function is to change the input to a different range.
For example, if z is greater than 2, then sigmoid of z is approximately 1. And similarly, if z is less than minus 2, then sigmoid of z is approximately 0. So the sigmoid function squashes the output range to (0, 1). In fact, the parenthesis notation here implies exclusive boundaries, so it never completely outputs 0 or 1; the function asymptotes, but reaches very close to the boundary values. So in our above neural network, since it is the last node, it performs the function of output. You follow? So the predicted output is denoted by the letter y with a circumflex, or you might know it as y hat. So we get the logic of the neural network. It's really quite fascinating, and there's a lot more to it than that. But now let's talk about convolutional neural networks, CNN or ConvNet. So a convolutional neural network is a deep learning algorithm which can take in an input image, assign importance, that is, learnable weights and biases, to various aspects or objects in the image, and then be able to differentiate one from the other. So the preprocessing required in a CNN is much lower as compared to other classification algorithms. While in primitive methods filters are hand-engineered, with enough training, CNNs have the ability to learn these filters and characteristics. So the architecture of a CNN is analogous to that of the connectivity pattern of neurons in the human brain, and indeed was inspired by the organization of the visual cortex in particular. So individual neurons respond to stimuli only in a restricted region of the visual field, known as the receptive field. A collection of such fields overlaps to cover the entire visual area. So a CNN is able to successfully capture the spatial and temporal dependencies in an image through the application of relevant filters. The architecture performs a better fitting to the image data set due to the reduction in the number of parameters involved and the reusability of weights.
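Before we go further into CNNs, if you'd like to see the simple network we just walked through in code, here's a minimal Python sketch of a single neuron: inputs x1 and x2, weights w1 and w2, a linear combination z, and the sigmoid squashing z into (0, 1) to give y hat. The numbers are made up; in a real network the weights would start random and be learned.

```python
import math

def sigmoid(z):
    # Squashes any real z into the open interval (0, 1)
    return 1 / (1 + math.exp(-z))

def forward(x, w, b):
    # Linear combination z = w1*x1 + w2*x2 + b, then the activation
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

x = [0.5, -1.0]   # input features x1, x2 (one example)
w = [0.8, 0.3]    # weights; in training these would be learned
b = 0.1           # bias term
y_hat = forward(x, w, b)  # the predicted output
print(round(y_hat, 3))
```

You can also check the squashing behavior the lesson mentioned: sigmoid of a large positive z comes out very close to 1, and a large negative z comes out very close to 0, without ever quite touching the boundaries.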
So in other words, the network can be trained to understand the sophistication of the image better. So CNN image classification can take an input image, process it, and classify it under certain categories. So let's break it down even further. Computers will see an input image as an array of pixels, and it depends on the image resolution, of course. Based on the image resolution, it will see height times width times dimension. For example, an image might be a 6 by 6 by 3 array, a matrix of RGB, where 3 refers to the RGB values, or a 4 by 4 by 1 array, a matrix of a grayscale image. So technically, to train and test deep learning CNN models, each input image will be passed through a series of convolutional layers with filters or kernels, then pooling and fully connected (FC) layers, and a softmax function is applied to classify the object with probabilistic values between 0 and 1. So what you're looking at in this figure is the complete flow of a CNN to process an input image and classify the objects based on values. Alright, so that's it for that. I hope you know now what we mean by neural networks and CNNs, and what they do, and I hope they're now useful to you. So yeah, we've gone over neural network and CNN concepts in machine learning. Now we're going to do something else in the next lesson and use all of our knowledge. So until then, have a great day. 14. Amazon Face Rekognition: Hello my dear friends. So in this lesson we're gonna talk about Amazon's facial recognition service, Amazon Rekognition. So let's get into it. So first, of course, we'll need to know a little bit about facial recognition and how it works. Let's go to Wikipedia. A facial recognition system is technology that's capable of matching a human face from a digital image or a video frame against a database of faces. Researchers are currently developing multiple methods in which facial recognition systems work.
The most advanced face recognition method, which is also employed to authenticate users through ID verification services, works by pinpointing and measuring facial features from a given image. So while initially a form of computer application, facial recognition systems have seen wider use in recent times on smartphones and in other forms of technology, such as robotics. Because computerized facial recognition involves the measurement of a human's physiological characteristics, facial recognition systems are categorized as biometrics. Although the accuracy of facial recognition systems as a biometric technology is lower than iris recognition and fingerprint recognition, for instance, it's certainly widely adopted due to its contactless and non-invasive process. So facial recognition systems have been deployed in advanced human-computer interaction, as well as video surveillance and automatic indexing of images. So how does it work? Well, an up-to-date face recognition system starts from a digital image in high definition, HD or UHD. First, face detection programs find the human faces in the image and normalize the frames and faces. That is to say, it makes the images look as though they were taken from the front as much as possible, resizes them, and adjusts the light and contrast. Then artificial intelligence software comes into play and creates a fingerprint-like face print. This consists of up to 120 measurements, covering the dimensions of the eyebrows, eyes, nose, mouth, chin, ears, et cetera, and the distances between each other. So in the last stage, machine learning software, which works with artificial neural networks, compares the determined face print with the face prints in the database, and then finds the most similar one to that face. Thus, the identity of the owner of a face in the image is determined. Now facial recognition methods have some differences depending on the application and the manufacturer.
However, they generally have a mechanism consisting of several steps. And these include detection, which is when the face recognition system is connected to the video surveillance system, let's say. The software will scan the image to detect the face images in the camera's field of view. It'll record each face-like image, and then it'll send it to the face processing system. The system then estimates the position of the face and the direction and size of the head. Then, in order for the face to be detected as a face by the system, it must be turned at an angle of no more than about 35 degrees to the camera. Normalization: the necessary procedures are performed in order to record and map the detected image in a suitable position and size. This is called normalization. The software then extracts the geometry of the face to outline its features, so that means the distance between the eyes, the thickness of the lips, the distance between the chin and forehead, and so on and so forth. The information resulting from this process is called a face signature. Representation: after the facial signature is extracted, the system then converts it into a unique code. So this code simplifies the comparisons between the acquired data and previously saved data. Matching: it's at this stage where newly obtained data and previously recorded data are compared. So if a match is found with one of the images in the database, the software extracts the details of the matching face and informs the user. For example, the face recognition system used by a popular phone manufacturer works like this. For this, the phone asks the user to look at the phone and illuminates the face with an infrared light beam, and then sends about 30 thousand light points to the face with an infrared laser. The infrared camera of the phone then detects the three-dimensional image, and a specially developed artificial intelligence (neural network) engine processes it on the chip.
Thus, the identity of the user is determined instantly and with great accuracy. So this and similar technologies are now being used in many different phone brands and models. So if that clears a few things up for you, now we can get into the service called Amazon Rekognition. Alright, so what is Amazon Rekognition? According to Amazon, this service adds image and video analysis to your applications using proven, highly scalable deep learning technology that requires no machine-learning expertise to use. So with the help of Amazon Rekognition, we can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content. So Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that we can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases. With Amazon Rekognition Custom Labels, you can identify the objects and scenes in images that are specific to your business needs. For example, you can build a model to classify specific machine parts on your assembly line or to detect unhealthy plants. Amazon Rekognition Custom Labels takes care of the heavy lifting of model development for you, so no machine learning experience is required. You simply need to supply images of the objects or scenes that you want to identify, and this service handles the rest. Amazon Rekognition provides two API sets. We use Amazon Rekognition Image for analyzing images and Amazon Rekognition Video for analyzing videos. So both APIs analyze images and videos to provide insights we can use in our applications. For example, we could use Amazon Rekognition Image to enhance the customer experience for a photo management application. When a customer uploads a photo, for instance, our application can use Amazon Rekognition Image to detect real-world objects or faces in the image.
Then, after our application stores the information returned from Amazon Rekognition Image, the user could query their photo collection for photos with a specific object or face. Deeper querying is possible. For example, the user could query for faces that are smiling, or query for faces that are a certain age. So we can use Amazon Rekognition Video to track the path of people in a stored video. Alternatively, we can use Amazon Rekognition Video to search a streaming video for persons whose facial descriptions match facial descriptions already stored by Amazon Rekognition. Now, the types of analysis that the Amazon Rekognition Image API and Amazon Rekognition Video API can perform, I'll outline here: labels, custom labels, faces, face search, people pathing, celebrities, text detection, and inappropriate or offensive content. So let's examine it more closely. So first let's open up our browser, type in Amazon Rekognition, and click on the link. Alright, so here's the official page. Click Get Started with Amazon Rekognition. And now on this page we'll need to create an AWS account, and we'll just fill in the blanks. And then on this page we need to specify our account type and enter our personal details. Now we'll need to add our credit card details and cell phone number. That way, Amazon can send a verification code, and when we get the SMS, we type the code in here. Alright, so of course, right at the moment we're on the free tier, but we're a member now and that's what's important. So if you're ready, let's get into it. So let's open up the AWS Management Console and type IAM in the search bar. By the way, IAM just means Identity and Access Management. Alright, so here we can manage our roles, groups, policies, and account settings. We could also see our access reports. Click on Roles for now. So here we can see our roles and these roles' last activities. So let's create a new role. So on this screen we'll choose the Lambda option and click Next: Permissions.
So now we can choose a policy. We'll choose the Lambda execute policy. Now we need one more policy here: Rekognition full access. And now click Next: Tags. This part, as you can see, is optional, so we can skip it. And now we need to type in the role name. You can type in whatever you want. Then click Create Role. Alright, so that makes our role ready. Let's continue. Click Services and choose Lambda. And here we can create our own functions. So let's create one; click Create Function. And here we can type in our function name. Now here we'll need to choose our language to use, so I'll choose Python. And now we'll need to choose the execution role. So we'll choose an existing role, right, and select our role. And click Create Function. So it's here on this screen that we get a designer section and a code section, and we can add our own code. So I'll add one just for an example. And when we click Deploy, our code is deployed. And now I can click Test. So for this we'll need to configure a test event. So let's go ahead and do that. Here we need to choose a template and an event name. Type your event name and click Create. So now we can test it, and the execution succeeds. So let's have a look at the logs. Here we can see the labels, confidence, and other stuff. Very cool. So that's the code section. If you want to, we can see the application section too, so I'm gonna do it anyway. So click Services and type Rekognition. Choose Amazon Rekognition. Now on this page, just choose Try Demo. All right, that lets us in, and we can do some specific things here: object and scene detection, image moderation, facial analysis, celebrity recognition, face comparison, text in image, personal protective equipment detection, and video analysis. So let's choose object and scene detection. So with this, we can upload our own images. So let's use our image again. Click Upload and choose an image. And here are the results. Now, there's a car here and a truck and some people.
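Since the lesson doesn't show the Lambda code itself, here's a rough Python sketch of what such a function might look like, assuming the role we created and an image already uploaded to an S3 bucket. The bucket name, the file name, and the `extract_labels` helper are my own made-up examples for illustration, not anything Amazon provides; the `detect_labels` call itself is the real Rekognition Image API.

```python
def extract_labels(response, min_confidence=80.0):
    """Keep only label names whose confidence clears the threshold.

    Works on the dictionary shape that detect_labels returns, so you
    can also try it on a canned response without calling AWS at all.
    """
    return [label["Name"]
            for label in response.get("Labels", [])
            if label["Confidence"] >= min_confidence]

def lambda_handler(event, context):
    # boto3 is available by default in the AWS Lambda Python runtime.
    import boto3
    rekognition = boto3.client("rekognition")
    # Hypothetical bucket and key; point these at your own S3 object.
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": "my-example-bucket",
                            "Name": "street-scene.jpg"}},
        MaxLabels=10,
    )
    return {"labels": extract_labels(response)}
```

That filtered list is roughly what we saw in the logs a moment ago: labels like Car, Truck, and Person, each with a confidence score attached.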
Alright, so that's it. It's a quick look at Amazon Rekognition. It was pretty quick, but you got a look, and that's how to run it. And if you want to, you can, of course, go back and check on some of the other things, like facial analysis or text in image, whatever it is. It's actually quite fun. So I hope you enjoy it. So we've tried to learn here what facial recognition is and how it works, and then we took a look into Amazon Rekognition and some of what we can do with it. Now, I'll see you in the next lesson; we're gonna get into something else. Until then, have a great day. 15. Clarifai: Hello my dear friends. So in this lesson we're gonna talk about Clarifai's demographics API. Now according to Clarifai, their Predict API works like this: the Predict API returns the coordinate location of the bounding box for each detected human face, and a list of probability scores on the person's age and gender appearance and their classifications. The age appearance basically starts at very young and goes to more than 70. And the gender appearance, they basically break it down into male and female, and for multicultural appearance they have a variety of cultures. Now the returned bounding box values, these are the coordinates of the box outlining each face within the image. They are specified as float values between 0 and 1, relative to the image size. So the top-left coordinate of the image is (0.0, 0.0), and the bottom right of the image is (1.0, 1.0). So if the original image size is 500 by 333, then the box above corresponds to the box with the top-left corner at 208 x and 83 y, and the bottom-right corner at 175 x and 139 y. Note, please, that if the image is rescaled by the same amount in x and y, then the box coordinates remain the same. And to convert back to pixel values, multiply by the image width for left_col and right_col, and by the image height for top_row and bottom_row. All right, so let's see how it works. So first we'll open up our browser and type Clarifai in the search bar. Click on the URL.
Click on the URL. And here we are at clarifies website. Click the Developers tab, select model Gallery. Alright, so we get lots of models here. We'll choose a demographic model. And here's a demographic model. So let's try it out. On this photo. You can look at it yourself and you can see that the model will show us some information about the woman's appearance. It says the probability is 62% that she's 23 years old and 47% sure that she is black or African-American. Actually, there's an 11% chance that she's Asian. Now here in this photo is contains more people. So let's have a look. So this man is possibly Black or African American, and that probability is 0.842 is age 39, but that probability is 0.430. So I guess this man is a tough guess, is 36 and the probability is 0.613. And he's 30% Hispanic, Latino or of Spanish origin, 29% white, 10% Asian. So if you want to, you can try it on your own photo. I guess you get the point though. The side also gives us a chance to develop our own models. So if you want to, you can sign up for free. Let's try it out. So here we sign up. We enter our information here and except the privacy policy in terms of US then signed up. So we create our own application, right? Let's try it out. So I've got a bunch of images in a data set. Its name is Yale face data set. Now of course, I already shared this data set with you. Yes. Yes. Click Create application, which is the button on the top right of the screen. So here we've gotta give an app ID and right example, click Create. Now, the screen shows us the detail of our application. So here we can add our data set with the help of this button and we can see our application workflows. So we've got three workflows here. User provided by clarify, by myself. And of course, we can create a new workflow by ourselves. Alright, so here in this section we can customize our models and create our own models. Here in this section, we can delete our data inputs, models, and applications. 
And here is where we can change the base workflow. So let's give it a shot. First, we can add our data set here with this button. On the screen, we'll add our images with browse files, enter URL, or drag-and-drop. So Clarifai gives us three options. I'm going to click on browse files and choose all the images. Alright, so that's done. So now let's look at our results. So click the Explorer button here, and some of our images are shown to us. So let's click on one of them. Now we can see the results for this photo. It's sorted by value; if you want to, you can change it to be sorted alphabetically. But let's examine the results: fine-looking, 0.92; man, 0.99; guy, 0.79; adult, 0.998. And actually these are pretty good results. So I'll choose another photo. And there's a lady in this one, so let's check it out: adult, 0.998; woman, 0.96; portrait, 1.00; happiness, 0.76. Alright, it's so much fun, isn't it? Alright, so that's it. I hope you enjoyed this video. So we've tried to get into the Clarifai Predict model a little bit, but we certainly created our own, and we can use our own application with the help of Clarifai. Now of course, that's not all there is to it. It's just a small drop in the bucket, really. You can try some very different data, and of course you can create your own models and workflows. So go for it. Don't be afraid to try it any which way; you can have a lot of fun with it. Alright, so this is the course where we have learned what machine learning is and how it works, and this is the end of our machine learning guide course. So I want to thank you for your attention and patience and practice. I wish you much success in your machine learning adventure. And remember, every end is just a new beginning. So I hope the end of this course will be the beginning of a brand new opportunity for you. And may it blossom. So I'll see you in another course. Until then, have a great day.