Transcripts
1. Welcome to the course!: Hello and welcome to this hands-on course on TensorFlow. I'm a Data Scientist, and I'll be your instructor for this course. TensorFlow is an end-to-end open source machine learning platform where we can develop and train machine learning models. Google recently came out with TensorFlow version 2, and it is now one of the easiest to learn, most powerful machine learning libraries out there. In this course, you will gain in-depth knowledge about how to build and train machine learning models using TensorFlow. This course is aimed at beginners, intermediates, as well as experts alike. So if you fall within any of these three categories, feel free to join us. You will be learning how to create tensors and perform tensor operations, build a linear regression model from scratch as well as with the Sequential model from Keras, build a logistic regression model as well as a deep neural network classifier, and much more. So if you're ready to learn TensorFlow using a hands-on approach, let us hop onto the course and start learning. See you there.
2. Installing and Importing TensorFlow: Hello and welcome to this lesson. In this lesson, we'll be learning how to install and import TensorFlow in Python. TensorFlow is an open source library that is mostly used for building and training machine learning models, including deep neural networks. I will be using Jupyter Notebook as an IDE for this course, but you can use any other IDE that you are comfortable with. So let's install TensorFlow using pip. First, make sure to have a recent version of Python installed, above version 3.5. As far as I remember, TensorFlow only supports up to version 3.8, and Python 3.9 also came out recently. So make sure that you have a version between 3.5 and 3.8 for TensorFlow to be installed. Then open up your Command Prompt if you're on Windows, or open up your terminal if you're on Linux or Mac, and write the following command: pip install tensorflow. This will install TensorFlow for both CPU and GPU use. So let's go and install it right now. I have my Command Prompt open over here, and I write pip install tensorflow. Remember that Python should be installed on your system before writing this command. So if I write this and press Enter, this will take some time, but once it is finished, TensorFlow will be successfully installed. There, we have successfully installed TensorFlow 2.3.1. I'll just scroll a little bit. So there we have it, and here 2.3.1 denotes the version of the TensorFlow library that we have just installed. So let's go back to the Jupyter notebook and check if we have successfully installed TensorFlow or not by importing it in Python. Okay, now let's import TensorFlow in Python. So here we have the line import tensorflow as tf, and this is just a simple import convention where we're importing TensorFlow as tf in Python. So I'll run this, and since there are no errors, I've successfully imported TensorFlow as tf in Python. Now let's finally check the version of the installed TensorFlow library.
So we can do that by accessing the __version__ property of the TensorFlow library. So if I print it out, we get the version as 2.3.1, and we indeed had seen earlier that our version was 2.3.1. And one thing I have to mention is that the tutorials for this course work for TensorFlow versions greater than 2. So if your TensorFlow version is greater than 2.0.0, then you are now ready to build deep neural networks using TensorFlow. So now let's work on our first machine learning project. And I'll see you in the next lesson.
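The import-and-version check described above can be sketched like this (a minimal sketch, assuming TensorFlow 2.x has already been installed with pip install tensorflow):

```python
import tensorflow as tf

# __version__ holds the installed version string, e.g. "2.3.1".
print(tf.__version__)

# The lessons assume TensorFlow 2.x, so a quick sanity check:
major = int(tf.__version__.split(".")[0])
assert major >= 2, "These lessons require TensorFlow 2.0.0 or newer."
```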
3. Scalars and Tensors: Hello and welcome to this section on scalars and tensors. A scalar is a real number that can be expressed using a quantitative value, and is mostly used along with some unit of measurement. So for example, ten kilograms and five feet have scalar values of 10 and 5 respectively. Similarly, a tensor, or tensors, are multi-dimensional arrays with a uniform datatype. The name TensorFlow is derived from the notion that data is stored and flows in a neural network using tensors. The rank of a tensor determines the number of dimensions of the array. So a scalar is a rank 0 tensor, since it has magnitude but no dimensionality. Here, 10 and 5 are both rank 0 tensors. On the other hand, a vector is a rank 1 tensor since it has a single dimension; it is also called a 1D tensor. Here we have an array with 1, 2, 3, 4 as elements, and this forms a vector, which is a 1D tensor. Similarly, a matrix is a rank 2 tensor since it has two dimensions; it is also called a 2D tensor. Here we have a matrix of values 1, 2, 3, 4, making up a 2D tensor with two rows and two columns of data. In this section of the course, we'll be learning how to create different kinds of tensors. We will also be learning how to perform various operations using tensors. So let us start by learning how to create scalars and tensors in the next lesson. See you there.
4. Introduction to Tensors: Hello and welcome to this lesson. In this lesson, we'll be introduced to tensors and their basic properties in TensorFlow. As we have studied, tensors are multidimensional arrays with a uniform datatype. To create tensors, we'll need the TensorFlow library, so let us start this lesson by importing it. Here I'm importing the TensorFlow library as tf in Python. So I'll run this line of code and make the import happen. Great. Now, let us learn the basics of a tensor by creating a scalar, which is a rank 0 tensor. We'll be using the constant method of the TensorFlow library for this. So here we have the TensorFlow library as tf, and we're calling the constant method off of it. Then we're passing in a numerical value, which is 3 here. This will make a scalar, or a rank 0 tensor. Finally, we're printing it out by wrapping it with the print statement. So if I run this, we get our tensor. So here we have the value as 3; the shape is empty since a scalar does not have any dimensions; and then we have the data type, or dtype, as int32 because this is an integer. Furthermore, we can also assign tensors to Python variables. So here we are creating the tensor as before, and we're assigning it to a Python variable called scalar. Finally, we're printing out the scalar variable. So if I run this, we get the same output as before. Also, we can change the data type of the tensor. So when we call the constant method, we are passing in the value, but we can also pass in the datatype we want for the value. This means that we are specifying that we want the datatype to be float64. So if I run this, and remember, we're assigning it to the scalar variable again and printing the scalar variable, we get a tensor with value 3.0 because we had specified it as a float. And again, the shape is empty or null, and the data type is float64, as we had written over here.
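The scalar examples above can be reproduced with a short snippet like the following (the exact printed repr may vary slightly between TensorFlow versions):

```python
import tensorflow as tf

# A rank-0 (scalar) tensor; the dtype is inferred as int32.
scalar = tf.constant(3)
print(scalar)            # shape () and dtype int32

# The dtype can also be set explicitly, e.g. to a 64-bit float.
scalar_f = tf.constant(3, dtype=tf.float64)
print(scalar_f)          # value 3.0, shape (), dtype float64
```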
So in this way, you can also change the data type of a tensor by using this property. The shape of a tensor can be obtained using the shape property. So instead of printing out the entire tensor, we can just get its shape. For this, we can call the shape property off of the tensor, which is scalar over here. And if I run this, we get an empty tuple printed out, because the shape is null, and this is because the dimensionality is 0. Next, the data type of the tensor can be obtained using the dtype property. So we're calling the dtype property off of the scalar tensor, which is this tensor over here. So if we look at the datatype, it should be float64. So I'll run this, and we have the datatype as float64. Next, the data held by the tensor can be obtained using the numpy method. So if we call the numpy method off of the tensor, we'll get the data, which is 3. Now, let us create a 1D, 2D and 3D tensor using TensorFlow. A 1D tensor has a single dimension or axis and is a vector. So we're using the constant method again from the TensorFlow library, but instead of passing a single numeric value, we're passing in a list, which is a vector. And we're assigning it to the variable rank_one_tensor and printing it out. So if I run these lines of code, we get the following tensor. So here we have the data as a list, which is a vector, and we have the shape as (3,), and this is because we have three elements over here. And the dtype is int32 because all three of these data values are integers. We can again specify the datatype as tf.float64, and we'll get a tensor with floating point values and the data type as float64. Next, a 2D tensor has two dimensions or axes and is a matrix. So here we're defining a 2D tensor by calling the constant method and passing in a list of lists. So over here, we have an outer list, and we have the elements as lists. So here it is: the first list, the second list, and the third list.
Once we've created this tensor, we're assigning it to the variable rank_two_tensor, and we are printing it out over here. So let's run these lines of code and see the output. Okay, we have our 2D tensor over here. And as you can see, the shape is (3, 2). This is because the number of rows of the matrix is three, and we have two columns of data, so it shows 2 over here. And if we look at the list itself, we can see that there are three elements, and this is why there are three rows; and we have two columns over here: this is the first column, and this is the second column. Okay? And we have the data type as int32. Finally, a 3D tensor has three dimensions or axes. And this may look confusing at first, but let's break it down one by one. So we have an outer list over here. Then we have an inner list containing two lists over here. Again, the other element is an inner list consisting of two lists. So if we print this rank 3 tensor, we'll get the following output. And let's look at the shape. The shape is (2, 2, 3). So let's break down what is happening over here. For this, let us start by looking at the outer list. So here we have the outer list. The outer list contains two elements, this list and this list, so the shape starts with 2. Next, we have two elements inside of this list, so this list has two elements. Similarly, this list also has two elements, and therefore the next shape value is 2. Finally, we have 3 over here because each inner list contains three elements. So in this way, we have created our 3D tensor. Keep in mind that you can create an n-dimensional tensor using the constant method, but since it will get quite tough to understand what is happening, let's stop at 3D for now. So in this way, we can create tensors of any dimensions that we want. And in the next lesson, we'll learn some other ways to create tensors. See you there.
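Putting the lesson together, here is a minimal sketch of the 1D, 2D, and 3D tensors discussed above, along with their shapes:

```python
import tensorflow as tf

# Rank 1 (vector): a single axis with three elements, shape (3,).
rank_one = tf.constant([1, 2, 3])

# Rank 2 (matrix): three rows and two columns, shape (3, 2).
rank_two = tf.constant([[1, 2], [3, 4], [5, 6]])

# Rank 3: two blocks, each holding two lists of three elements, shape (2, 2, 3).
rank_three = tf.constant([[[1, 2, 3], [4, 5, 6]],
                          [[7, 8, 9], [10, 11, 12]]])

print(rank_one.shape, rank_two.shape, rank_three.shape)
```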
5. Different ways to create Tensors in TensorFlow: Hello and welcome to this lesson. In this lesson, we'll be learning the different ways to create tensors in TensorFlow. The TensorFlow library provides us with handy functions that allow us to create tensors without ever having to specify the values one by one, as we did in the previous lesson. So let us start this lesson by importing the library in Python. Over here, I'm importing the TensorFlow library as tf. So I'll run this line of code and make the import happen. The first method that we are about to look at is the ones method. The ones method creates a tensor with all the elements as one. So here I'm calling the ones method off of the TensorFlow library, and we're specifying the shape of the tensor that we want to create. So if I print this out, we can see the output, and we have a two-by-two tensor with all elements as one. So if I play around with the shape, let's say maybe I'll add another two over here, we should have a two by two by two tensor, which is a 3D tensor. So if I run this, we have a 3D tensor with shape two by two by two. And in this way, we can use the ones method to create a tensor with all elements as one. Similarly, the zeros method creates a tensor with all elements as zeros, with the shape defined as a parameter. So if we run this, we get a 2D tensor with all the elements as 0, and the shape is (2, 2). Then one other thing I forgot to mention is that you can look at the datatype, and it is float32 by default. So when creating a tensor using the ones or zeros method, we get all the elements as float32. Next, let us look at the eye method. The eye method is used to create an identity matrix. So we're specifying the number of rows and the number of columns for the identity matrix. So if I run this line of code, we can see an identity matrix with the diagonal elements as one and other elements as 0, and the shape is three by three. Next, we have the fill method.
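A minimal sketch of the ones, zeros, and eye methods covered so far:

```python
import tensorflow as tf

ones = tf.ones(shape=(2, 2))    # every element is 1.0 (float32 by default)
zeros = tf.zeros(shape=(2, 2))  # every element is 0.0
identity = tf.eye(3, 3)         # 3x3 identity matrix: ones on the diagonal

print(ones.numpy())
print(zeros.numpy())
print(identity.numpy())
```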
The fill method can be used to create a tensor filled with a scalar value. So we have the fill method over here being called from the TensorFlow library, and we're specifying the shape of the tensor that we want, but we're also specifying the value that we want to fill the tensor with. So in contrast to the zeros and ones methods, we can actually specify the value that we want to fill the tensor with. So if I run this, we get a two by two tensor with all the values filled as 4. Now, the linspace method is used to create a 1D tensor with an evenly spaced sequence of numbers. So here we have the linspace method, and we're specifying the starting value, the stopping value, and the number of elements that we want to generate, which is ten. So if I print this out, we can see the output. Here we have a tensor which is a 1D vector, or 1D tensor, consisting of ten elements, 0 through 9. So if I change this ten to five, that means we want to generate five numbers evenly spaced between 0 and 9, and we'll get the following output. So we have a difference of 2.25 between each element: 0, 2.25, 4.5, 6.75, 9. So we have a total of five elements. Next, we can use the range method in order to create a 1D tensor with the numbers in the range provided. Here, we're using the range method of the TensorFlow library, and we're specifying three parameters. The first parameter is the starting value, as we had in linspace. Then we have the stopping value. And finally, we have the step that we want to take. So I'll run this, and we can see the output. So here the starting value is 0 and the step is 2, so we're getting a tensor with an increment of 2: 0, 2, 4, 6, 8. And since the next number is 10 and 9 is the stopping value, we're not getting the 10 over here. Another thing: if I change the step to 3 and run this, we get a tensor with values 0, 3, 6, and this is an increment of three. So here we have to notice that the stopping value doesn't get included when creating the tensor.
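The fill, linspace, and range examples above can be sketched as follows:

```python
import tensorflow as tf

# fill: a (2, 2) tensor where every element is the scalar 4.
filled = tf.fill((2, 2), 4)

# linspace: five evenly spaced values from 0 to 9, both endpoints included.
spaced = tf.linspace(0.0, 9.0, 5)   # 0.0, 2.25, 4.5, 6.75, 9.0

# range: from 0 up to (but excluding) the stop value 9, in steps of 3.
stepped = tf.range(0, 9, 3)         # 0, 3, 6

print(filled.numpy())
print(spaced.numpy())
print(stepped.numpy())
```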
So if you ever use the starting and stopping values, always remember that the stopping value will always be excluded, which means we will have to specify one value past it if we want to include that value as well. That means if I put 10 over here and run this, we get 0, 3, 6, 9, and the values are four in total. Finally, we can create a tensor using the Variable method in order to reassign its values if needed. So here we're using the constant method from the TensorFlow library, and we're specifying the list of elements as its values. So this is a 1D tensor. Then we're creating a variable off of it by calling the Variable method from the TensorFlow library. Notice that we're using a capital V and not a small v. So when we pass the 1D tensor to the Variable method, it becomes a variable. And if I assign it to a Python variable called x and run it, we get the variable in the output instead of a tensor. Next, we can assign new values using the assign method. So here we have the variable, and we're using the assign method off of it, and we're passing in the input, or the parameter, as our 1D tensor with elements 4, 5, 6. Now, if I print this, we get a variable with the numpy data as 4, 5, 6. And there you have it. This is it for this lesson on the different ways to create tensors in TensorFlow.
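The Variable and assign steps above can be sketched as:

```python
import tensorflow as tf

# Wrapping a constant tensor in tf.Variable (capital V) makes it reassignable.
x = tf.Variable(tf.constant([1, 2, 3]))
print(x.numpy())    # [1 2 3]

# assign replaces the variable's contents in place.
x.assign([4, 5, 6])
print(x.numpy())    # [4 5 6]
```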
6. Perform Tensor operations in TensorFlow: Hello and welcome to this lesson, where we will be performing various tensor operations in TensorFlow. So let us start this lesson by importing the TensorFlow library in Python. Here, we're importing TensorFlow as tf in Python. So I'll run this line of code and make the import happen. Great. Now, for the purpose of this lesson, let us create two 1D tensors. So here I'm creating two variables, t1 and t2, and I'm assigning them with tensors, which are 1D tensors of values 10, 11, 12, 13 and of values 1, 2, 3, 4. So we have 1D tensors over here assigned to variables t1 and t2 respectively. So if I run both of these lines of code, the assignment is done. Now we're going to perform our first tensor operation. We can perform element-wise addition by using the add method of the TensorFlow library. So I'm calling the add method off of the TensorFlow library, and I'm passing in the parameters as t1 and t2. So if I run this line of code, we should get a tensor which is the result of the addition of both of these tensors in an element-wise manner. So 10 plus 1 is 11, 11 plus 2 is 13, 12 plus 3 is 15, 13 plus 4 is 17. We can also do the same by using the plus operator. So if I run this, we get the same result. We can also perform element-wise subtraction by using the subtract method. So here I'm calling the subtract method off of the TensorFlow library, and I'm passing in the parameters as t1 and t2, which are the two tensors. So if I print the output, we get the resulting tensor as 9, 9, 9, 9. And this is because 10 minus 1 is 9, 11 minus 2 is 9, 12 minus 3 is 9, 13 minus 4 is 9. And one thing to take care about over here is that we have to make sure the tensors that we put over here are in the right order. So if I put t2 comma t1, the output will be different. So if I run this, we get minus 9, minus 9, minus 9, minus 9, because right now we're subtracting t1 from t2 instead of the other way around.
So I'll just change this back to what it was. We can also do subtraction using the minus operator. So if we run this, we get the same result. Next, let us perform element-wise multiplication using the multiply method. The multiply method can be called off of the TensorFlow library, and we can pass in the two parameters as our 1D tensors. So here we have t1 and t2. So if I run this, we get the multiplied resulting tensor as this. So here we have the values 10, 22, 36, 52. And if we look at the tensors, 10 into 1 is 10, 11 into 2 is 22, 12 into 3 is 36, 13 into 4 is 52. The same can be done using the asterisk operator. So if I run this, we get the same result. And finally, we have the divide method. So for the divide method, I'm creating two new tensors, and I'm assigning them to the variables t1 and t2 again. So the first tensor, t1, has the values 2, 4, 6, 8 and is a 1D tensor. And similarly, the t2 variable has the values 1, 2, 3, 4 and is a 1D tensor again. And if I run this, both of these assignments will happen. So finally, we're calling the divide method from the TensorFlow library, and we pass in the parameters as t2 comma t1. We're dividing t2 by t1. So if I run this, we get the output as 0.5, 0.5, 0.5, 0.5. And we can also switch these parameters if we want to do so. So if I switch them and press Enter, we get the output as 2, 2, 2, 2. And if we look at t1 divided by t2, we get 2 divided by 1 is 2, 4 divided by 2 is 2, 6 divided by 3 is 2, and 8 divided by 4 is 2. We can also do the same using the slash operator. So if I run this, we get the same result. Now, we can square the values of a tensor by using the square method. So here I'm calling the square method from the TensorFlow library, and I'm passing in one of the tensors, which is t1 over here. So if I run this, we get the output as 4, 16, 36, 64. And if we look at the numbers, the square of 2 is 4, the square of 4 is 16, the square of 6 is 36, and the square of 8 is 64.
And we can do the same for the t2 tensor as well. So if I run this, we get the squared result as 1, 4, 9, 16. And if we look at the two, the square of 1 is 1, the square of 2 is 4, the square of 3 is 9, and the square of 4 is 16. So this is absolutely correct. Now, we can also find the mean of a tensor by using the reduce_mean method. The reduce_mean method is also called off of the TensorFlow library, and we pass in the parameter as our tensor. So when we pass in t1, we are expecting to get the mean of the t1 tensor. So over here, we have the t1 tensor, which is 2, 4, 6, 8, and the mean should be five. And if you don't know how to calculate the mean, it is simply done by taking the sum of all of these elements, 2 plus 4 plus 6 plus 8, and dividing it by the total number of elements, which is four. And this will give us five. So over here, if I run this, we get five. Similarly, we can find the mean of the t2 tensor, and if I run this, we get it as 2. One thing I have to mention over here is that when we look at the t2 tensor, the values are 1, 2, 3, 4, so the mean over here is 2.5, but we're getting the mean as 2 since the datatype is defined as an integer. So remember this when you're calling the reduce_mean function and getting the mean of a tensor. Finally, let us chain multiple tensor operations and see the result. So over here, I'm adding the mean of tensor one with t1, then I'm subtracting t1 from it again. So here we're doing element-wise addition. So this means that the values of t1 should be added with five, then subtracted with the values of t1. So if I run this, we should get the output as 5, 5, 5, 5. This is because t1 and t1 are being subtracted from each other, but the mean is being added, and hence we performed a chained tensor operation. So there you have it. In this way, we can perform various tensor operations in TensorFlow.
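The element-wise operations walked through in this lesson can be collected into one sketch:

```python
import tensorflow as tf

t1 = tf.constant([10, 11, 12, 13])
t2 = tf.constant([1, 2, 3, 4])

print(tf.add(t1, t2).numpy())        # [11 13 15 17], same as (t1 + t2)
print(tf.subtract(t1, t2).numpy())   # [9 9 9 9], same as (t1 - t2)
print(tf.multiply(t1, t2).numpy())   # [10 22 36 52], same as (t1 * t2)
print(tf.square(t2).numpy())         # [ 1  4  9 16]

# With an integer dtype, reduce_mean truncates: mean of 1, 2, 3, 4 is 2, not 2.5.
print(tf.reduce_mean(t2).numpy())    # 2

# divide returns floating point values.
a = tf.constant([2, 4, 6, 8])
b = tf.constant([1, 2, 3, 4])
print(tf.divide(a, b).numpy())       # [2. 2. 2. 2.]

# A chained operation: the two a's cancel, leaving only the mean, 5.
print((tf.reduce_mean(a) + a - a).numpy())   # [5 5 5 5]
```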
7. Linear Regression from Scratch using TensorFlow [The Mindset]: Welcome to this machine learning project on building a linear regression model from scratch using TensorFlow. Keep in mind that you will need to have a basic knowledge about linear regression to effectively grasp all the concepts explained in this project. So our linear regression model will take in a single feature as an input and output a prediction. So our mathematical formula is y hat equals wx plus b, where y hat will be our prediction, w will be our weight, x will be our input, and b will be our bias. The workflow for this project is as follows. We will first prepare the data, and then we'll randomly initialize the weights and bias. Next, we'll build a model architecture and iteratively update the weights and train the model using gradient descent. We'll finally see the predictions of the model and evaluate whether we have made an accurate linear regression model or not. Our dataset for this project will be self-made, but you can use your own dataset if you like. We will have two different variables, x and y, consisting of six data points each. The x variable is our input, and the data points are 1, 2, 3, 4, 5, 6. The y variable is the dependent variable, or the target that we are wanting to predict. So the data points are 100, 200, 300, 400, 500 and 600. Our goal is to build and train a linear regression model to predict the values of y given x. So our predictions will be denoted by y hat. So let's start coding and prepare the data as the first step.
8. Linear Regression from Scratch: Hello and welcome to this lesson, where we will be creating a linear regression model from scratch using TensorFlow. So let us start by importing TensorFlow as tf. I'm also importing the random library for generating random numbers, which we will do a little later. So I'll run both of these lines of code. We import TensorFlow as tf in Python, and also the random library. Now, let us define our independent and dependent variables. The independent variable will be x, and it is a 1D tensor with values 1, 2, 3, 4, 5, 6, and the data type will be float32. Similarly, we have our dependent variable y over here, and it is also a 1D tensor with values 100, 200, 300, 400, 500 and 600, and the data type is float32 as well. So here, the main goal of this linear regression project is to predict the value of the actual dependent variable based on only the input, which is the independent variable. So we should be aiming to get 100 as an output, or as a prediction, when we pass in the input as 1; similarly, 200 as a prediction when we pass in the input as 2, and so on. So I'll run both of these lines of code, and the assignment is done. Let us also define the weight and bias for our linear regression model. Since we're creating a linear regression model with the formula y equals wx plus b, the value of x is our input, and the values of w and b are our weight and bias. So we should be initializing the weight and bias randomly, since we want to train the model in order to find the actual values of the weight and bias to get the right predictions. So for that, we're using the random library, and we're calling the uniform method off of it, and we're passing in 0 comma 1. This means that we are trying to generate a value between 0 and 1. Then we are passing the randomly generated value to the constant method in order to create a tensor.
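The data preparation and the random weight initialization described above might look like this (a sketch; the random starting values will differ on every run):

```python
import random
import tensorflow as tf

# Independent and dependent variables, six data points each.
x = tf.constant([1, 2, 3, 4, 5, 6], dtype=tf.float32)
y = tf.constant([100, 200, 300, 400, 500, 600], dtype=tf.float32)

# Weight and bias start as random scalars in [0, 1), wrapped in tf.Variable
# so their values can be reassigned during training.
w = tf.Variable(tf.constant(random.uniform(0, 1), dtype=tf.float32))
b = tf.Variable(tf.constant(random.uniform(0, 1), dtype=tf.float32))

print(w.numpy(), b.numpy())
```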
So this is a rank 0 tensor, because the output of this uniform method will be a scalar value between 0 and 1. Then we're converting the rank 0 tensor into a variable. So using variables, we will be able to reassign the values of these tensors when training the model later on. Similarly, I'm doing the same for the bias, and we're denoting the bias as b. So I'll run these lines of code, and let us do the assignment. Great. So for now, let us look at the variables. So print x. So this is our 1D tensor, which is the independent variable. And then let's do print y. And this is our 1D tensor, which is the dependent variable. Similarly, if we print out w, we get our variable with the data value as 0.6444283. And similarly, if we print out b, we get a variable with the data value as 0.06231. So this is randomly generated, and there is no logic behind it except that we're generating it from the range of 0 to 1. Okay, let's move on. Now, let us create our model architecture. And since this is a linear regression model, our formula will be y equals wx plus b. So for this, we'll be creating a function called predict, which will compute the following formula. So here we have defined a function named predict, where we're passing in the arguments as the independent variable, the weight, and the bias. Then we're computing the prediction, and we're assigning it to the variable y hat. So let's look at this computation. Here, first we're multiplying w and x to get wx. Then we're just adding in the bias by using plus b. So we get this formula over here. And since this is a prediction, we're writing it as y hat. And finally, we're returning y hat when the function is called. So I'll run this line of code and let's initialize our function. Now, let's just make a prediction from our model, and note that we have yet to train the model, so these predictions will be quite off. So if I run this, we get the following tensor.
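The predict function described here is just the line equation applied element-wise; a sketch:

```python
import tensorflow as tf

def predict(x, w, b):
    # y_hat = w * x + b, computed element-wise over the input tensor.
    y_hat = w * x + b
    return y_hat

x = tf.constant([1.0, 2.0, 3.0])
print(predict(x, w=2.0, b=1.0).numpy())   # [3. 5. 7.]
```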
And by the way, we're just calling the predict method and passing in the arguments as the independent variable, the weight, and the bias. And when we get y hat returned from this, we're assigning it to the variable prediction, and we're printing it out. So we have the six predictions over here. Ideally, the predictions would have been 100, 200, 300, 400, 500 and 600, but we are getting some random numbers, because in this formula we have randomly initialized our weight and bias. So the next step is to train our model and find the right values for w and b. So let's do that. To train the model, we will be using the mean squared error as our loss function. And the mean squared error is calculated in the following manner. Even if there are some mathematical terms over here, it is quite simple to understand. We're just taking our predictions, which is y hat, and we're subtracting the actual values, which is y; this is termed the error. This error can come out as negative if the value of y hat is smaller than that of y, so that's why we're squaring it. And finally, we're finding its mean by dividing it by the total number of data points. So we're getting our mean squared error in this way. So I'm defining a function called loss to calculate our mean squared error. The loss function takes in two parameters: y hat, which is the prediction, and y, which is the actual values that we want to predict. Then we're calling the square method, and we're passing in the values as y hat minus y. So with this piece of code, we're calculating this part of the equation, y hat minus y, squared. Then we're assigning this result to a variable called squared error. And we're finding the mean of the squared error. So this is the complete equation, and we're returning it as MSE from the loss function. So I'll initialize this loss function by running all of these lines of code.
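A sketch of the mean squared error function built above:

```python
import tensorflow as tf

def loss(y_hat, y):
    # Square the errors so negative and positive errors don't cancel out,
    # then average over all data points.
    squared_error = tf.square(y_hat - y)
    mse = tf.reduce_mean(squared_error)
    return mse

y_hat = tf.constant([2.0, 4.0])
y = tf.constant([1.0, 2.0])
print(loss(y_hat, y).numpy())   # (1 + 4) / 2 = 2.5
```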
And now let us check the loss of the prediction that we had made over here. To check it, what we can do is simply pass in our prediction, which we made over here, and our actual dependent variable to the loss function, and we can print it out. So if I run this, we should get an MSE. And here we have the value of the MSE, or the mean squared error, which is 14 thousand — oh, my bad, it's 149,285.19. And in an ideal case, or when the model is actually trained, we should have an MSE of about 0, and that is when the model is perfectly trained or the model is 100% accurate. It's no wonder that our loss is such a big number, because we can see the predictions are pretty far off when they should have been 100, 200, 300 and so on. So let us train the model. For this, I'm creating a train function over here. The train function takes in four parameters: the independent variable, the dependent variable, the weight, and the bias. Then I'm defining the learning rate as 0.05, and we'll understand why I'm defining it over here as we go along. Then we're using the GradientTape method, and we're using it as t. So with tf.GradientTape() as t, we're getting the prediction as y hat equals predict, and we're passing in the independent variable, the weight, and the bias. So we're getting a prediction. Then we're computing the loss by calling the loss function and passing in our prediction and our dependent variable. So with this, first the prediction will come out from this function over here. Then, when we call the loss function, we get these lines of code to be executed. Then we have defined the loss as step loss, and this is because we'll be calling the train function over and over again to train our model. And that's why, when the train function is called for the first time, this will be just a step in our learning process. So I am calling it step loss.
And the reason we have used GradientTape over here is because we want to record the value of the loss when the train function is called multiple times. The GradientTape function does just that. And now we are performing gradient descent. So here we're calculating the gradient, and we're passing in the parameters as the step loss and the variables whose gradients we want to compute. Since we had used GradientTape and we had recorded the value of step loss, what we can do now, since the gradient tape is t, is call the gradient method off of it and pass in the recorded value. Then we can specify the variables that we want to get the gradient of. So with this, we are getting the gradient of w and the gradient of b, which I wrote over here as a tuple unpacking statement. Next, what we can do is update the values of our weight and bias by using the assign_sub method. The assign_sub method does not simply assign the value that is passed over here; it assigns it in a way where the tensor is subtracted by the value that is passed in. So when we use the assign_sub method, what we're actually getting is w minus learning rate into the gradient of w. And this is a step in gradient descent. Similarly, we're updating the bias by using the assign_sub method, and we're passing in learning rate into grad b. So this will update the value of b as b minus learning rate into the gradient of b. So I'll run these lines of code and let's initialize our train function. Great. Now let's train our model for 2,000 steps. And by training our model, I simply mean we're updating the values of w and b such that the step loss is minimized, or the step loss decreases to possibly 0. So we write for epoch in range(2000), which means that we're running it for 2,000 steps, and we're just calling each step an epoch. Then over here, we're calling the train function.
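Putting the pieces together, the train step with GradientTape and the 2,000-step loop might look like this (a sketch; the fixed starting values of 0.5 stand in for the random initialization):

```python
import tensorflow as tf

def predict(x, w, b):
    return w * x + b

def loss(y_hat, y):
    return tf.reduce_mean(tf.square(y_hat - y))

def train(x, y, w, b, learning_rate=0.05):
    # Record the forward pass so gradients can be computed afterwards.
    with tf.GradientTape() as t:
        y_hat = predict(x, w, b)
        step_loss = loss(y_hat, y)
    # Gradients of the loss with respect to the weight and the bias.
    grad_w, grad_b = t.gradient(step_loss, [w, b])
    # One gradient descent step: parameter -= learning_rate * gradient.
    w.assign_sub(learning_rate * grad_w)
    b.assign_sub(learning_rate * grad_b)
    return step_loss

x = tf.constant([1, 2, 3, 4, 5, 6], dtype=tf.float32)
y = tf.constant([100, 200, 300, 400, 500, 600], dtype=tf.float32)
w = tf.Variable(0.5)
b = tf.Variable(0.5)

for epoch in range(2000):
    train(x, y, w, b)

print(predict(x, w, b).numpy())   # close to [100. 200. 300. 400. 500. 600.]
```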
So if we run this line of code 2,000 times, the values will be updated 2,000 times. So I run this and let's wait. Okay, the training is done. Now, since the model is trained, we want the predictions to be as accurate as the actual dependent variable values, which are 100, 200, 300, 400, 500, 600. So here, once the training has finished, I'm calling the predict method, passing in the independent variable, the weight, and the bias, assigning the result to the variable prediction, and printing it out. So if I run both of these lines of code, we get our tensor, and we can see over here we get the value 100.00003, which when rounded off is 100. Similarly, this is 200, this is 300, 400, 500, and 600. So we have successfully trained our linear regression model. Now, let us also calculate the loss, so we're calling the loss function and passing in the prediction, which is this, and the dependent variable, which is y. So if I run this, we get the value 2.79e-09. The e-09 means the decimal point moves nine places to the left, so this is something like 0.00000000279, which when rounded off is actually just 0, since we're pretty accurate with our predictions over here. And this is how we can build a linear regression from scratch. And this is it for this lesson. I hope the concepts were not that hard to understand, and if you know the basic ideas about linear regression and gradient descent, you would have understood these concepts easily.
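Pulling the whole lesson together, here is the from-scratch pipeline in one self-contained sketch, including the 2,000-step loop (names are illustrative and match the lesson's):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = tf.constant([100.0, 200.0, 300.0, 400.0, 500.0, 600.0])

w = tf.Variable(1.0)
b = tf.Variable(1.0)

def predict(x, w, b):
    return w * x + b

def loss(y_hat, y):
    return tf.reduce_mean(tf.square(y_hat - y))

def train(x, y, w, b, learning_rate=0.05):
    with tf.GradientTape() as t:
        step_loss = loss(predict(x, w, b), y)
    grad_w, grad_b = t.gradient(step_loss, [w, b])
    w.assign_sub(learning_rate * grad_w)
    b.assign_sub(learning_rate * grad_b)

# Run the train step for 2,000 epochs, as in the lesson.
for epoch in range(2000):
    train(x, y, w, b)

print(predict(x, w, b).numpy())           # very close to 100, 200, ..., 600
print(loss(predict(x, w, b), y).numpy())  # very close to 0
```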
10. Logistic Regression using Sequential Model [The Mindset]: Welcome to this machine learning project where we will be performing logistic regression using the sequential model of TensorFlow. Our logistic regression model will be made using a sequential model consisting of a dense layer with one neuron. It will take the input as a 1D tensor, and it will output the probability as a 1D tensor as well. The neuron will take in an input and output the class probability by calculating the sigmoid of the linear combination of the weight and the input, plus the bias. The sigmoid function is a function that will give its output within the range 0 to 1, and its formula is 1 divided by 1 plus e to the power of minus (wx plus b), which is our linear combination plus bias. The workflow for this project is as follows. We will start by preparing the data, then we'll build the model architecture using TensorFlow and Keras, and finally we'll train the model using stochastic gradient descent as our optimizer and binary cross entropy as our loss function. Our dataset for this project is self-made, but you can use your own dataset if you like. We will have the independent variable, or the input to the model, defined as x, and its values will be 1, 2, 3, 4, 5, 6. Given these inputs, we'll be trying to predict the actual class given by y, and the classes are 0, 0, 0, 1, 1, 1 for each of these inputs. So our goal is to build and train a logistic regression model to predict the values of y given x. So if you're ready and excited for this project, let's start coding.
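The sigmoid described above can be written in a few lines of plain Python (the helper names are illustrative):

```python
import math

def sigmoid(z):
    """Squash any real number into the open range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def logistic_predict(x, w, b):
    """Class probability: sigmoid of the linear combination wx + b."""
    return sigmoid(w * x + b)

print(sigmoid(0.0))   # 0.5, the decision boundary
print(sigmoid(-5.0))  # close to 0
print(sigmoid(5.0))   # close to 1
```

Large negative inputs to the sigmoid push the probability toward 0, and large positive inputs push it toward 1, which is exactly what lets a single neuron separate the two classes.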
11. Logistic Regression using Sequential model: Hello and welcome to this lesson. In this lesson, we will be creating a logistic regression model using the sequential model from TensorFlow. To get started with this lesson, let us import TensorFlow as tf in Python. So I'll run this line of code and the import is successful. Now, let us define our independent and dependent variables for this classification project. For this, we're defining the independent variable as x, and x is a 1D tensor with the values 1, 2, 3, 4, 5, 6 with a datatype of float32. Then the dependent variable will be denoted by y, and it is also a 1D tensor with the values 0, 0, 0, 1, 1, 1, all of float32 datatype as well. The objective of this project is to classify the values of the input into their respective classes when we only pass the input to the model. So when the model is perfectly trained, or is 100% accurate, the model should classify the input 1 with class 0; similarly the input 2 with class 0, and 3 with 0. But it should classify the input 4 with 1, 5 with 1, and 6 with 1 as its class. So this is our objective, and let us initialize both of these variables with these values. I'll run these lines of code and the assignment has been done. Now let us create our model architecture by using the sequential model from the Keras module of TensorFlow. Our logistic regression model will compute the sigmoid of the linear combination of the weight and the input, plus the bias. For that, we're using the sequential model from the Keras module of the TensorFlow library, and as a parameter, we're passing in the list of layers that we want. Here we're passing in our first layer as a dense layer, and in this case we only have a single layer. The dense layer can be called from the layers module of Keras in the TensorFlow library, and our dense layer will consist of one neuron, specified by units equals 1.
And the shape of the input that we'll be passing into the dense layer is a 1D tensor, since the independent variable x is a 1D tensor, so we have written 1 over here. And since we want the bias, we're using the property use_bias and setting it to True. And finally, since we want the activation function to be sigmoid, we're passing the parameter activation as sigmoid. With this, we have created our sequential model. We're also assigning this sequential model to the variable logistic_model. So if we run this line of code, the assignment is done and our model architecture is created. Now let us make some predictions using the model, and mind that we have yet to train the model, so the output will not be close to the actual dependent variable values. For that, let's just print our predictions by calling the predict method and passing in the independent variable as a parameter. Okay, and here you can see that we have a bunch of numbers, not just 0s and 1s. This is because when we use the activation function as sigmoid, we get the output as a probability in the range between 0 and 1. So anything less than 0.5 will be our class 0, and anything equal to or above 0.5 in the prediction will be our class 1. In this case, we can see that all of our values are below 0.5, so our logistic regression model is classifying all of the inputs as class 0. Finally, let us train the model and improve our predictions so that our model will correctly classify the independent variables and assign them the right classes of the dependent variable in the predictions. To train our model, we'll be using the loss as binary cross entropy and the optimizer as stochastic gradient descent. We can configure the model by specifying these settings using the compile method. So we're calling the compile method off of the sequential model, which is stored as logistic_model.
And we're passing in the optimizer as stochastic gradient descent; we just call it from the optimizers module of the TensorFlow library, and we're setting the learning rate as 0.05. Then we're specifying the loss as binary cross entropy. So if I run this line of code, our model will be configured with stochastic gradient descent as the optimizer and binary cross entropy as our loss function. Now, we can finally train the model by calling the fit method. When calling the fit method, we have to pass in the input variable, the dependent variable, which holds the values that we want to predict, and the number of epochs, which over here is 2,000. So we're training the model for a total of 2,000 epochs. If I run this line of code, we should see the training happening. If we look at the loss at the start of training, it is 2.5. So I'll scroll down over here, and when it reaches the total number of steps, 2,000, the loss is 0.1531, whereas at first our loss was 2.5. If the loss were exactly 0, that would mean our model is perfectly trained and is 100% accurate, but in this case, we do still have some loss over here. But let us see our predictions after training. For that, I'm just calling the predict method and passing in the independent variable, or the input variable, as a parameter, and we're printing out our predictions. So if I run this, okay, we get these values. As I told you before, the sigmoid function gives us a probability range between 0 and 1 in the output. So for the inputs, let's just have a look at our independent and dependent variables. We have the inputs 1, 2, 3 with class 0 and 4, 5, 6 with class 1. That means when we pass in an input of 1, our prediction probability should be less than 0.5, and similarly, for all of 1, 2, 3, it should be less than 0.5, while for 4, 5, 6 it should be equal to or greater than 0.5.
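Putting the last few steps together, a self-contained sketch of the build-compile-fit flow might look like this. I use an explicit Input layer, which is equivalent to the input_shape argument described above, shape the inputs as a column, and trim the run to 1,000 epochs to keep the sketch quick; the exact loss values will differ from the lesson's because the weights are randomly initialized:

```python
import tensorflow as tf

# Inputs 1..6 with classes 0, 0, 0, 1, 1, 1.
x = tf.constant([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = tf.constant([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# One dense layer with one neuron and a sigmoid
# activation: logistic regression as a sequential model.
logistic_model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1, use_bias=True, activation="sigmoid"),
])

# Stochastic gradient descent optimizer, binary cross entropy loss.
logistic_model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.05),
    loss="binary_crossentropy",
)

history = logistic_model.fit(x, y, epochs=1000, verbose=0)
print(history.history["loss"][-1])           # well below the starting loss
print(logistic_model.predict(x, verbose=0))  # probabilities between 0 and 1
```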
So let's have a look. Here, for the input 1, our probability is 0.01, which means that it is less than 0.5, so our class is 0. And over here it is also less than 0.5, so our class is 0. Over here it is less than 0.5 again, so here our class is 0 as well. So we have correctly classified 1, 2, 3 as class 0. Then we're classifying the remaining 4, 5, 6 as class 1, since those values are above 0.5. So we have made a logistic regression model using the sequential model. And although we're correctly classifying the values, we can see that the probabilities for the inputs 3 and 4 are very close to the number 0.5, and this means that the model is not 100% certain that a given input falls in a given class. To tackle this, in the next lesson we'll be building a deep neural network classifier, which will consist of more than one dense layer. So I'll see you in the next lesson.
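The 0.5 threshold rule applied by eye above can be written down as a tiny helper (the name is illustrative):

```python
def to_class(probability, threshold=0.5):
    """Map a sigmoid output to a hard class label: 0 or 1."""
    return 1 if probability >= threshold else 0

# Probabilities like the ones printed above map to the expected classes.
print([to_class(p) for p in [0.01, 0.2, 0.4, 0.6, 0.9, 0.97]])
# [0, 0, 0, 1, 1, 1]
```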
12. Deep Neural Network Classifier: Hello and welcome to this lesson. In this lesson, we'll be creating a deep neural network classifier using TensorFlow. To get started, let us import TensorFlow as tf in Python by running this line of code. Next, we have our independent and dependent variables, as we had in the previous lesson. The independent variable is a 1D tensor with values 1, 2, 3, 4, 5, 6 and the datatype float32. Similarly, our dependent variable is a 1D tensor with values 0, 0, 0, 1, 1, 1, and the datatype is float32 as well. I'll run both of these lines of code to make the assignment happen. Now, let us create the model architecture for the deep neural network classifier by using the sequential model from Keras. Here I'm calling the sequential model from the Keras module of the TensorFlow library, and as a parameter, I'm passing in a list of layers. We are using dense layers, and we have a total of three layers over here. The first layer, into which the input will be passed, has a total of 20 neurons, and the input shape is 1, which means that our input will be a 1D tensor, which is the independent variable. The next layer after that also has 20 neurons and is also a dense layer. The final layer is a dense layer as well, consisting of a single neuron, where the bias is enabled by setting the use_bias parameter to True, and the activation function is sigmoid. So the output from the neural network will be a prediction probability in the range 0 to 1. I'll just initialize this model and assign it to the variable classifier_model. So I'll run this and the assignment is done. Now, let us make some predictions using the model. For that, I'm calling the predict method off of the model, and I'm passing in our independent variable as the argument, the input to the model. So I'll run this, and we have our list of prediction probabilities.
And as you can see, all of these probabilities come out as random, since we have yet to train the model. Now, let us train the model by using the loss as binary cross entropy and the optimizer as stochastic gradient descent. We'll configure the model by specifying these settings using the compile method off of the sequential model. Here we have set the optimizer as stochastic gradient descent with learning rate 0.05 and the loss as binary cross entropy. I'll run this line of code and the model has been configured. Finally, we can train the model by calling the fit method, and in the fit method, we're passing in the input to the model, the values that we want to have as the output, and the number of epochs. So here, when I run this, we will train the model for 2,000 epochs. Let us look at the loss of the first epoch: it is 1.14. And if you remember, in the previous lesson the starting loss was about 2.5. If we scroll down to epoch 2,000 and look at the final loss of our model, it is 0.0028, which is very small. So our model has been trained. Now let us make a prediction after training. For this, we're calling the predict method off of our sequential model, passing in the independent variable as the input, and finally printing it out. So let's see. As we can see over here, our model predictions are as follows. To clarify what is happening, the e-11 or e-07 over here indicates that the decimal point moves 11 or 7 places to the left, so a value like 4.3e-11 means zeros until the eleventh decimal place, something like 0.000000000043. So the prediction over here means that the first input is of class 0, since this prediction is way below the threshold of 0.5. Similarly, all of the first three predictions are way below the threshold of 0.5. Then we have the predictions for the last three values, which are 4, 5, 6, and we can see that they are around 0.9, since they are e-01, so 0.9, 0.9, and close to 1.
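Here is a self-contained sketch of the deep classifier described in this lesson. As in the lesson, the two hidden layers keep Keras's default linear activation and only the output neuron uses sigmoid (relu hidden activations would be a common alternative); an explicit Input layer stands in for the input_shape argument, and the epoch count is cut to 500 to keep the sketch quick, so the exact loss values will differ from the lesson's:

```python
import tensorflow as tf

x = tf.constant([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = tf.constant([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# Three dense layers: 20 neurons, 20 neurons, then a single
# sigmoid output neuron that emits the class probability.
classifier_model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=20),
    tf.keras.layers.Dense(units=20),
    tf.keras.layers.Dense(units=1, use_bias=True, activation="sigmoid"),
])

classifier_model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.05),
    loss="binary_crossentropy",
)

history = classifier_model.fit(x, y, epochs=500, verbose=0)
print(history.history["loss"][-1])  # far below the starting loss
```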
This means the model is predicting that the class of the last three input variables, 4, 5, 6, is 1. And in this way, we have created a very highly accurate classifier by building a deep neural network. So this is it for this lesson on the deep neural network classifier.