TensorFlow JS - Build Machine Learning Projects using Javascript | Pragyan Subedi | Skillshare


TensorFlow JS - Build Machine Learning Projects using Javascript

Pragyan Subedi, Entrepreneur and Data Scientist

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

25 Lessons (1h 17m)
  • 1. Welcome to the course! (1:22)
  • 2. Setting up TensorFlow JS (3:32)
  • 3. Introduction to Scalar and Tensors (1:53)
  • 4. Scalar and Tensors in TensorFlow JS (3:44)
  • 5. Different ways to create Tensors (5:23)
  • 6. Performing Tensor operations in TensorFlow JS (4:02)
  • 7. Linear Regression from Scratch using TensorFlow JS (2:03)
  • 8. Preparing the data (2:14)
  • 9. Building the Linear Regression model architecture (3:24)
  • 10. Training the Linear Regression model (6:29)
  • 11. Linear Regression using Sequential Model (1:37)
  • 12. Preparing the data (2:14)
  • 13. Building the Linear Regression model architecture (3:24)
  • 14. Training the Linear Regression model (4:26)
  • 15. Viewing the change in loss (3:25)
  • 16. Using multiple features as input (3:43)
  • 17. Logistic Regression using Sequential Model (1:57)
  • 18. Preparing the data (2:14)
  • 19. Building the Logistic Regression model architecture (2:35)
  • 20. Training the Logistic Regression model (3:21)
  • 21. Creating a Deep Neural Network Classifier (3:56)
  • 22. Image Classification using MobileNet (1:26)
  • 23. Getting the image (3:41)
  • 24. Loading the model (1:14)
  • 25. Predicting the dog breed (3:50)


64 Students · -- Projects

About This Class

Learn how to build Machine Learning projects using Javascript in this TensorFlow JS Course created by The Click Reader.

In this course, you will be learning about scalars as well as tensors and how to create them using TensorFlow.js. You will also be learning how to perform various kinds of tensor operations for manipulating and changing tensor values.

You will be performing a total of three Machine Learning projects while learning through this TensorFlow JS full course:

1. Linear Regression from Scratch

You will be learning how to create a Linear Regression model from scratch using TensorFlow.js. You will be preparing the data, building the model architecture, and training the model using a custom-made loss function and optimizer.

2. Logistic Regression using a Sequential Model

You will be learning how to create a Logistic Regression model using a Sequential Model with TensorFlow.js. You will be preparing the data, building the model architecture, training the model, and building a deep neural network classifier.

3. Image Classification using MobileNet

You will be learning how to use a pre-trained model called MobileNet to build a dog breed classifier with TensorFlow.js.

By the end of this course, you will have learned how to build your own Machine Learning projects from scratch using TensorFlow JS.

Meet Your Teacher


Pragyan Subedi

Entrepreneur and Data Scientist


Hi, I'm Pragyan! I'm a Data Scientist, a Kaggle Expert as well as a Data Science Consultant for companies around the world. I also run my own data science company called Kharpann Enterprises.

I am listed on Toptal as one of the top 3% of freelancing data scientists on the site, and I have over 5 years of experience working in Time-Series Analytics/Forecasting, Machine Learning (including Deep Learning), and Python programming.

I hope to impart a lot of knowledge to students on Skillshare.




Transcripts

1. Welcome to the course!: Hi, welcome to this course on TensorFlow.js. I'm Pragyan, a data scientist at The Click Reader, and I'll be your instructor for this course. TensorFlow.js is a machine learning library that allows us to develop and use machine learning models directly in the web browser. It is also one of the most popular machine learning libraries for ensuring data privacy, since both model training and inference can be done on the client side; this means that no data has to be sent back to a server in order to make model predictions. In this course, you will gain an in-depth understanding of how to build, train, and deploy machine learning models with TensorFlow.js. This course is aimed at beginners, intermediates, and experts alike, so feel free to join us. You will be learning how to create tensors and perform tensor operations. You will also be learning how to create regression models as well as classification models using deep neural networks. And finally, you'll also be learning how to use pre-trained models to perform tasks such as classification and much more. So if you're excited about learning this JavaScript library made specifically for machine learning, then let's hop into the course and start learning. See you there.

2. Setting up TensorFlow JS: Hello and welcome to this lesson on setting up TensorFlow.js. In this lesson, we will be discussing the two files that we'll be using throughout this course. The first file is the index.html file, our web page, which will host all of our HTML code for this course; we will also be loading the TensorFlow.js library through this HTML file. Then we have our script.js file, which will host all of our JavaScript code for this course. If I open up both of these files using a text editor, which is Sublime Text over here, we can see their contents. You can use any other text editor that you want, but I prefer Sublime Text because it's quite easy to read. In the index.html file, we have a simple HTML boilerplate: the regular HTML tags, a head tag, and a body tag. In the head tag, we have a title tag defining the title of the web page, which is "TensorFlow.js Tutorial". Then, using a script tag, we load the latest version of TensorFlow.js from a content delivery network URL as the source. If I open up this URL in my web browser, we can see that it is essentially a minified JavaScript file; this is the compressed version of the TensorFlow.js library, and we're just importing this file by using the URL as the source in the script tag. Then we import the main script file, which is script.js, where we'll be writing all of our JavaScript code. Next, in the body tag, we have a div containing an H1 heading that says "Hello". So if I open up the index.html document, not with a text editor but by just double-clicking it, we can see "Hello" as our heading; this is our HTML document. Then let's talk about the script.js file. The script.js file, as I said before, contains our JavaScript code. For this example, I've written a simple line of JavaScript which logs the string "Hello TensorFlow JS" onto our console. In layman's terms, you can also understand it as printing "Hello TensorFlow JS" onto the console of our web page. If this line of code works, we can be certain that our import of the TensorFlow.js library has been made successfully, and that we've also successfully imported the script.js file into our web page. So if I open up the web page again and press Control+Shift+J, I can see the console, and here we are outputting "Hello TensorFlow JS". So we've successfully imported the TensorFlow.js library. Great.
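A minimal sketch of the two files this lesson describes. The CDN URL is the commonly published jsDelivr address for TensorFlow.js and the heading text is illustrative; neither is copied from the course resources.

<!-- index.html -->
<!DOCTYPE html>
<html>
  <head>
    <title>TensorFlow.js Tutorial</title>
    <!-- load TensorFlow.js from a CDN, then our own code -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest/dist/tf.min.js"></script>
    <script src="script.js"></script>
  </head>
  <body>
    <div><h1>Hello</h1></div>
  </body>
</html>

// script.js
console.log('Hello TensorFlow JS');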
3. Introduction to Scalar and Tensors: Hello and welcome to this section on scalars and tensors. A scalar is a real number that can be expressed using a quantitative value, and it is mostly used along with some unit of measurement. For example, ten kilograms and five feet have scalar values of 10 and 5 respectively. Tensors, on the other hand, are multidimensional arrays with a uniform datatype. The name TensorFlow.js is derived from the notion that data is stored in, and flows through, a neural network using tensors. The rank of a tensor simply means the number of dimensions of the array. A scalar is a rank-0 tensor, since it has magnitude but no dimensionality; here, 10 and 5 are both rank-0 tensors. A vector, on the other hand, is a rank-1 tensor: it has a single dimension and is also called a 1D tensor. Here we have an array with 1, 2, 3, 4 as elements, and this forms a vector, which is a 1D tensor. Similarly, a matrix is a rank-2 tensor, since it has two dimensions; it is also called a 2D tensor. Here we have a matrix of values 1, 2, 3, 4, making up a 2D tensor with two rows and two columns of data. In this section of the course, we'll be learning how to create different kinds of tensors. We will also be learning how to perform various operations using tensors. So let us start by learning how to create scalars and tensors using TensorFlow.js in the next lesson. See you there.

4. Scalar and Tensors in TensorFlow JS: Hello and welcome to this lesson on how to create scalars and tensors in TensorFlow.js. On the left side of the screen, I have my text editor open with the script.js file, and on the right side of the screen, I have the web page, where I've opened the console by pressing Control+Shift+J. In this lesson we'll go through different methods to create scalars as well as tensors. Creating a scalar is quite simple: we just call the scalar function of the tf library, which is the TensorFlow library, and pass in a numerical value. By doing this much, we are creating a scalar, and by calling the print method of this scalar, or of this rank-0 tensor, we print it onto the console. Over here we can see the result of tf.scalar(3).print(), and notice that we're getting a tensor, because a scalar is again a rank-0 tensor. Now, to create a one-dimensional tensor, or 1D tensor, we can use the tensor1d method of the tf library and pass in an array of values or elements; this is essentially a vector, and we print the 1D tensor by calling the print method. Over here we have the output as 1, 2, 3, 4, so this is a 1D tensor. Similarly, for a 2D tensor, we pass in a matrix of values: we have the outer array, then two arrays in the inner part of the outer array, and we use the tensor2d function of the TensorFlow library to create the 2D tensor. By printing it, we can see that over here we have the matrix 1, 2, 3, 4. Great. Now, to create a 3D tensor, we can use the tensor3d function of the TensorFlow library and pass in an array of nested arrays: we have the outer array, then an array inside it, and then another array inside of that. So we're going deeper into the dimensionality; that is, we're creating more arrays inside each array, and this gives us the 3D tensor which can be seen over here. Great. One method that is common for all of these tensors is the plain tensor method: with it, we are able to create a 1D, 2D, 3D, or any kind of N-dimensional tensor. But in practice, it is the general convention to specify which kind of tensor we are creating, so that we're able to tell which tensor it actually is in the code; it would be quite troublesome if we just wrote tensor everywhere. Over here, since we have 1, 2, 3, 4 as our vector and we are passing it through the tensor method, this is a one-dimensional tensor, and we can see the output over here. Similarly, if we pass in, say, this 2D tensor, and if I save the file and refresh the web page, we have our 2D tensor over here. So this is all for how to create scalars and tensors, and in the next lesson we'll find out different ways to create tensors using TensorFlow.js. See you there.
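A short sketch of the creation calls this lesson walks through (the printed values match the examples above; the exact variable layout is not from the course files):

// Creating scalars and tensors of different ranks
tf.scalar(3).print();                          // rank-0 tensor (scalar)
tf.tensor1d([1, 2, 3, 4]).print();             // rank-1 tensor (vector)
tf.tensor2d([[1, 2], [3, 4]]).print();         // rank-2 tensor (matrix)
tf.tensor3d([[[1], [2]], [[3], [4]]]).print(); // rank-3 tensor

// The generic tensor() call infers the rank from the nesting of the array
tf.tensor([1, 2, 3, 4]).print();               // behaves like a 1D tensor
tf.tensor([[1, 2], [3, 4]]).print();           // behaves like a 2D tensor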
5. Different ways to create Tensors: Hello and welcome to this lesson on how to create tensors using the different helper methods of TensorFlow.js. TensorFlow.js provides handy functions which we can use to create tensors without ever specifying the values one by one. In the last lesson, we learned how to specify the values using tensor1d, tensor2d, tensor3d, and so on; in this lesson, we'll learn how to create tensors by just specifying the number of rows, the number of columns, or the kind of values that we want in the tensor. The first method we are going to look at is the ones method. The ones method takes in the dimensions of the tensor that we want and creates a tensor with all elements as one in that dimensionality. So if we pass two by two as an array to the ones method, we get a two-by-two tensor with all of the values as one. Similarly, the zeros method does the same: if we pass two by two into the zeros method and print out the tensor, we get a tensor with all values filled with zero. We can also use the eye method to create an identity matrix: we specify the number of rows and columns of the identity matrix, which is three by three, and we print it out over here, so we get an identity matrix as follows. Then we can use the tf.fill method to fill in the values of a tensor by specifying the value as well as the dimensionality of the tensor that we want: we want a two-by-two tensor filled with the value four, and if we print it out, we get the tensor as follows. Great. Next we have the linspace method. The linspace function takes in the starting value, the stopping value, and the number of values that we want to generate, and it creates a 1D tensor based on that. If I print this tensor out, we can see values from 0 to 9, which means there are ten different values over here, as we had specified: the range 0 to 9 with ten values. If I change this to, say, five, and refresh the web page, we can see that we have five different values ranging from 0 to 9, but they're now spaced out at intervals of 2.25, because we want five values covering the range 0 to 9. Then we have the range method, which takes in the starting and the stopping value, as well as a step that says how much space there should be between each number within the range. Over here we can see a tensor of 0, 2, 4, 6, 8, which means we are starting from 0 and going in steps of two up to nine; the stopping value is exclusive, so the sequence ends at 8. This is the range method for creating a 1D tensor. Let's also try changing the step to, say, three, and see what happens: now we have 0, 3, 6. Notice that we don't get nine over here, because the stopping value is not included in the tensor itself. Great. Now let's create a tensor and change its values using the assign method. Over here we're creating a tensor1d, a one-dimensional tensor with the values 1, 2, 3. Then we're passing the tensor through the variable method, which is tf.variable, and we're assigning this variable to x. Then we're assigning new values to the tensor by passing another tensor1d, with the values 4, 5, 6, to the assign method; we're changing the values of this tensor to these values, and we're printing it out, so we have the output as 4, 5, 6. To understand it better, I'll also print out the value of x before assigning the new tensor to it, with x.print(). I'll save this and reload the web page. Here we have 1, 2, 3, which came from over here, since we assigned this variable as this tensor, and then we changed the assignment using the assign method, which is why we get 4, 5, 6 as the final output. Great. I hope you have now learned how to create tensors in a much easier way. You can try out all of these different methods and see for yourself what kind of output you get. I'll see you in the next lesson.
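A sketch of the helper methods described in this lesson, with the outputs worked out above shown as comments:

// Tensors created from a shape (or a range) instead of explicit values
tf.ones([2, 2]).print();          // 2x2 tensor of ones
tf.zeros([2, 2]).print();         // 2x2 tensor of zeros
tf.eye(3).print();                // 3x3 identity matrix
tf.fill([2, 2], 4).print();       // 2x2 tensor filled with 4
tf.linspace(0, 9, 10).print();    // 10 evenly spaced values from 0 to 9
tf.linspace(0, 9, 5).print();     // 5 values, spaced 2.25 apart
tf.range(0, 9, 2).print();        // [0, 2, 4, 6, 8] - stop value excluded
tf.range(0, 9, 3).print();        // [0, 3, 6]

// Mutable tensors via tf.variable() and assign()
const x = tf.variable(tf.tensor1d([1, 2, 3]));
x.print();                        // [1, 2, 3]
x.assign(tf.tensor1d([4, 5, 6]));
x.print();                        // [4, 5, 6]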
6. Performing Tensor operations in TensorFlow JS: Hello and welcome to this lesson, where we will be learning how to perform different tensor operations in TensorFlow.js. For the purpose of this lesson, I've created two tensors and assigned them to t1 and t2. The first tensor is a 1D tensor with the values 10, 11, 12, 13, and the second tensor is also a 1D tensor, with the values 1, 2, 3, 4. To add both of these tensors, we can simply call the add method off of either one of them and pass in the other tensor as an argument. Here we are essentially doing t1 plus t2, and if we look at the result, we're getting the element-wise sum: 10 plus 1 is 11, 11 plus 2 is 13, 12 plus 3 is 15, and 13 plus 4 is 17. So we're doing element-wise addition. We can also switch this around and do t2 plus t1, that is t2.add(t1), and we can see the sum is the same. Now, to subtract two tensors, we can use the sub method of either one of them. Over here we're doing t1 minus t2, so the result is 9, 9, 9, 9, because 10 minus 1 is 9, 11 minus 2 is 9, 12 minus 3 is 9, and 13 minus 4 is 9. If we switch this around to t2 minus t1 and check it as well, this gives out minus 9, minus 9, minus 9, minus 9, because we're subtracting the t1 tensor from the t2 tensor in an element-wise manner. Great. The next method is the multiply method, which is mul. We're multiplying t1 with t2, and the output is as follows; again, this is element-wise multiplication, so 10 times 1 is 10, 11 times 2 is 22, 12 times 3 is 36, and 13 times 4 is 52. Similarly, for division we have the div method, and we're essentially doing t1 divided by t2: 10 divided by 1 gives 10, 11 divided by 2 gives 5.5, 12 divided by 3 gives 4, and 13 divided by 4 gives 3.25. Next, we can square a tensor by using the square method of the tensor itself. We're squaring the t1 tensor and printing the output over here, which is 100, 121, 144, 169. Similarly, we can square the t2 tensor by just replacing t1 with t2 and refreshing the web page, so we have 1, 4, 9, 16. We can also find the mean of a tensor using the mean method, and it's called in a similar way to the square method. The mean of the t1 tensor, whose values are 10, 11, 12, 13, should be 11.5, which is what we have over here. Finally, let's learn how to chain various operations. The first tensor, t1, is added with the mean of the t1 tensor, which is 11.5, and then the whole result of this operation is subtracted by the t1 tensor. We should get the result 11.5, 11.5, 11.5, 11.5, because we're adding the mean to every element but then subtracting the entire tensor itself, so only the mean is left in all of the elements. I hope this lesson was pretty easy to understand, and in the next lesson we'll start off by learning how to create a machine learning project. So see you there.
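A sketch of the operations described in this lesson, with the results worked out above shown as comments:

// Element-wise tensor operations
const t1 = tf.tensor1d([10, 11, 12, 13]);
const t2 = tf.tensor1d([1, 2, 3, 4]);

t1.add(t2).print();                // [11, 13, 15, 17]
t1.sub(t2).print();                // [9, 9, 9, 9]
t2.sub(t1).print();                // [-9, -9, -9, -9]
t1.mul(t2).print();                // [10, 22, 36, 52]
t1.div(t2).print();                // [10, 5.5, 4, 3.25]
t1.square().print();               // [100, 121, 144, 169]
t1.mean().print();                 // 11.5

// Chaining operations: add the mean, then subtract the original tensor
t1.add(t1.mean()).sub(t1).print(); // [11.5, 11.5, 11.5, 11.5]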
7. Linear Regression from Scratch using TensorFlow JS: Welcome to this machine learning project on building a linear regression model from scratch using TensorFlow.js. Keep in mind that you will need basic knowledge of linear regression to effectively grasp all the concepts explained in this project. Our linear regression model will take a single feature as input and output a prediction. The mathematical formula is y_hat = w * x + b, where y_hat is our prediction, w is our weight, x is our input, and b is our bias. The workflow for this project is as follows: we will first prepare the data and then randomly initialize the weight and bias. Next, we'll build the model architecture and iteratively update the weights to train the model using gradient descent. We'll finally look at the predictions of the model and evaluate whether we have made an accurate linear regression model or not. Our dataset for this project is self-made, but you can use your own dataset if you like. We will have two different variables, x and y, consisting of six data points each. The x variable is our input, and its data points are 1, 2, 3, 4, 5, 6. The y variable is the dependent variable, or the target that we want to predict, and its data points are 100, 200, 300, 400, 500, 600. Our goal is to build and train a linear regression model to predict the values of y given x, and our predictions will be denoted by y_hat. So let's start coding and prepare the data as the first step.

8. Preparing the data: If you've downloaded the resources with this lesson, you'll find two files: the index.html file and the script.js file. If I open up both of these files using a text editor, which is Sublime Text, we can see their contents. The index.html file contains an HTML boilerplate where we're importing the TensorFlow.js library using the content delivery network as the source, and we're also importing the main script file, where we will be writing all of our JavaScript code for this project. If we have a look at the script.js file, we're preparing the data for this project. Here we've defined x as the independent variable with the values 1, 2, 3, 4, 5, 6, and this is a 1D tensor. We have also defined the dependent variable y as a 1D tensor, using the tensor1d method and passing in the values 100, 200, 300, 400, 500, 600. We've also randomly initialized the weight w and the bias b, which the model will learn during training. Finally, let's look at both of these tensors by calling the print method on them and viewing them in the console. If I open up the index.html file and open the console by pressing Control+Shift+J, we can see both of these tensors being printed out. So we have successfully prepared the data for this linear regression project. I'll see you in the next lesson, where we will be building the model architecture. See you there.
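A sketch of the data preparation described here. The transcript only says the weight and bias are "randomly initialized", so the Math.random() initialization and the names w and b are assumptions:

// Data and trainable parameters for the from-scratch linear regression
const x = tf.tensor1d([1, 2, 3, 4, 5, 6]);
const y = tf.tensor1d([100, 200, 300, 400, 500, 600]);

// Randomly initialized weight and bias, wrapped as variables so an
// optimizer can update them later
const w = tf.variable(tf.scalar(Math.random()));
const b = tf.variable(tf.scalar(Math.random()));

x.print();
y.print();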
9. Building the Linear Regression model architecture: In this lesson, we'll be building off our previous lesson, where we created the independent variable x and the dependent variable y, and randomly initialized the weight w and the bias b. In this lesson, we'll be building the model architecture as y_hat = w * x + b. For this, I'm creating a JavaScript function called predict; you can use any other name that you want, but I'm going with predict. We pass three arguments to the predict function: the input to the model, the weight, and the bias, and we compute y_hat by performing a series of tensor operations. Let's look at what is happening over here: we have the weight w, then we multiply it with x to get w * x, then we add the bias, which gives w * x + b, and finally we get y_hat, which is our prediction. Since this is our prediction, we want to return it from the function, so we write return y_hat. This is the function for our model architecture, and it computes y_hat = w * x + b. Let's get some predictions; remember that we are not training the model right now, and this is just a simple check to see that the function is actually working. For this, we call the predict function that we have just made, passing in the input x, which is this 1D tensor, the weight, which has been randomly initialized, and the bias, which has been randomly initialized as well. Then we call the print method on the result. Let's look at what this gives in the console. If I open up index.html and open the console, we have our predictions, and they are nowhere near the actual values of y, which are 100, 200, 300, and so on. This is because the model has not been trained: we do not yet know the right values for w and b to get the right predictions. By the way, one last thing: in this function we are calculating y_hat from x, w, and b, and x is always the same input, so only the values of w and b change. This is why, whenever I reload the web page, the predictions come out different: the randomly initialized values of w and b are changing. Great. I hope you understood what is happening, and in the next lesson we'll learn how to train our model. See you there.
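A sketch of the predict function described here, assuming the x, w, and b tensors from the previous sketch:

// Model architecture: y_hat = w * x + b
function predict(x, w, b) {
  const yHat = w.mul(x).add(b);
  return yHat;
}

// Untrained prediction - the values differ on every reload because
// w and b are randomly initialized
predict(x, w, b).print();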
10. Training the Linear Regression model: Hello and welcome to the final lesson of this project. Till now, we have prepared our dependent and independent variables, we've randomly initialized our weight and bias as w and b, and we've built our model architecture using the predict function. In this lesson, we'll be training our model and making accurate predictions with it. For this, we will need a loss function as well as an optimizer. Let's start with the loss function. For the loss function, we'll be using the mean squared error. Here we have a JavaScript function called loss, which takes two parameters: y_hat, the predicted output, and the actual output, the dependent variable y. We compute the error by subtracting y from y_hat; since this can be negative, we square it, and finally we take the mean, which makes it the mean squared error. Once we've done all that, we return the error when the function is called. Then we have our train function for model training, which takes four arguments: the input x, the dependent variable y, the weight w, and the bias b. Over here we define an optimizer as stochastic gradient descent: we call the stochastic gradient descent function and pass in the learning rate as a parameter, which is 0.05. The stochastic gradient descent function can be found in the train module of the TensorFlow library. Next, we call the minimize function of the optimizer, and the parameter for the minimize function is itself a function. In this inner function, we first predict the value of y_hat by passing the independent variable x, the weight w, and the bias b to the predict function. Then we calculate the loss by passing in y_hat, the predicted output, and the dependent variable y, the actual output; when we call this function, the error gets calculated over here: first y_hat minus y, then the square, and finally the mean. So we get the loss when this happens. Finally, we return the step loss, which is the loss for a single step of optimization, and the optimizer tries to minimize this returned value. Once we have our train function, we can iteratively train our model. Here, we're running a for loop and calling the train function for 2,000 steps. In each step, all of these lines of code run: the stochastic gradient descent optimizer is set up with learning rate 0.05, and then the SGD optimizer minimizes the step loss. In this way, by the time we reach 2,000 iterations, or 2,000 steps, the loss will be very much minimized. Once the training is completed, that is, once the total number of steps has been reached, we can call the predict function again, passing in the input variable, the weight, and the bias, and we can see the predictions by calling the print method on the result. So let's look at the index.html file. If I open it, it will take some time to load, because the model is training for 2,000 steps, and once "Hello" is rendered over here, the model is trained. Okay, now the model is trained, and we can check the console by pressing Control+Shift+J. We have the predictions from the model after training, and here we can see 100, then 199.999, which is about 200, then 299.99, which is about 300, and similarly 400, 500, and 600. So we have successfully trained our model. Let's see what would happen if we change the iterations to, say, just ten steps: if I save this and reload, we can see the predictions are close, but not that accurate. Similarly, if I change this to maybe 100, or even 1,000, and save and reload the web page, it takes some time to train, and now we get predictions that are close to the actual y values; even for just 1,000 iterations, our linear regression model has been trained. Great. And this is it for the project: building a linear regression model from scratch using TensorFlow.js.
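A sketch of the loss function and training loop described here, assuming the predict function and the x, y, w, b tensors from the earlier sketches. Re-creating the optimizer inside the train function mirrors the transcript's description, although creating it once outside the loop would also work:

// Mean squared error loss
function loss(yHat, y) {
  return yHat.sub(y).square().mean();
}

// One optimization step with stochastic gradient descent
function train(x, y, w, b) {
  const optimizer = tf.train.sgd(0.05);
  optimizer.minimize(() => {
    const yHat = predict(x, w, b);
    const stepLoss = loss(yHat, y);
    return stepLoss;   // the optimizer minimizes this returned scalar
  });
}

// Iteratively train for 2,000 steps, then print the predictions
for (let i = 0; i < 2000; i++) {
  train(x, y, w, b);
}
predict(x, w, b).print();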
11. Linear Regression using Sequential Model: Welcome to this machine learning project, where we will be performing linear regression using the sequential model of the TensorFlow.js library. Our linear regression model will be made using a sequential model consisting of a dense layer with one neuron. The neuron will take its input as a 1D tensor, and it will provide the output as a 1D tensor as well. The neuron takes in an input and outputs a prediction by calculating y_hat = w * x + b, which is the linear combination of the weight and the input, plus the bias. The workflow for this project is as follows: we'll start by preparing the data, then we'll build the model architecture in TensorFlow.js, and finally we'll train the model using stochastic gradient descent as the optimizer and mean squared error as our loss function. Our dataset for this project is self-made, but you can use your own dataset if you like. The input to the model will be denoted by x, and its values will be 1, 2, 3, 4, 5, 6. The outputs for these inputs will be denoted by y, and they are 100, 200, 300, 400, 500, 600. Our goal is to build and train a linear regression model to predict the values of y given x, and the predictions will be denoted by y_hat. So if you're ready and excited for this project, let's start coding.

12. Preparing the data: If you've downloaded the resources, you'll find two files present there: the index.html file and the script.js file. I've opened up both of these files using a text editor, which is Sublime Text. The index.html file contains an HTML boilerplate, with the TensorFlow.js library being imported from a CDN URL as the source; we're also importing the script.js file, where we will be writing our JavaScript code. Now let's look at the script file. Here we're creating the independent variable x and the dependent variable y as 1D tensors. For x, we have the values 1, 2, 3, 4, 5, 6, and for y we have the values 100, 200, 300, 400, 500, 600. We can look at both of these by printing them with the print method of these tensors. If I open up the index.html file and look at the console, we can see both of these tensors being printed out. The objective of this linear regression project is to predict the value of y given only x as input: we will be trying to predict 100 given 1 as an input, 200 given 2 as an input, and so on. So this is it for preparing the data, and in the next lesson we'll be building our model architecture. See you there.
13. Building the Linear Regression model architecture: Hello and welcome to this lesson, where we will be learning how to build the linear regression model architecture using the sequential model of the TensorFlow.js library. Building off our previous lesson, I've already defined the independent variable x and the dependent variable y over here. Next, we're creating an asynchronous function called linearRegressionModel and passing in two arguments: x, which is the independent variable or the input to the model, and y, which is the dependent variable, the actual output that we should expect from the model. Then we're creating a linearModel variable, which is a sequential model. The sequential model is used for stacking layers of neurons in TensorFlow.js; however, for this purpose we'll only be using one layer with one neuron to create a linear regression model. Here we define the layers as a single layer called a dense layer. A dense layer has every single input connected to every single neuron of the layer, and in this case we only have one neuron, which is defined by units set to one. So our dense layer has only one neuron. We've also set useBias to true, which means that we will be using a bias in this neuron. Our input shape over here is defined as one, which means that the one neuron we have defined will take an input of shape one, a one-dimensional tensor, since x over here is a one-dimensional tensor. This is our model architecture, and it is a linear regression model because we're not defining any activation function: the one neuron that we're initializing computes y = m * x, and since we set useBias to true, the equation becomes y = m * x + b. Let's make a prediction from the model by passing in the input values. Keep in mind that we have not trained the model, so the predictions will not be accurate; however, we can try it out to see whether the model is actually working or not. I'm calling the linear regression function over here and passing in x and y as the arguments. If I run this by opening the index.html document and pressing Control+Shift+J, we can see a list of tensors: six different outputs for the six different inputs of x. If I reload this, we can see that the predictions change. This is because the weight and the bias of the neuron have not been trained; we will need to train the model to fix the values of the weight and bias so that the predictions come out more accurately. So let's learn how to do that in the next lesson of this project.
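A sketch of the architecture described here, assuming the x and y tensors from the data-preparation lesson. Reshaping the 1D input to shape [6, 1] so that each value becomes its own row is an assumption; layers models expect an explicit batch dimension, and the transcript does not show the exact call:

async function linearRegressionModel(x, y) {
  // A single dense layer with one neuron, a bias, and no activation:
  // the neuron computes y = m * x + b
  const linearModel = tf.sequential({
    layers: [tf.layers.dense({units: 1, useBias: true, inputShape: [1]})]
  });

  // Untrained prediction; the reshape gives each input its own row
  linearModel.predict(x.reshape([-1, 1])).print();
}

linearRegressionModel(x, y);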
14. Training the Linear Regression model: Welcome to this lesson, where we will be training our model. Training the model is quite simple: we just have to define a loss and an optimizer, and we can define them in the following format. Here we define the loss as mean squared error and the optimizer as stochastic gradient descent, which can be called off the train module of the TensorFlow library, and we set the learning rate of the SGD optimizer to 0.005. We assign these values to the loss and optimizer variables. Then we compile the linear model using its compile method, passing in the loss and the optimizer as arguments for the linear model that we created in the previous lesson. Finally, we train our model using the fit method of the linear model, which is the sequential model. Here we pass in the input values as x and the actual output values as y, and we define the epochs as 2,000. We write the await keyword over here because this is an asynchronous call and a promise is being returned. If you do not know JavaScript, don't worry: this is simply a way of saying that we want to wait for this line to finish executing before moving on. Let's finally see the predictions of the model once it has been fitted, that is, once the model has been trained. For this, we just call the predict method of the linear model, the sequential model, passing in the input values x, the values that we trained the model on, and we print the result out. If everything has worked well, when we call the linearRegressionModel function and pass in the values x and y, the model should give accurate predictions. So I'll save this and open the index.html document. It will take some time to load, but once the training is done we can see the output. I'll open the console by pressing Control+Shift+J, and we have the output as 100, 200, 300, 400, 500, 600 when rounded, which is quite accurate for the linear regression model; we've successfully trained our model. As a quick recap, let's look at what we have done. First, we defined our independent variable x and our dependent variable y as tensors. Then we created an asynchronous function, linearRegressionModel, which takes the arguments x and y, the independent and dependent variables, and inside it we created a sequential model with a single layer: a dense layer with one neuron and bias enabled, whose input shape is one, since we take a 1D tensor, x, as the input to this layer. Then we defined the loss and optimizer as mean squared error and stochastic gradient descent with learning rate 0.005, and we compiled the sequential model, or linear model, by passing the loss and optimizer as its parameters. Finally, we trained the model using the fit method, passing in the input values, the actual output values, and the number of epochs, and then we called the predict method on the trained model, passed in the input values, and printed the result, as we just saw. In this way, we have successfully trained our linear regression model.
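Continuing the sketch from the previous lesson, with compile, fit, and the final prediction added; the reshapes of x and y to [6, 1] are again an assumption about how the batch dimension is supplied:

async function linearRegressionModel(x, y) {
  const linearModel = tf.sequential({
    layers: [tf.layers.dense({units: 1, useBias: true, inputShape: [1]})]
  });

  // Loss and optimizer, then compile the model
  const loss = 'meanSquaredError';
  const optimizer = tf.train.sgd(0.005);
  linearModel.compile({loss, optimizer});

  // Train for 2,000 epochs, then print the (now accurate) predictions
  const xs = x.reshape([-1, 1]);
  const ys = y.reshape([-1, 1]);
  await linearModel.fit(xs, ys, {epochs: 2000});
  linearModel.predict(xs).print();
}

linearRegressionModel(x, y);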
15. Viewing the change in loss: Hello and welcome to this lesson. In this lesson, we'll be trying to view the change in loss while the model is being trained. As before, we have all the code that we had written, with the exception of this line where we call the fit method. In the fit method, we can pass in a callback function so that we can see the change in loss with respect to each step of the optimization. Here, as before, we pass in the input values as x and the actual output values as y, and we define the epochs as 2,000 in the fit method; but as an addition in this lesson, we're using the callbacks property. The callbacks property can be used to call a function at the end of each epoch of training. So at the end of each epoch, we call an asynchronous function with the epoch and the logs passed in as parameters, and we just log the following line to the console: we write the string "Epoch" and print out the epoch, then we write the string "Loss" and print out the loss by reading logs.loss; the logs object contains our loss, and that is why we read it off this variable. Then we wait for the next frame with the await keyword before the next epoch continues. With this callback, we can see how the model is being trained and how the loss changes in each epoch of training. Let's open up the index.html file and look at the console. Okay, as we can see, we are at about epoch 200 and the loss is steadily decreasing; this means that our network, our linear regression model, is being trained, and as the number of epochs increases, the loss decreases more and more. If we wait a while until it reaches 2,000 epochs, we'll get the final loss of the model. There we have it: this is the final loss of the model, and we have printed out predictions that are as close as we could get to the actual values. In this way, we can view the change in loss. Again, we have just used a callback function on epoch end, we pass in the epoch and the logs as the parameters for this asynchronous function, and we await the next frame before the callback fires on the next epoch. There you have it. In the next lesson, we'll learn how to pass multiple features as input with a single dependent variable. So I'll see you in the next lesson.
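A sketch of the fit call with the epoch-end callback described here, assuming the xs and ys tensors and the linearModel from the previous sketch. The tf.nextFrame() call is an assumption for the "waiting for the next frame" step; it simply yields to the browser between epochs:

// Log the loss at the end of every epoch while fitting
await linearModel.fit(xs, ys, {
  epochs: 2000,
  callbacks: {
    onEpochEnd: async (epoch, logs) => {
      console.log('Epoch: ' + epoch + ' Loss: ' + logs.loss);
      await tf.nextFrame();  // keep the page responsive during training
    }
  }
});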
16. Using multiple features as input: Hello and welcome to the final lesson of this project. In this lesson, we'll be trying to create a linear regression model which takes two features as input and tries to predict the output variable, or dependent variable, using these features. For that, we've created our x variable as a 2D tensor by using the tensor2d method. Here we pass in a nested array: the outer array is over here, and the inner arrays are its elements. If we look at the elements of the inner arrays, the first feature is on the left side of each array and the second feature is on the right side. And if we look at what the two features are over here: the first feature has the values 1, 2, 3, 4, 5, 6, and then we have the values for the second feature, which likewise has six values. Using both of these features, we will be trying to predict y, the dependent variable, which is 100, 200, 300, 400, 500, 600. Let's look at our linear regression model. As before, we've created an asynchronous function, linearRegressionModel, to which we pass the input x and the actual output values y. Then we create a sequential model, or linear model, with a single layer, which is a dense layer with a single neuron given by units set to one, and we enable the bias by setting useBias to true. But here we define the input shape as two, because we're inputting a 2D tensor with two features per example. Next, we define the loss and optimizer as before: our loss is mean squared error and our optimizer is stochastic gradient descent with learning rate 0.005. Then we compile the model using the compile method, passing in the loss and optimizer. Finally, we fit our model using the input values x and the actual output values y, and we train it for 2,000 epochs. At the end, we predict the output values from the model by passing the input values to the predict method, and we print out our predictions. If I call the linearRegressionModel function, passing in the input values x and the actual output values y, we get the output in our console. If I open up the index.html file and look at the console by pressing Control+Shift+J, it takes some time to show the prediction because it is still training, so let's wait a moment. Okay, we have our predictions, and again these are very close to the actual output values. So here we have successfully created a linear regression model that takes two inputs and gives out a single output, which is the variable, or the values, that we wanted to predict. This is it for this project; I hope you've learned how to use the sequential model effectively.
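A sketch of the two-feature setup described here. The second feature's values are not legible in the transcript, so the numbers in the inner arrays are illustrative only:

// Two input features per example, so x is a 2D tensor of shape [6, 2]
const x = tf.tensor2d([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5], [6, 6]]);
const y = tf.tensor1d([100, 200, 300, 400, 500, 600]);

async function linearRegressionModel(x, y) {
  // Same single-neuron dense layer, but now with an input shape of two
  const linearModel = tf.sequential({
    layers: [tf.layers.dense({units: 1, useBias: true, inputShape: [2]})]
  });
  linearModel.compile({loss: 'meanSquaredError', optimizer: tf.train.sgd(0.005)});
  await linearModel.fit(x, y.reshape([-1, 1]), {epochs: 2000});
  linearModel.predict(x).print();
}

linearRegressionModel(x, y);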
17. Logistic Regression using Sequential Model: Welcome to this machine learning project, where we will be performing logistic regression using the sequential model of TensorFlow.js. Our logistic regression model will be made using a sequential model consisting of a dense layer with one neuron. It will take its input as a 1D tensor, and it will output the probability as a 1D tensor as well. The neuron takes in an input and outputs the class probability by calculating the sigmoid of the linear combination of the weight and the input, plus the bias. The sigmoid function gives its output within the range 0 to 1, and its formula is sigmoid(z) = 1 / (1 + e^(-z)), where z = w * x + b is our linear combination plus bias. The workflow for this project is as follows: we will start by preparing the data, then we'll build the model architecture in TensorFlow.js, and finally we'll train the model using stochastic gradient descent as the optimizer and binary cross entropy as our loss function. Our dataset for this project is self-made, but you can use your own dataset if you like. We will have the independent variable, or the input to the model, defined as x, and its values will be 1, 2, 3, 4, 5, 6. Given these inputs, we will be trying to predict the actual class, given by y, and the classes are 0, 0, 0, 1, 1, 1 for these inputs. Our goal is to build and train a logistic regression model to predict the values of y given x. So if you're ready and excited for this project, let's start coding.

18. Preparing the data: If you've downloaded the resources with this lesson, you'll find two files: the index.html file and the script.js file. If I open up both of these files using a text editor, which is Sublime Text, we can see their contents. The index.html file contains an HTML boilerplate where we're importing the TensorFlow.js library using the content delivery network as the source, and we're also importing the main script file, where we will be writing all of our JavaScript code for this project. If we have a look at the script.js file, we're preparing the data for this project. Here we've defined x as the independent variable with the values 1, 2, 3, 4, 5, 6, and this is a 1D tensor. We have also defined the dependent variable y as a 1D tensor, using the tensor1d method and passing in the values 0, 0, 0, 1, 1, 1. As this is a logistic regression, or classification, project, we have two classes over here, 0 and 1, and our objective with this project is to classify the inputs into their actual classes, which is either 0 or 1. So if we pass in the input 1, the output should be 0; similarly for 2 and 3, it should be 0. For 4, we should have 1 as the output class; for 5, again 1; and for 6, again 1. Finally, let's look at both of these tensors by calling the print method on them and viewing them in the console. If I open up the index.html file and open the console by pressing Control+Shift+J, we can see both of these tensors being printed out. So we have successfully prepared the data for this logistic regression project. I'll see you in the next lesson, where we will be building the logistic regression model needed for this project. See you there.

19. Building the Logistic Regression model architecture: Hello and welcome to this lesson, where we will be building the logistic regression model architecture using the sequential model of TensorFlow.js. Here we have already defined our independent variable x and our dependent variable y for the logistic regression problem, and we've also created an asynchronous function, logisticRegressionModel, which takes two arguments: the independent variable x and the dependent variable y. Then, over here, we're creating a sequential model whose layers consist of a single dense layer with one neuron, where the bias is enabled by setting the useBias property to true, and the input shape is one, since we're passing in the input as a 1D tensor. Finally, we set the activation function to sigmoid, which will give us a probability in the range 0 to 1. Then I call the predict method off the model just to check whether this model is actually working or not, passing the input, the independent variable, as the parameter to the predict method; finally, we print the result using the print method. Mind that we haven't trained the model yet, so the predictions may not be accurate, but it is still worthwhile to see which values the model predicts. So let's see: I'll open up the index.html document, and if I press Control+Shift+J, we have our predictions over here. As I said, we are getting the predictions in the range of 0 to 1, which means that anything below 0.5 represents that the input is of class 0, and anything equal to 0.5 or above represents that the input is of class 1. Over here we can see the prediction probabilities, and if I reload this web page they change, because we have yet to train the model. So I'll reload it, and here we have another, different set of probabilities. One last time, I reload the web page, and again the predictions change. In the next lesson, we'll finally train our logistic regression model to perform the classification task. See you there.
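A sketch of the logistic regression architecture described here; the reshape to [6, 1] is the same batch-dimension assumption as in the earlier sketches:

// Logistic regression: one dense neuron with a sigmoid activation
const x = tf.tensor1d([1, 2, 3, 4, 5, 6]);
const y = tf.tensor1d([0, 0, 0, 1, 1, 1]);

async function logisticRegressionModel(x, y) {
  const logisticModel = tf.sequential({
    layers: [tf.layers.dense({
      units: 1, useBias: true, inputShape: [1], activation: 'sigmoid'
    })]
  });

  // Untrained class probabilities (between 0 and 1); they change on every reload
  logisticModel.predict(x.reshape([-1, 1])).print();
}

logisticRegressionModel(x, y);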
20. Training the Logistic Regression model: Hello and welcome to this lesson, where we will be training our logistic regression model. Till now, we have already defined the independent variable and the dependent variable, and we've also made an asynchronous function called logisticRegressionModel, where we have built the model architecture. Now, to train the model, we need to specify the loss function and the optimizer. Here we define them using this syntax: we define the loss as binary cross entropy and the optimizer as stochastic gradient descent with a learning rate of 0.005, and we assign these values to the loss and optimizer variables. Then we compile our logistic model, which is this sequential model, by passing the loss and optimizer variables to the compile method. Finally, we fit our model, which means that we train it, by passing in the independent variable x, the dependent variable y, and the number of epochs for training, which is 2,000 over here. We use the await keyword so that we wait for the training to finish when the function is run. Finally, after training, we predict the output classes of the independent variable by passing x to the predict method, and we print the result out. When I call the logisticRegressionModel function and pass in the independent variable and the dependent variable as arguments, we shall see the predictions after the model training in the console. So let's have a look: I'll open the index.html file and press Control+Shift+J. This will take some time because the training is going on, and after a while the predictions come out. Here we have our predictions. As I've said, if the prediction probability is below 0.5, the input belongs to class 0, and if the prediction probability is equal to 0.5 or above, then the predicted class of the input is 1. Over here, we can see that the first two input values are being correctly predicted as class 0, since the probability is below 0.5; however, the third point has been misclassified, since its probability comes out above 0.5, so the predicted class for this input is 1. So we have a logistic model, but the output is not very accurate, since the prediction probability is wrong over here. We can also see that, although the first two inputs are predicted as class 0, their prediction probabilities still lean a little towards 0.5, and preferably we would want them below 0.2 for all of these predictions. In the next lesson, we'll learn how to build a deep neural network classifier in order to accurately classify the inputs into the output classes. See you in the next lesson.
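A sketch of the training step described here, continuing inside the logisticRegressionModel function from the previous sketch:

// Compile and train the logistic regression model
const loss = 'binaryCrossentropy';
const optimizer = tf.train.sgd(0.005);
logisticModel.compile({loss, optimizer});

await logisticModel.fit(x.reshape([-1, 1]), y.reshape([-1, 1]), {epochs: 2000});

// Probabilities below 0.5 -> class 0, at or above 0.5 -> class 1
logisticModel.predict(x.reshape([-1, 1])).print();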
21. Creating a Deep Neural Network Classifier: Welcome to the last lesson of this project. In this lesson, we'll be creating a deep neural network classifier that will classify the output class given the input. We have already created our independent variable x and our dependent variable y in the previous lessons. Now, we create an asynchronous function called deepClassificationModel, which takes two arguments, x and y, the independent and the dependent variables. Then we create a sequential model consisting of three layers. The first layer is a dense layer; its input shape is one, because we'll be taking in the 1D tensor x as the input. The next layer is also a dense layer, consisting of 20 neurons. And finally, we have the last layer, consisting of a single neuron whose activation function is sigmoid, so the output will be a probability ranging between 0 and 1: anything below 0.5 is class 0, and anything equal to or above 0.5 is class 1. After initializing the sequential model, we assign it to the variable classifierModel. Then, since we need a loss function and an optimizer, we define both of them here: the loss is set to binary cross entropy, and the optimizer is set to stochastic gradient descent with a learning rate of 0.01. Then we compile the model by passing the loss and optimizer variables as inputs to the compile method. Finally, we train the model using the fit method, passing the input as x and the output, the actual class that we want to predict, as y, with the total number of epochs set to 2,000. At the end, we print the predictions after training by using the predict method, passing in the input variable, and printing the result out with the print method. So when I call the function deepClassificationModel, passing in the input x and the dependent variable y as arguments, we should get a list of predictions for these values. Let's open up index.html and see the predictions. Since this is a deep neural network, it will take some more time to train. I'll open up the console by pressing Control+Shift+J and wait a while. There we have our predictions, and as you can see right off the bat, the predicted probabilities are much better than the logistic regression model's. For the inputs 1, 2, 3, the model outputs prediction probabilities below 0.2, which was our intended goal, and we can see that all three of these inputs give class 0, since the prediction probability is below 0.5. Similarly, for 4, 5, 6, we get prediction probabilities above 0.8, which is much higher, and this concludes that the class for the inputs 4, 5, 6 is 1. There we have it: we have successfully made our deep neural network classifier using TensorFlow.js. As a next step, I would suggest that you play around with the layers, and maybe even add some more, to get the predictions highly accurate. This is it for now, and I hope that with this knowledge you'll be able to create a deep neural network classifier for your own datasets.
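A sketch of the deep classifier described here. The transcript only clearly states 20 neurons for the second layer, so the first hidden layer's size is illustrative; the reshapes are the same batch-dimension assumption as before:

const x = tf.tensor1d([1, 2, 3, 4, 5, 6]);
const y = tf.tensor1d([0, 0, 0, 1, 1, 1]);

async function deepClassificationModel(x, y) {
  // Three stacked dense layers ending in a single sigmoid neuron
  const classifierModel = tf.sequential({
    layers: [
      tf.layers.dense({units: 20, inputShape: [1]}),  // first layer size assumed
      tf.layers.dense({units: 20}),
      tf.layers.dense({units: 1, activation: 'sigmoid'})
    ]
  });

  classifierModel.compile({
    loss: 'binaryCrossentropy',
    optimizer: tf.train.sgd(0.01)
  });

  await classifierModel.fit(x.reshape([-1, 1]), y.reshape([-1, 1]), {epochs: 2000});
  classifierModel.predict(x.reshape([-1, 1])).print();
}

deepClassificationModel(x, y);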
The second file is the script.js file, where we will be writing all of our JavaScript code. I've opened up both of these files using a text editor, which is Sublime Text, and here we can see the contents of the index.html file. In this lesson, we will be getting the image of a dog from the internet, and we'll also be preparing for the next lessons so that we can perform image classification using MobileNet, which includes loading the MobileNet model.

So here we have our regular HTML tags, since this is an HTML document, and we have a head tag containing the title of the web page, which is "TensorFlow.js tutorial". Then we're using a script tag to load the latest version of TensorFlow.js by specifying the source as this CDN URL, and we're also loading the MobileNet model by using another script tag and specifying the source as this CDN URL. If we open up both of these URLs in the web browser, we can see a minified JavaScript file. Here we have the minified JavaScript file for the MobileNet model, and similarly, if I copy and paste this URL into the web browser, we get the TensorFlow.js minified JavaScript file. Now let's move on. Then we're importing the main script file by using another script tag and specifying the source as script.js, since this file is present locally. In the body, we have created a div, inside which we have another div with the id console. This is needed for the MobileNet model to work; it's just a convention we have to follow in order to make the model work in a later lesson. Then we're getting the image of the dog by using the image tag, keeping the id as img, setting its crossorigin attribute to anonymous, and specifying the source as this URL. If I copy and paste this URL into my web browser, we can see that this is the image of the dog. And this is it for our HTML file; we will be using this file for the next two lessons of this project.

Next, let's look at the script.js file. For now, I'm just logging out "Hello TensorFlow.js" to make sure that both of these imports are working and we do not get any errors in the process. So if I open the index.html file by double-clicking it, here we have the image of the dog, since we had imported it over here, and if I press Control+Shift+J, we can see in the console that "Hello TensorFlow.js" is being printed out and there are no errors present. So we have successfully fetched the image of the dog from the internet, and we'll be trying to predict which breed of dog this is by using the MobileNet model in the upcoming lessons. I'll see you then.

24. Loading the model:

So far, we have fetched the image of the dog from the internet. Now let's load our MobileNet model using the following lines of JavaScript code. Here in the script.js file, I'm defining the model variable by using the keyword let. Then I'm creating an asynchronous function called app, where we are loading the MobileNet model by calling the load method of mobilenet and assigning it to the model variable. We're using the await keyword so that we wait for this asynchronous call to complete and this line of code to finish executing. And in this way, we have successfully loaded our model. So we've initialized our MobileNet model with just a single line of code, which is mobilenet.load(), and this is why TensorFlow.js is so popular among the deep learning community: it makes model deployment very easy.
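As a quick sketch, the loading code described here might look like this in script.js, assuming the mobilenet global is provided by the CDN script tag from the previous lesson:

```javascript
let model;

async function app() {
  // Load the pre-trained MobileNet model; await pauses here until it has downloaded.
  model = await mobilenet.load();
}
```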
So in the next lesson, let's finally predict the breed of the dog by using the image as an input to the model. I'll see you in the next lesson.

25. Predicting the dog breed:

Welcome to the final lesson of this project. In this lesson, you will be surprised to see how easy it is to make a prediction using the MobileNet model, or any pre-trained model, with TensorFlow.js. So here we have defined our model variable using the keyword let. Then we've created our asynchronous function called app, and we've loaded the MobileNet model using the load method of the mobilenet library and assigned it to model. Then we're getting the image by using the JavaScript call document.getElementById and passing in the id as img; we had specified the id as img for the img tag. We're assigning the image to the dogImage variable. Then we're predicting the top three classes for this image by using the classify method of the MobileNet model, which is defined here as model. In the classify method, we're passing in the dog image, and we're awaiting the predictions to be made. Once the predictions have been made, they're assigned to the predictions variable, and we're logging the predictions out to the console. So if we call the function, all of these lines of code will be executed, and we should see the predictions from the model in the console.

Let's open it up. So here we have the image of the dog, and if I open up the console by pressing Control+Shift+J, I can see the top three predictions from the model. We have an array of three elements; I'll open this up, and here we have our predictions. The top prediction is Chihuahua, which is absolutely correct, and the probability of the prediction is 0.878, which means the model is 87.8% sure that this image is of a Chihuahua. Great, we have successfully classified the image using MobileNet.

But before we end this lesson, let's look at the other two top predictions as well. As the second prediction, we have the Pembroke Welsh Corgi with a probability of 0.096, which is 9.6%, and the third prediction is the Basenji, at a probability of 0.007, which is 0.7%. Let's look at what these dogs look like and why the model is picking these breeds as its second and third most likely predictions. If I copy this and search for it on Google and go to the images, okay, I can see the similarity right away. Here we have some golden fur on the back, some white fur on the neck, and golden fur on the head, which is quite similar to the image of our Chihuahua. Then let's check out the Basenji. If I copy and paste this and go to the images, it's a similar story: we have some brown fur, some white fur over here, and then again some golden fur on the head. It does look a little like our image, but it certainly doesn't have the same face, so it's a good thing the MobileNet model predicted it at only 0.7%, well short of the Chihuahua. So this is it for the project, and I hope the knowledge you've gained here helps you build other awesome projects using TensorFlow.js.
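For reference, here is a minimal sketch of what the finished script.js described in these last two lessons might look like. The dogImage and predictions names follow the narration as best I can tell, and the mobilenet global is assumed to come from the CDN script tag in index.html.

```javascript
let model;

async function app() {
  // Load the pre-trained MobileNet model.
  model = await mobilenet.load();

  // Grab the dog image declared in index.html as <img id="img" ...>.
  const dogImage = document.getElementById('img');

  // classify() resolves to the top predictions, e.g. [{ className, probability }, ...].
  const predictions = await model.classify(dogImage);
  console.log(predictions);
}

app();
```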