Machine Learning Deep Learning Model Deployment | Engineering Tech | Skillshare


Machine Learning Deep Learning Model Deployment

Engineering Tech, Big Data, Cloud and AI Solution Architect


Lessons in This Class

51 Lessons (3h 59m)
    • 1. Introduction

      2:09
    • 2. What is a Model?

      1:19
    • 3. How do we create a Model?

      2:13
    • 4. Types of Machine Learning

      3:21
    • 5. Creating a Spyder development environment

      2:39
    • 6. Python NumPy Pandas Matplotlib crash course

      14:21
    • 7. Building and evaluating a Classification Model

      14:42
    • 8. Saving the Model and the Scaler

      4:08
    • 9. Predicting with deserialized Pickle objects

      3:04
    • 10. Using the Model in Google Colab environment

      4:20
    • 11. Flask REST API Hello World

      4:51
    • 12. Creating a REST API for the Model

      5:05
    • 13. Signing up for a Google Cloud free trial

      1:28
    • 14. Hosting the Machine Learning REST API on the Cloud

      5:32
    • 15. Deleting the VM instance

      0:33
    • 16. Serverless Machine Learning API using Cloud Functions

      9:18
    • 17. Creating a REST API on Google Colab

      4:09
    • 18. Postman REST client

      2:07
    • 19. Understanding Deep Learning Neural Network

      4:53
    • 20. Building and deploying PyTorch models

      10:06
    • 21. Creating a REST API for the PyTorch Model

      3:41
    • 22. Deploying TensorFlow and Keras Models with TensorFlow Serving

      7:09
    • 23. Understanding Docker containers

      2:29
    • 24. Creating a REST API using TensorFlow Model Server

      7:14
    • 25. Converting a PyTorch model to TensorFlow format using ONNX

      3:17
    • 26. Installing Visual Studio Code and Live Server

      2:31
    • 27. Loading TensorFlow.js on a web browser

      2:36
    • 28. Deploying TensorFlow Keras models using JavaScript and TensorFlow.js

      3:07
    • 29. Converting text to numeric values using bag-of-words model

      4:28
    • 30. Tf-idf model for converting text to numeric values

      4:11
    • 31. Creating and saving text classifier and tf-idf models

      10:07
    • 32. Creating a Twitter developer account

      2:21
    • 33. Deploying tf-idf and text classifier models for Twitter sentiment analysis

      5:37
    • 34. Creating a text classifier using PyTorch

      3:27
    • 35. Creating a REST API for the PyTorch NLP model

      3:50
    • 36. Twitter sentiment analysis with PyTorch REST API

      4:51
    • 37. Creating a text classifier using TensorFlow

      1:28
    • 38. Creating a REST API for TensorFlow models using Flask

      2:44
    • 39. Serving TensorFlow models serverless

      6:42
    • 40. Serving PyTorch models serverless

      3:05
    • 41. Model as a mathematical formula

      9:33
    • 42. Model as code

      6:59
    • 43. Storing and retrieving models from a database using Colab, Postgres and psycopg2

      10:16
    • 44. Creating a local model store with PostgreSQL

      5:58
    • 45. Machine Learning Operations (MLOps)

      2:06
    • 46. MLflow Introduction

      0:55
    • 47. Tracking Model training experiments with MLflow

      8:48
    • 48. Why track ML experiments?

      0:53
    • 49. Running MLflow on Colab

      3:16
    • 50. Tracking PyTorch experiments with MLflow

      3:02
    • 51. Deploying Models with MLflow

      2:01

30 Students

-- Projects

About This Class

In this course you will learn how to deploy Machine Learning Models using various techniques. 

Course Structure:

  1. Creating a Model
  2. Saving a Model
  3. Exporting the Model to another environment
  4. Creating a REST API and using it locally
  5. Creating a Machine Learning REST API on a Cloud virtual server
  6. Creating a Serverless Machine Learning REST API using Cloud Functions
  7. Deploying TensorFlow and Keras models using TensorFlow Serving
  8. Deploying PyTorch Models
  9. Creating a REST API for PyTorch and TensorFlow Models
  10. Converting a PyTorch model to TensorFlow format using ONNX
  11. Deploying TensorFlow Keras models using JavaScript and TensorFlow.js
  12. Tracking Model training experiments and deployment with MLflow

Python basics and machine learning model building with scikit-learn are covered in this course. TensorFlow and PyTorch model building is not covered, so you should have prior knowledge of those. The focus of the course is mainly model deployment.

Meet Your Teacher


Engineering Tech

Big Data, Cloud and AI Solution Architect

Teacher

Hello, I'm Engineering Tech.


Related Skills

Technology, Data Science

Class Ratings

Expectations Met?
  • Exceeded!
    0%
  • Yes
    0%
  • Somewhat
    0%
  • Not really
    0%


Transcripts

1. Introduction: Welcome to this machine learning and deep learning model deployment course. This is going to be a very exciting journey. In this course, you will learn how to take your models to production and how to make your models accessible to others. The focus of the course is model deployment; however, we'll also build a very simple model so that we can demonstrate how to take models to different environments using various techniques. You will learn how to track machine learning experiments and deploy models using MLflow, which is a popular MLOps tool. Serverless is the next big thing in cloud computing, and in this course you'll understand how to build serverless REST APIs for your machine learning models. If you're completely new to Python, you will get a crash course in Python and the related libraries. We have also explained in detail how to build models from scratch using scikit-learn and how to build a neural network using TensorFlow, so a machine learning or deep learning background is not required for this course; however, prior experience building models will help you pick up the deployment concepts quickly. We also have a section on natural language processing, where you'll learn how to deploy NLP models to do sentiment analysis on real-time tweets. For some of our labs we'll be using Google Cloud, or GCP. You don't need any prior background in GCP; we've provided step-by-step instructions on setting up a GCP free trial account and trying out various things. This course is very hands-on and touches on many practical aspects of machine learning and deep learning. Let's dive in and get started.

2. What is a Model?: Let's understand machine learning. In machine learning, we read patterns from data using a machine learning algorithm and create a model, and then we use that model to predict the output for new data. For example, if a model is trained to predict customer behavior, you can feed in a new customer profile and it can predict whether the customer will buy or not based on age, salary and other parameters. If a model is trained to classify an image as a cat or a dog, you can feed it a new image and it will predict whether it's a cat or a dog. A sentiment analysis model can read text and predict whether the sentiment is positive or negative. So what exactly is a model? A model can be a class or an object, or it can be a mathematical formula. And how do you deploy and use a model? The model can be stored in the file system in binary format, or it can be stored in a database column in blob or other formats. You can take the model, create a REST API and make it accessible to applications over the HTTP protocol, or you can simply take the model code and embed it in another program.

3. How do we create a Model?: Let's take a closer look at the machine learning process and understand when a model is ready for deployment. In machine learning, the algorithm looks at the data, derives patterns and creates a model. Let's start from the data. Typically we receive raw data and then do data preprocessing, which involves steps like data cleansing, data standardization, and fixing issues with null values, missing records, unknown values and various other things. During data preprocessing we also convert categorical values to numerical values, because machine learning models work with numerical data.
This step can be performed within the machine learning team, or it can be performed by another team, for example a team which specializes in big data and Spark, a very popular technology for data preprocessing. For many models we also do feature scaling, that is, bringing all the features to the same scale so that the model does not get biased or influenced by a particular feature. Once that is done, our data is ready for the machine learning algorithm. Depending on the problem we're trying to solve, we might repeat this process several times to get the right data for the algorithm. We feed the data to an algorithm and get a model. But is that the final model? Once we get a model, we test its accuracy, and we refine the model to get higher accuracy. We might go back to the data preprocessing step, generate the data again, feed it to the algorithm again, and get a model with the desired accuracy. Apart from accuracy, we also check whether the model is overfitting or underfitting. Once we are happy with the model, we deploy that particular version to production; that is the final model, and that is what gets used by different applications. In this course, our focus is more on learning how to deploy the model so that it can be used in production by various other applications.

4. Types of Machine Learning: Let's understand different types of machine learning algorithms. We talked about learning from customer behavior based on a certain profile and applying that learning; let's look at it in detail. A customer profile could include age, salary, country and gender. Based on that, let's say we know whether each customer purchased in the past or not: one stands for purchased, zero for not purchased. If we feed that information to a machine learning algorithm, it will look at this past purchase data, look at the different features and the behavior in terms of purchased or not, and create a model. Here the output is always one or zero: one means purchased, zero means not purchased. This type of machine learning is called classification, where we predict one of a certain number of classes from the input data. Let's look at another example of classification: when we feed an image to a model and the model recognizes whether it is a cat or a dog, that is also classification. If we train an algorithm with images belonging to three classes (cat, dog and cow) and create a model, that is also classification, because the prediction always comes from a limited set of values. There is another type of machine learning called regression, where instead of predicting a class, we predict a continuous value, such as a house price. You might have information about the area, the number of bedrooms, and the distance to the bus stop or city center; if you have to create a model which predicts the house price from that, that type of machine learning is called regression, where you predict a continuous value instead of predicting which class the output belongs to. Classification and regression are called supervised machine learning, because the algorithm learns from the data: it learns from a set of features and the corresponding behavior. You feed it information about house prices for a set of features, or about whether a customer bought or not, the algorithm learns from that, and then it predicts the output for a new set of variables.
This is supervised machine learning, where you tell the algorithm what to look for in a particular dataset. There is another type called unsupervised machine learning, where you feed certain data to an algorithm but you don't say what to look for. For example, you could feed in age, salary, country, gender, and how much the person is spending, and ask the algorithm to group the customers so that you can take decisions based on the groups. Typically you create clusters using unsupervised machine learning: you could create clusters like young spenders or high-income high spenders, and based on that decide which customer group to target in your marketing campaign. In supervised machine learning, we split the data into training data and test data: typically 70 to 80% of the data is kept for training the model, and the remaining 20 to 30% is used for testing it. Let's create a development environment and load some data to start our machine learning process.

5. Creating a Spyder development environment: We'll use Anaconda Spyder for machine learning development. Search for "download Anaconda" and go to the Anaconda website. Click on Pricing, scroll down and select the Individual Edition, which is free. Click Learn More, then Download, and pick the right version for your operating system. Once downloaded, run the installer, accept the terms and conditions, and select an installation directory; make sure there are no spaces in the directory path. I would recommend selecting both checkboxes so that Python is added to your environment variables. Click Install; the installation takes about 20 to 30 minutes. Once completed, click Next; we don't need to select anything else, so click Finish. Now search for Anaconda and launch Spyder. We'll first create a working directory where we will store all our files, a directory under the user folder. Go to the top right-hand corner in Spyder and select that directory as the working directory. Now let's create a new Python file, write a hello world print statement, save the file with a .py extension, select it and run it. You can run it using the run icon, and we can see hello world in the console.

6. Python NumPy Pandas Matplotlib crash course: We'll be covering Python, NumPy, Pandas and Matplotlib in this lab. If you are already familiar with these Python libraries, you can skip this lecture and move to the next one. Let's create a new file in Spyder and start coding. In Python, you can declare variables without giving data types, and if you later put a string value into a variable that held a number, Python will not complain. In Spyder's Variable Explorer you can see all the variables and their values. Let's say a equals 3 and b equals 5, then print a plus b. Select these lines and run the selection, and we can see the output getting printed. In Python you can perform all kinds of arithmetic operations. Python has a data type called list: you declare it within square brackets and specify a list of elements. You can then grab elements by specifying the index number, and index numbers start at 0: index 0 gives the first element, index 1 the second, and so on. To grab the last element, you specify -1. You could also specify 3 in this example, but -1 also gives you the last element; that way, when the list is very long, you can easily grab the last element without counting. And if you use -2, you get the second-to-last element, which is 30 here.
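For reference, a minimal sketch of the variable and list operations just described; the values mirror the ones used in the lecture:

```python
# Variables need no declared type; the same name could later hold a string
a = 3
b = 5
print(a + b)        # 8

# A list is declared with square brackets and can mix data types
my_list = [10, 20, 30, 40]
print(my_list[0])   # first element  -> 10
print(my_list[-1])  # last element   -> 40
print(my_list[-2])  # second-to-last -> 30
```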
So this is how you can declare a list and grab different elements, and a list can contain a combination of different data types. In Python, you write an if block by giving a condition, a colon, and hitting Enter; both single quotes and double quotes are fine for strings. Spaces and indentation matter in Python, so the if block ends where the indentation ends. If I write a statement without the indentation, it sits outside the if block and always gets printed; if we change the condition, both lines get printed. There are many ways to write a for loop in Python: for i in range(10) prints the value of i from 0 to 9, ten values in total. You can also loop through a list: for i in my_list prints all the elements of the list. Let's do another operation on the list: pick all the values from the first list, multiply them by three, and create a new list. In Python, you declare a function with the def keyword, for example calculate_sum taking a and b and returning the sum; we can call it with two values and get the sum back. You can also return multiple values, and both receiving variables get populated, so that is how you return multiple values from a Python function. To create a file in Python, use with open and then write some content; you can see the file in the File Explorer with the sample content. Note that the mode is 'w' here, which is write mode. You can add more content with the append mode: execute this and check the file, and you can see more content getting added. If you write again with the 'w' mode, the existing content gets overwritten with the new content. So that is how you create a file in Python. Let's now understand NumPy. NumPy is a popular Python library for scientific computing. First we need to import numpy; we'll import numpy as np, and then we can do all NumPy operations using np. Many of the popular machine learning libraries, such as scikit-learn, are designed to work with NumPy arrays. Let's declare a list and create a one-dimensional array from it; checking the variable shows it is a NumPy array object. We'll now create a two-dimensional NumPy array with four rows and three columns. You can easily reshape NumPy arrays: we can reshape the four-by-three array to two rows and six columns. Note that when you reshape, the original array does not get reshaped; you can store the result in a new array, which has two rows and six columns. You can reshape as long as the total number of elements matches; a shape whose element count doesn't match will fail. If we reshape with (1, -1), we get one row and the maximum number of columns; similarly, we can reshape to one column and the maximum number of rows by specifying (-1, 1), without having to count how many rows or columns there are. So this is how you reshape NumPy arrays; sometimes during machine learning processing you have to extract rows and columns and do operations on them, and this reshaping is very useful. You can also grab a portion of a NumPy array: for example, take the rows from index 1 up to, but not including, index 3, and the columns from index 2 up to, but not including, index 4.
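A minimal sketch of the NumPy operations described above; the array contents are illustrative:

```python
import numpy as np

# One-dimensional array from a Python list
sample_1d = np.array([10, 20, 30, 40])

# Two-dimensional array with 4 rows and 3 columns
sample_2d = np.arange(1, 13).reshape(4, 3)

# Reshaping works as long as the total element count matches (12 = 2 * 6)
reshaped = sample_2d.reshape(2, 6)   # the original array is unchanged
one_row = sample_2d.reshape(1, -1)   # one row, as many columns as needed
one_col = sample_2d.reshape(-1, 1)   # one column, as many rows as needed

# Slicing: rows at index 1 and 2, columns from index 2 up to (not including) 4
subset = sample_2d[1:3, 2:4]
print(subset)
```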
Let's see what we get. The original array doesn't get changed; we store the result in a new array and look at the output. We got the rows at index 1 and 2 and only the column at index 2, because there is no column at index 3. Pandas is a popular Python library for data analysis. You import pandas by saying import pandas as pd, which is the convention. Pandas has one-dimensional arrays known as Series. The advantage with pandas is that you can give your elements labels: for example, I can have 10, 20, 30, 40 and label them a, b, c, d. You can grab an element by specifying the index number, so sample_series[2] gives 30, or by the label, so sample_series['c'] gives the same value. You declare a DataFrame, which is two-dimensional, using the pd.DataFrame function: you can pass a two-dimensional list and you get a DataFrame. With pandas you can also give the rows and columns labels, so now we have row1 to row4 and col1 to col3, and you can grab elements by specifying the row and column names or the index positions. Column three is 3, 6, 9, 12, and you can grab multiple columns by specifying both column names. To grab rows, you use loc and give the row name. To grab a portion of the DataFrame, you specify both row and column names: we're getting col2 and col3 for row2 and row3 from the sample DataFrame. You can also use index locations with iloc instead of labels: row 0 up to, but not including, row 2, and column 1 up to, but not including, column 3. If you don't specify anything, you get all the rows and all the columns, and if you want everything up to the last column, you slice with :-1. So we got 1, 4, 7, 10 and 2, 5, 8, 11, that is column one and column two for all the rows, because we asked for all the columns excluding the last one. A subset of a DataFrame is itself a DataFrame if it is two-dimensional; if you grab a single row or column, it is a Series. In Python you can use type() to check the type of any variable. You can easily convert a DataFrame to NumPy by invoking .values; many machine learning libraries are designed to work with NumPy arrays, so you do the conversion using .values. The result is a NumPy array; you see two opening and closing brackets, so it's two-dimensional, and you can store it in a new variable. We grabbed a portion of the DataFrame and converted it with .values; the same approach converts the last column to a NumPy array. Let's look at an example of a filter operation on DataFrames: get me the rows where the column one values are greater than 4. Wherever the value is greater than 4 the condition gives True, otherwise False, and you apply that condition on the main DataFrame. With pandas you can also easily read CSV files, including files hosted on GitHub, using pd.read_csv. Let's read the store data CSV file from our repository: pandas loads the CSV file into a DataFrame, and if we check the df now, the CSV data has been loaded into a DataFrame.
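A minimal sketch of the pandas operations just described; the labels and values are illustrative and the CSV file name is a placeholder:

```python
import pandas as pd

# Series: one-dimensional, with optional labels
sample_series = pd.Series([10, 20, 30, 40], index=['a', 'b', 'c', 'd'])
print(sample_series['c'])                              # 30, grabbed by label

# DataFrame: two-dimensional, with row and column labels
sample_df = pd.DataFrame(
    [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]],
    index=['row1', 'row2', 'row3', 'row4'],
    columns=['col1', 'col2', 'col3'])

print(sample_df.loc['row2':'row3', ['col2', 'col3']])  # slice by label
print(sample_df.iloc[0:2, 1:3])                        # slice by position
print(sample_df[sample_df['col1'] > 4])                # filter rows

# Convert everything except the last column to a NumPy array for scikit-learn
X = sample_df.iloc[:, :-1].values

# Read a CSV file into a DataFrame (placeholder file name)
# df = pd.read_csv('storepurchasedata.csv')
```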
We can check the file as well. This is how easy it is with pandas: you can load all the rows and columns into a DataFrame. With df.describe() you get various statistical information about the DataFrame, like how many rows there are and what the mean and standard deviation are. You get additional info with df.info(): the data types and the columns. df.head() gives you the first five rows; you can take a sample of a DataFrame by doing head, and you can also specify how many rows you want. This DataFrame has three columns. We can grab the first two columns and convert them to NumPy; in the Variable Explorer, X is the first two columns, because we excluded the last column, and it has been converted to a NumPy array. To convert the last column, you simply grab the last column without specifying a range, and it gets converted to a one-dimensional NumPy array. Finally, let's look at the Matplotlib library. Using Matplotlib, you can visualize the data by drawing different plots; Spyder has a Plots tab where the plots get created. You import matplotlib, declare two lists, and plot x and y. By default we get a line plot; to get a scatter plot, you say plt.scatter. You can give your plot axis labels and a title. Let's create a plot for the data we read from the CSV file: the x-axis will have age and the y-axis salary, and we grab the columns and pass them to the plot function. You can see the plot for the data we read from the CSV file, and this is an example of a histogram. So this is about NumPy, Pandas, Matplotlib and some basic Python. It is not everything that is out there in those libraries, but this much knowledge is sufficient to get started with machine learning programming using Python.

7. Building and evaluating a Classification Model: Later in the course we'll build a text classifier; first, let's understand how classification is done on numeric data. We have the store purchase data: for different customers we have their age, their salary and whether they purchased or not. Based on this data, we'll build a machine learning classification model which will predict whether a new customer with a certain age and salary would buy or not. So age and salary are the independent variables. We'll build the classification model using kNN, trained with this store purchase data. Let's understand the k-nearest neighbors, or kNN, algorithm through a very simple example. Imagine we have cats and dogs shown in a diagram: on the x-axis we have weight and on the y-axis we have height. All the green points are cats, because they have less weight and less height, and all the blue points are dogs. If we know the height and weight of a new animal, say the new one in the center, can we predict whether it's a cat or a dog? The kNN algorithm decides that based on the characteristics of the nearest neighbors. Typically the k value is five: we look at the five nearest neighbors and, based on that, decide which class the animal belongs to. For example, in this case three of the five nearest neighbors are green and two are blue, that is, three cats and two dogs have characteristics similar to the new animal.
So this animal is more likely to be a cat, because the majority of its nearest neighbors belong to the cat class. This is the k-nearest neighbor technique, where the outcome is predicted based on the characteristics shown by the nearest neighbors, and the k value is typically five. Let's apply this technique to the store purchase data. We have the data in the project folder; in Spyder, you select your project folder and then go to Files to see all the source code and data files. This is the store purchase data with which we'll build the classification model. Let's create a new Python file; we'll name it ml_pipeline, and we'll import numpy and pandas. In Spyder, as soon as you type, you get errors and warnings; it says we are not using numpy and pandas yet, which is fine, since we'll be writing that code shortly. Now let's load the store purchase data into a pandas DataFrame. We'll have a training_data DataFrame which stores the store purchase data. Note that we will not be training with the entire dataset: we'll keep some records for training and some for testing, which we'll see next. The training_data DataFrame stores the entire CSV file data. You can run the entire file with the run icon, or run just a selection; let's run the selection. In the Variable Explorer, click on training_data, and we can see that age, salary and purchased have been loaded. Let's get some statistical info about the training data: we have 40 records, and we can see the mean, standard deviation and other statistics. We'll store the independent variables in an array: we take all rows and all columns up to the last column and store them in X, a NumPy array, so X now holds age and salary. We'll put the purchased column, the one we are trying to predict, into another NumPy array, y. This is our dependent variable: zero means not purchased, one means purchased. Now we have the independent variables and the dependent variable in two separate NumPy arrays. Next, using scikit-learn, we'll separate the data into a training set and a test set. We'll use an 80-20 ratio: 80% of the data for training and 20% for testing. Scikit-learn is a very popular library for machine learning with Python, and it comes pre-installed with Anaconda Spyder. If you're using a different Python environment, you might have to install it using pip install scikit-learn; pip install is the command to install any Python library. Anaconda comes with scikit-learn, numpy, pandas and many other libraries required for scientific computation and machine learning. We're using scikit-learn's train_test_split to split the dataset into two parts. Once we do this, we'll have the training set and the test set. The training set has 32 records: we said 80% of the data will be used for training, and we have 40 records in total, of which 32 are used for training. So X_train and y_train have 32 records for training, and X_test has 8 records.
Similarly, y_test has 8 records; this is the data for testing the model. Next we'll feature scale the data so that age and salary are in the same range and the model does not get influenced by salary, which is in a much higher range. Let's run this and look at the scaled data. StandardScaler distributes the data so that the mean is 0 and the standard deviation is 1; now both age and salary are in the same range. Next we'll build a classification model using the k-nearest neighbors technique, with five neighbors and the Minkowski metric. The Minkowski metric here works based on the Euclidean distance between two points, and Euclidean distance is nothing but the shortest distance between two points; that's how the algorithm decides which neighbors are the nearest. Next we fit the training data to the classifier to train it. This is where the model gets trained: the classifier object is trained with the training data, where age and salary are the input variables and purchased is the output variable. The classifier is our model. We'll quickly check its accuracy by predicting for the test data. The classifier has a predict method which takes a NumPy array as input and returns the output in another NumPy array. This is our X_test and this is y_pred: except for one record, the model predicted correctly for all of the records. We can also check the probability of the prediction for this data: wherever the probability is more than 0.5, the model predicts that the customer would buy, otherwise that the customer would not buy. The probability is helpful when you want to sort the predictions and see which customers are more likely to purchase; of these three, the third one is most likely to purchase because the probability is 0.8, or 80%. Next we'll check the accuracy of the model using a confusion matrix. The confusion matrix is a statistical technique to measure the accuracy of a classification model, and the way it works is pretty simple: if the actual value is one and the model predicted one, it's a true positive; if the actual value is one and the model predicted zero, it's a false negative; similarly, zero-zero is a true negative and zero-one is a false positive. This can also be represented in matrix form. Once we know all four counts, we can easily determine the accuracy: accuracy is true positives plus true negatives divided by the total of all four types of predictions. No matter which classification technique you use, kNN or any other, a confusion matrix can be used to calculate the accuracy of the model, and scikit-learn and other machine learning libraries have built-in functions to generate it from the actual and predicted values. Let's create the confusion matrix: we pass the actual values of the test set, y_test, and the predicted values, y_pred, to scikit-learn's confusion_matrix function. In the Spyder Variable Explorer we can see the confusion matrix: three true negatives, four true positives, and only one false positive or negative. So this model is very good, because we have only one wrong prediction out of eight records. Let's calculate the accuracy of the model and print it: it is 0.875, so our model is 87.5% accurate. This model can predict whether a customer with a particular age and salary would buy or not with 87% accuracy.
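For reference, a minimal end-to-end sketch of the training pipeline described in this lesson. The CSV file name, the column layout (age and salary followed by a purchased column) and random_state are assumptions:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report

# Load the store purchase data (placeholder file name)
training_data = pd.read_csv('storepurchasedata.csv')
X = training_data.iloc[:, :-1].values   # age and salary
y = training_data.iloc[:, -1].values    # purchased: 1 or 0

# 80/20 split: 32 training and 8 test records on a 40-record dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Scale age and salary to mean 0 and standard deviation 1
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# kNN classifier with 5 neighbors and the Minkowski metric (p=2 -> Euclidean)
classifier = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))

# Predict for new customers: scale with the same scaler before predicting
print(classifier.predict(sc.transform([[40, 20000]])))         # e.g. [0]
print(classifier.predict_proba(sc.transform([[42, 50000]])))   # buying probability
```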
You can also get the entire classification report to understand more about precision, recall and F1 score. So we've taken the store purchase data and created a classifier which can predict whether somebody would buy or not; that model can be used to predict whether a customer with a particular age and salary would buy. Let's try to predict whether a customer with age 40 and salary 20,000 would buy. Note that this model takes a NumPy array and returns a NumPy array: you have to create a NumPy array from the age and salary, feature scale that data, and then feed it to the classifier, because the classifier was trained on feature-scaled data. So make sure the data you feed in is also feature scaled with the same technique, which is StandardScaler in our case. The prediction is 0: a customer with age 40 and salary 20,000 would not buy, as per this model. We can check the probability of the prediction for the same data; the classifier has a predict_proba method for that. The probability is 0.2, or 20%, which is why the model said the customer would not buy. Let's try to predict for a customer with age 42 and salary 50,000: this time the model says the customer would buy. Checking the probability, it's 0.8, or 80%, so there is an 80% chance of this customer buying. Now our machine learning model is ready. It's a classification model which can predict whether a customer with a certain age and salary would buy or not.

8. Saving the Model and the Scaler: We have built a kNN classification model which takes age and salary as input parameters and predicts whether a customer with that age and salary would buy or not. Let us now understand how to save the model we have created. To recap the model training process: we read 40 records from the dataset and identified 32, that is 80%, for training. Then we used StandardScaler to scale the values so that the mean becomes 0 and the standard deviation becomes 1 for both age and salary. For many models scaling is required; otherwise the model might get influenced by values which are in a higher range, salary in our case, and you can use StandardScaler or any other scaling mechanism. Once the data is scaled, we feed it to the model in a two-dimensional NumPy array format, and we get an output which is also a NumPy array with one column. Internally, the model applies the kNN technique: it looks at the output for each record and tries to optimize so that the overall accuracy goes up. There are various ways we can save the model. For some models we can extract the formula, and in other cases we have to save the model in binary format so that we can restore it and use it to predict the output for new data; we'll see that in action shortly. If anybody wants to predict with the model, they need two things: the classifier model, and also the standard scaler. If they used some other technique to feature scale the data, the model might not give a correct result, because we used a particular standard scaler, so we export it along with the model. With the classifier model and the standard scaler, you can do the prediction in any Python environment. Let's see how we can save and export these objects to other environments. Python has a technique called pickling, using which you can store Python objects in serialized, byte-stream format.
In another Python environment, you can deserialize these objects and use them in your code. So let's see how to pickle the model and standard scaler we built in the previous lab. We import the pickle library and create a pickle file for the kNN model; if we do not want to reveal which technique we used to create the model, we can simply name it classifier.pickle. Using the pickle.dump method, we store the classifier object which we created earlier into this classifier.pickle file. Similarly, we create a pickle file for the scaler, storing the standard scaler in an sc.pickle file. Here 'wb' means the file is opened for writing in binary mode. Let's execute this code; in the File Explorer we can see that classifier.pickle and sc.pickle have been created, and you can verify the same in the operating system's file explorer. These two are binary, serialized files for our classifier and standard scaler objects. In this lab we have seen how to save the model and standard scaler in binary format using the Python pickle library. Next we'll see how to use the pickled files in another Python environment.

9. Predicting with deserialized Pickle objects: Till now we have seen how to create a model and store it in pickled format, and we have also stored the standard scaler object in binary format using the pickle library. Next we'll see how to deserialize and use these pickle objects in another Python environment; it could be on-premise or on the cloud. We'll first try to use the pickle files in the local environment. Let's create a new Python file; we'll call it use_model.py. We first need to import the pickle library, and we also need to import NumPy. Next we deserialize the classifier and store it in a local object in the new program, using the pickle.load method with the pickle file opened in read-binary mode. Similarly, we read the scaler into a new object; the saved scaler is loaded into a local scaler object. Next we use the local classifier and the local scaler to predict whether a customer with age 40 and salary 20,000 would buy or not. Before running it, let's clear all the old variables; you can remove the old variables from the Variable Explorer and also clear the console by right-clicking and choosing clear console. Now let's run the program. We can see the new prediction, which is 0, matching the previous prediction. Let's check the probability: it is again 0.2 for the customer with age 40 and salary 20,000. We deserialized the classifier object and the scaler object and used them to predict whether a customer would buy or not in a new Python program. This program doesn't know anything about how the model was built or trained; it picked up the model and scaler from the pickle files and used them to predict. We can also try to predict for age 42 and salary 50,000: earlier we got an 80% probability, and we see the same output here, 0.8, with the prediction 1, meaning the customer would buy. So you've seen how to use pickle files in another Python program which knows nothing about how the model was built and trained. We tried this in a local environment; next we'll try it in a cloud environment.

10. Using the Model in Google Colab environment: Next we'll take the pickled files to the Google Colab environment and try to predict there. Google Colab is like a Jupyter environment with some visual customization.
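A minimal sketch of saving and reloading the classifier and scaler with pickle, assuming the classifier and sc objects from the training sketch above and the file names used in the lesson:

```python
import pickle
import numpy as np

# Save the trained classifier and scaler in serialized (binary) form
with open('classifier.pickle', 'wb') as f:
    pickle.dump(classifier, f)
with open('sc.pickle', 'wb') as f:
    pickle.dump(sc, f)

# In another Python program (or in Colab): deserialize and predict, no retraining
with open('classifier.pickle', 'rb') as f:
    local_classifier = pickle.load(f)
with open('sc.pickle', 'rb') as f:
    local_scaler = pickle.load(f)

new_customer = np.array([[40, 20000]])
print(local_classifier.predict(local_scaler.transform(new_customer)))        # e.g. [0]
print(local_classifier.predict_proba(local_scaler.transform(new_customer)))  # e.g. 0.2
```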
It has a lot of pre-built libraries for machine learning and deep learning. You can just log in using your Gmail or Google ID, create a new notebook and start coding. Let's create a new notebook; I've already logged in. We'll give this file a name. We can go to Tools, Settings and change the theme to dark or adaptive; let's set it to dark. Colab is like a Jupyter notebook environment: you can simply type code in a cell and hit Shift+Enter to see the output, or click the run icon, and you can right-click to delete a cell. In Colab you'll find most of the machine learning and deep learning libraries pre-installed; if something is not installed, you can pip install it. Colab is like a Linux environment: you can run !ls and see all the files that are present. Currently there is nothing except a sample_data folder in your Colab environment, and the notebooks get saved to Google Drive. We'll transfer the two pickle files to the Colab environment. We'll go to our GitHub repository, where we've already uploaded the pickle files for this course. Select classifier.pickle, right-click Download and copy the link address, go to the Colab environment and do a Linux wget with that path, making sure the file path is the raw path. Get the file, then do ls to see whether it has been copied. Next, get the standard scaler: click on sc.pickle, right-click Download, copy the link address, and wget the standard scaler pickle file. Now both pickle files are available in the Colab environment. We've brought the models into Colab; in this notebook we don't know how the models were built or trained, but we can use them to do the same prediction as before. Create a classifier object, which we'll call classifier_colab, create a scaler object, and use them to predict. Simply type the variable name and hit Enter to see the output. The prediction is 0, the same as what we got earlier for a customer with age 40 and salary 20,000. We'll get the probability as well; you can print it in the same cell, since the last line's value gets printed. We see a 20% probability of somebody with age 40 and salary 20,000 buying the product. We'll do the same for age 42 and salary 50,000: the prediction is 1. The probability showed 0.6 at first because we did not put the right age; running it again, we get 0.8. This is how you can train models in one environment, take them to a completely new environment and run them there. If you are giving the model to another team or a third party, they do not need to know how you built or trained it; all they know is that it's a classifier, it takes values in a certain format and gives an output.

11. Flask REST API Hello World: Next we'll understand how to expose a machine learning model over a REST API. REST stands for Representational State Transfer; it's a popular way of exchanging data in the real world. You can build an application using Java, Python, Scala, .NET or any other programming language, and if you want to make it accessible to others, you can expose it over a REST API. Any client that wants to access your application sends a request over the HTTP protocol using REST and gets a response back, and the data is typically exchanged in XML or JSON format.
Using the Flask framework, you can easily build a REST API for a Python application. Let's first look at a simple hello world application; then we'll dive into exposing our machine learning model over a REST API. In Spyder, create a new Python file; we'll call it flask_hello_world. To build a Flask REST API, import Flask and the associated request object from the flask library. We'll declare an endpoint /model, and this application will receive POST requests. POST is one of the most common HTTP methods; using POST, a client can send data to an application and receive a response. Let's write a hello world function. In this example we'll both send and receive the data in JSON format: whatever data we receive in the request as JSON, we store in request_data. We'll pass the model name in the request, retrieve it and display it to the user. Anybody can post a model name, and invoking this /model endpoint displays a simple string, "you are requesting for a ..." where, with Python string interpolation, we fill in that model name. Now let's add a main method and specify the port number so that when the app is started it runs on that particular port. Let's launch the application in the local environment; anybody who wants to use it can invoke it with the /model URL. Go to the command prompt (type cmd and hit Enter) and start the hello world program. The app is now started: we have created a simple REST API running at port 8000. Let's now see how to push data to this app and receive a response. We'll create a new Python file called rest_client.py. Since we will be sending the data in JSON format, let's import json first. We also need to import the requests library; requests is an HTTP library, and you can hover over it in Spyder to read more about it. Using requests, you can send HTTP requests. Now let's have a variable for the URL: for the server name we can use localhost, or the address displayed in the console, 127.0.0.1:8000, which points to the local host. We'll have a very simple request payload in JSON format with one key and one value, passing KNN as the model name. Now we send a POST request, passing the URL and the data in JSON format, and from the response object we extract the text and print it. Running it, we see the output "you are requesting for a KNN model", which came back over HTTP.

12. Creating a REST API for the Model: Next we'll create a REST API for the machine learning model so that anybody can invoke it and do a prediction. Let's create a new Python file; we'll call it classifier_rest_service.py. We copy the code from the hello world Flask application, import pickle and numpy, and load the pickle files. We'll use the loaded classifier to predict for any age and salary: we retrieve the age and the salary from the request, pass them to the classifier to predict, and return whatever prediction we get as "the prediction is ...", filling in the prediction variable at runtime. Now let's run this application with python classifier_rest_service.py; it is running at port 8000. Let's create the machine learning client; we'll call it ml_rest_client and copy the code from the earlier client.
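As a reference for lessons 11 and 12, here is a minimal sketch of the model-serving service and a matching client. The /model endpoint, the age and salary field names and port 8000 follow the transcript as I read it; the hello world version is the same skeleton, returning only the echoed model name:

```python
# classifier_rest_service.py -- a sketch, not the instructor's exact code
import pickle
import numpy as np
from flask import Flask, request

app = Flask(__name__)

# Load the pickled model and scaler once at startup
with open('classifier.pickle', 'rb') as f:
    classifier = pickle.load(f)
with open('sc.pickle', 'rb') as f:
    sc = pickle.load(f)

@app.route('/model', methods=['POST'])
def predict():
    request_data = request.get_json()
    age = request_data['age']
    salary = request_data['salary']
    scaled = sc.transform(np.array([[age, salary]]))
    prediction = classifier.predict(scaled)[0]
    return f"The prediction is {prediction}"

if __name__ == '__main__':
    app.run(port=8000)
```

And a client that posts JSON and prints the response text:

```python
# ml_rest_client.py -- sketch of the requests-based client
import requests

url = 'http://127.0.0.1:8000/model'
response = requests.post(url, json={'age': 40, 'salary': 20000})
print(response.text)
```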
Instead of passing a model name like KNN, we now pass two parameters: age, a numeric value, say 40, and salary, 20,000. With these two variables the service calls the classifier's predict method to get the prediction, which is going to be 0 or 1, and that prediction goes back to the client. Now let's run it; we'll run it at a different port. Let's clear the console and add print statements for age and salary so that we can see what values are being passed. Let's run it and check that everything is fine; it compiled fine, so let's run it from the command prompt. We go to the ML client and call it with age 40 and salary 20,000: the prediction is 0. If we call it with age 42 and salary 50,000, the prediction is 1. Instead of the final prediction, we can also return the probability over the REST API: we see a prediction probability of 0.8, and if we change the input back to 40 and 20,000, we get 0.2. We have seen how to create a REST API through which other clients can access the machine learning model and get predictions. These clients might be written in Python, Java or any other language; they send data over HTTP and receive a response over HTTP, and when you make a REST call you don't need to worry about how the application behind it is written. This is how you can expose your Python machine learning model to other applications, whether or not they are written in Python.

13. Signing up for a Google Cloud free trial: Let's create a free Google Cloud, or GCP, account. Search for Google Cloud Platform, click on cloud.google.com and go to Console. You will need to sign in using your Gmail ID. You can see the option to try for free; as of November 2020, the free trial period is three months, and you get $300 of free credit when you sign up, which is a big amount for trying out various things on Google Cloud Platform for big data, machine learning and other services. You'll have to enter your credit card and other details to sign up; Google will authenticate your card and charge up to $1, which is refunded within a day or two. You will not be automatically charged after the free trial period; Google will send you a message and ask you to renew beyond it. This is the main Google Cloud Platform homepage, and you can go to various links; one of the important ones is the Billing link, where you can see how much free credit is available. From here you can search for different services and try out different things on the GCP platform.

14. Hosting the Machine Learning REST API on the Cloud: We have seen how to create a REST API for our machine learning model. Next we will deploy this model on Google Cloud and create a REST API so that anybody who has access to the IP address of the VM can access the model. You can try the same on AWS, Azure or any other cloud environment; GCP, or Google Cloud, provides free credit using which you can explore many of its services. For this lab we'll primarily be using a GCP VM instance. Search for VM instance, or virtual machine, and click Create. We'll keep the default name and select a standard machine type; you can leave the other properties at their defaults, and we'll allow HTTP and HTTPS access. Once the instance is created, let's first ensure the necessary ports are open, because we will be running a Flask application and might use different ports.
We'll open all the ports. Go to Firewall rules: when we allowed HTTP and HTTPS, port 80 got opened by default, so we'll just edit that rule to open all ports, or you can add a separate firewall rule to open all or specific ports. Now all ports for this particular VM instance are open, so whatever port we use for our Flask application will be accessible to the outside world. Go back and find the SSH link to connect to the instance. This VM has Python 3.7, and Python 3.8 also works; with an older 2.x version of Python we might have faced some issues, but these versions are fine. We need to install the Python libraries before we get started with building the REST API, so let's install pip with sudo apt install python3-pip, install Flask with pip3 install flask, and install numpy. We also need scikit-learn to run the machine learning model. Next we get the pickle files from the GitHub repository: install wget, then wget classifier.pickle and the scaler pickle. Now we copy the classifier REST service Python code from the local environment and make two small changes: we bind the app to 0.0.0.0 instead of localhost so that it is reachable from outside the GCP VM instance, and we change the response message to show that the prediction is coming from the GCP API. Create a Python file with any name, paste the code and save it. Now you can start the classifier with python3 classifier_rest_service.py; it starts at port 8005. Grab the public IP address of the Google Cloud VM, copy it, go to the REST client, put in that IP address and the port we used, 8005, and run it. We can see the prediction coming from the Google Cloud GCP API, and if we try 40 and 20,000, we get 0.2. This is how you can host a machine learning REST API on a cloud instance and make it available to the world.

15. Deleting the VM instance: Always make it a practice to delete the virtual machine instance when you are not using it; that way you won't exhaust your free trial balance. Whether it is a virtual machine or any other resource in the Google Cloud environment, when you are not using it, you should terminate it, and you can always recreate those resources whenever you need them. To delete an instance, simply click delete; it takes a few seconds to get deleted.

16. Serverless Machine Learning API using Cloud Functions: Next we'll create a serverless API for our machine learning model. Earlier we saw how to create a virtual machine and deploy the model. If you own the hardware or infrastructure that runs the machine learning model, you have to pay the maintenance cost; even if nobody is using your model, you still pay for the VM. With a serverless cloud model, you do not need to worry about the underlying infrastructure, its maintenance or its cost. You focus on writing your business logic in a function or method, and you are charged based on the number of times that function gets invoked; if nobody is using it, you are not charged. That is the serverless model, which is the next big thing in cloud computing. Let's see how to create a serverless function in the Google Cloud environment and run the machine learning model there. You can also try the same thing on Azure using Azure Functions, or on AWS using AWS Lambda.
The first thing we need is to store the model somewhere on Google Cloud. Google Cloud provides buckets, in which you can store objects; it's like an S3 bucket on AWS or Blob Storage on Azure. Creating a bucket is really easy: you give the bucket a name, which has to be globally unique, and click Continue. You can select a single region; you have to choose a location, and you have to make sure the Google Cloud Function runs in the same location. The storage class can be Standard, fine-grained access is fine, and you can have a Google-managed key for your bucket; we are not going into the details of storage, so just leave everything as default and hit Create. Now the bucket has been created; we can see it, and under it we can create folders or directly upload files. Let's create a folder called models and upload classifier.pickle and sc.pickle into it; we can see that both files have been uploaded. You can go to the bucket list and see all your buckets. I have a few buckets that were already created; this is the new one I just created, and I also have an existing bucket with a models folder containing the same files, classifier.pickle and sc.pickle. Next, search for Cloud Functions and select it. I've already created some functions, but to create a new one, click Create Function. You give the function a name and select the region; it shows the default region, which matches the bucket. Set the trigger type to HTTP so that you get an HTTP endpoint to access this function from anywhere, and allow unauthenticated invocations, because we are just testing and don't need authentication or access management for this lab. We'll leave everything else at the defaults: 256 MB of memory is fine, though you can allocate more, and you can also specify the timeout duration. Click Next, and here you select your runtime; we'll use Python 3.7. Then we can write our code here; you can see a sample hello world program. If you're using Google Cloud Functions for the first time, you might be prompted to enable the APIs; if so, just do that. You can deploy and test the sample hello world program, and in requirements.txt you specify which packages are required. I've already written a function for the machine learning code; let me open it, which takes me to the same screen you saw when creating a new function. This function was also created with the HTTP trigger type, so it has a REST API and you get an HTTP URL endpoint to access it. Click Next. This is the function I've written; let me quickly go through it. The function receives the request the way our Flask app did earlier; Cloud Functions is also built on Flask, and any Python function exposed through an HTTP endpoint receives a request object. You have to import pickle, and you have to import storage, because we need to access the pickle files from the bucket; for that we need the Google Cloud storage SDK. You also import numpy as np. You get the request JSON, which is similar to what we did earlier. The additional code you have to write is to get an instance of the storage client and pull the pickle files from the bucket, as sketched below.
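A minimal sketch of the Cloud Function being described here and in the next paragraph. The bucket name is a placeholder, the models/ folder matches the lesson, and the entry-point name predict is an assumption:

```python
# main.py -- HTTP-triggered Cloud Function serving the pickled model (sketch)
import pickle
import numpy as np
from google.cloud import storage

def predict(request):
    request_json = request.get_json()

    # Download the pickled model and scaler from the bucket into /tmp
    storage_client = storage.Client()
    bucket = storage_client.get_bucket('your-bucket-name')   # placeholder
    bucket.blob('models/classifier.pickle').download_to_filename('/tmp/classifier.pickle')
    bucket.blob('models/sc.pickle').download_to_filename('/tmp/sc.pickle')

    with open('/tmp/classifier.pickle', 'rb') as f:
        classifier = pickle.load(f)
    with open('/tmp/sc.pickle', 'rb') as f:
        sc = pickle.load(f)

    age = request_json['age']
    salary = request_json['salary']
    proba = classifier.predict_proba(sc.transform(np.array([[age, salary]])))[0][1]
    return f'The prediction is {proba}'

# requirements.txt (one entry per line): requests, scikit-learn,
# google-cloud-storage, numpy
```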
Get an instance of the bucket by doing storage_client.get_bucket and specifying the bucket name. Then you load the classifier and scaler, specifying the path here: it has to be the folder name plus the pickle file names. You need to download the pickled files, and the Cloud Function gives you a /tmp directory under which you can download them. Then you load the pickle files and the scaler the way you did earlier. After that, the code is pretty much similar to what we had done before: you read the age and salary, and using those variables you predict and return that prediction. There's a requirements.txt file in which you have to include the required packages; you specify the package name and the version. We need requests, we need scikit-learn, we need google-cloud-storage, and we need numpy. That's all you have to do; simply click Deploy and it will deploy the function. You can come back, make changes, and deploy again; it takes a few seconds for the function to get deployed. Once deployed, you can click on the function name and get into the details. You can see various metrics about the function, such as how many times it is getting invoked in the last one hour, six hours, or twelve hours, and you get charged per request. But since we are using the Google Cloud free credit, that amount will get deducted from the credit available in our account. You can see the source here, and you'll find the HTTP endpoint under the Trigger tab. You can go to the Testing tab and test this function; you do not need to write any external program to test it. We'll send the age and salary, which is what our function expects, and ask it to predict whether a customer will buy or not. The request has to be in valid JSON format. Click on Test the function to test it, and we can see the prediction here, 0.2; it also displays the log. Now let's change the age to 42 and the salary to 50 thousand, and we should get a prediction of 0.8, the probability of the customer buying. We can see the prediction is 0.8. This is how you can deploy a machine learning model in a serverless environment. It can scale up to millions or billions of users; you don't have to worry about the underlying infrastructure, because the Cloud Function takes care of it. All you need to worry about is the business logic, the code that you are writing in the serverless environment. We also added some print statements that you can see in the log; this will help in debugging the function whenever required.

17. Creating a REST API on Google Colab: In this lab we will see how to create a REST API in the Google Colab environment and expose it on the internet. Let's import numpy, and we'll load the pickle files. For that, let's first upload the files to the Google Colab environment; you can also directly upload from here by clicking on the file icon, selecting the directory, and clicking Upload. Let's check that the files got uploaded. We now have both pickles in the local environment of this Colab notebook. We'll first create a classifier object, run it, and then load the scaler into a scaler object. After that, we need to install flask-ngrok. Using flask-ngrok, we can make our Flask app accessible over the internet. It says it's already installed in the Google Colab environment, so let's continue. After installing flask-ngrok, import run_with_ngrok from flask_ngrok; a sketch of the complete app is shown below.
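This is a minimal sketch of the Colab Flask app exposed with flask-ngrok. The pickle file names and the /predict endpoint follow the lesson, but the exact details are assumptions.

```python
# Sketch: serve the pickled classifier from Colab and expose it with flask-ngrok.
import pickle
import numpy as np
from flask import Flask, request, jsonify
from flask_ngrok import run_with_ngrok

classifier = pickle.load(open('classifier.pickle', 'rb'))
scaler = pickle.load(open('sc.pickle', 'rb'))

app = Flask(__name__)
run_with_ngrok(app)   # ngrok prints a public URL when the app starts

@app.route('/predict', methods=['POST'])
def predict():
    data = request.get_json()
    scaled = scaler.transform(np.array([[data['age'], data['salary']]]))
    prob = classifier.predict_proba(scaled)[0][1]
    return jsonify({'prediction': float(prob)})

app.run()             # in Colab, run without specifying host or port
```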
After that, the steps to create the Flask REST API are pretty much the same as what we did earlier. Import Flask from the flask library, and also import request. Next we declare an app, and then we say run this app with ngrok; with this, the app will be securely exposed over the internet. Similar to what we did earlier, we create a method which reads the input parameters from the request and uses the scaler and classifier objects to predict. Then we give the endpoint as /predict and create a method mapped to this particular endpoint. Let's run this. After that, we do app.run, and as soon as we do that, flask-ngrok creates a public URL using which you can access the Flask app. So this is the URL that we need to hit to access the model. Let's copy it, go to Spyder, use that URL, and append /predict; that is the endpoint at which we are reading the request. Then let's run this code. There is a typo in it; let's fix that and run it again. Let's check the URL. It says np is not defined; we probably did not run that cell, so let's run it again. We can also restart the runtime and run the notebook from the beginning. It has started running and has given us a new public URL this time. Let's copy it, invoke the app, and get the prediction. We got 0.8 for age 42 and salary 50 thousand, and we can change the input parameters and get a different output. Colab is a great tool to practice machine learning and deep learning, and if you want to create a prototype and expose a machine learning model without creating a virtual machine, you can do that directly in the Colab environment.

18. Postman REST client: We'll now install the Postman tool, using which we can send requests easily without having to create a Python app. Search for the Postman tool installation and download it for your operating system. Once downloaded, run the EXE to install Postman; it will automatically get loaded. You can sign up using your Gmail ID. Let's open Postman again. Now let's create a new request. We'll create a request of type POST, since the REST APIs that we have created for our ML models expect POST requests. Here we'll specify the endpoint URL, and within the body we need to specify the data. We'll put the JSON string under the Body tab, select JSON from the dropdown, and hit Send. We got the prediction 0.2. Make sure the JSON is well formatted; using JSON Editor Online, you can validate whether your JSON string is valid. Simply paste it in the code view, then switch to the tree view, and you will know whether your JSON is valid or not. Let's now change these values, and we get a different prediction. This is how you can easily test your machine learning REST APIs, whether they are running in Colab, on a VM, or on a Google Cloud Function, using the Postman tool. You don't need to create a separate Python app just to test your REST APIs.

19. Understanding Deep Learning Neural Network: Deep learning is a subfield of machine learning, and neural networks are the most common category of deep learning algorithms. Let's understand the core concepts of a neural network. In traditional machine learning, we take the data, feed it to an algorithm, and get the output. For example, we could have the age, salary, and other information about a customer, and we feed it to a classifier and predict whether the customer is going to buy or not.
Let's understand how the same thing would work in a neural network, or deep learning. Let's start with a single neuron. A neuron is an individual learning unit: it reads the input parameters, applies an activation function, and produces an output. Typically in deep learning you create many layers of neurons and build a neural network. Each neuron reads a weighted sum of the inputs from all the neurons in the previous layer, applies an activation function, and passes the output to all the neurons in the subsequent layer. Let's understand activation functions. A neuron receives the weighted sum of all its inputs and applies an activation function to get the output, which is passed to the neurons in the next layer. Typically, in the hidden layers we use the ReLU activation function, that is, the rectified linear unit. A ReLU plot looks something like this, the one highlighted in red: up to a point, the neuron ignores the input. For example, a customer might not be buying up to age 30, so the neuron gives no output up to age 30; the moment age crosses 30, the chance of buying is higher, so with age the output goes up linearly. That is the ReLU activation typically applied in the hidden layers of a neural network. The softmax activation function is applied in the output layer of a classification model. Softmax gives the probability of each output class. The layer might read age multiplied by a weight and salary multiplied by a weight, but the output is a probability once you apply the softmax activation function, and whichever class has the higher probability is the predicted class. It could be whether the customer is going to buy or not, or whether an image is a cat or a dog, and it can be applied to more than two classes as well. Generally, softmax is used together with cross-entropy loss calculation. Cross-entropy measures how different two distributions are, the predicted distribution and the actual distribution. Low cross-entropy means the predicted and actual values are in sync, and the classifier tries to minimize the difference between the predicted value and the actual value by applying the softmax activation function and cross-entropy loss calculation. In a neural network, input is received from the input layer, the weighted sum is calculated, an activation function is applied in all the neurons, and the output is passed to the next layer. Again, in the next layer the weighted sum is calculated, an activation function is applied, the output is passed on, and so on, until finally we get the output. It could be a probability for classification models, or the actual value in a regression model. This process repeats several times until the loss is minimized. When the data moves end to end from the input to the output layer, it's called one epoch. We can define multiple epochs until we get the desired accuracy. For a classification model, we use the cross-entropy loss minimization technique to adjust the weights. This arrow denotes the feedback loop, or backpropagation, in the neural network: the loss is passed back towards the input layer so that the weights can be adjusted to minimize the loss. This is how a deep learning artificial neural network learns, by adjusting the weights. With more epochs, the accuracy goes up and the loss gets minimized, because with each epoch the neurons learn something new about the data and, based on that, they keep adjusting the weights.
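To make the softmax and cross-entropy ideas concrete, here is a small illustrative calculation. It is not from the course, and the numbers are made up.

```python
# Softmax turns raw outputs into probabilities; cross-entropy measures how far
# those probabilities are from the actual (one-hot) label.
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))          # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([0.5, 2.0])          # made-up raw outputs of the final layer
probs = softmax(logits)                # roughly [0.18, 0.82], so class 1 ("will buy") wins

actual = np.array([0.0, 1.0])          # one-hot label: the customer actually bought
loss = -np.sum(actual * np.log(probs)) # cross-entropy, low because the prediction matches
print(probs, loss)
```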
There are several deep learning libraries available to construct neural networks; TensorFlow, Keras, and PyTorch are some of the popular libraries at the time of this recording.

20. Building and deploying PyTorch models: Let's solve the customer behavior prediction use case with PyTorch. We'll head to Google Colab and create a new notebook. It's really easy to get started with PyTorch on Google Colab because all the libraries are pre-installed; you can just create a notebook and start coding. Let's give this notebook a name, create it, and save it. We'll import the standard libraries numpy and pandas, and we also need to import the required PyTorch libraries. PyTorch is based on the torch library, so you have to import torch, torch.nn for the neural network, and torch.nn.functional. If you are using some other environment, make sure PyTorch is installed in that environment. Next, we load the customer purchase data from our GitHub repository. This time we have a slightly higher number of records; let's check it out. We have 1,550 records, because when you are building a neural network it's better to work with a slightly higher volume of data. We can look at some sample records from the dataset; it's the same as before: age, salary, and whether the customer purchased or not. Separating the dataset into training and test sets and feature scaling follow the same steps as earlier: 80% for training, 20% for testing. Next, we do feature scaling; feature scaling is a must for deep learning libraries. Next, we convert the arrays for training and test data to the PyTorch tensor format. The tensor is the main data type in PyTorch; tensors are similar to arrays. Let's check out some sample values: you can see it looks like a two-dimensional array, but it's in tensor format. We also convert the dependent variable in the same manner. We can check the shape of the tensors: we have about 1,243 records for training and 311 records for testing. Next we'll construct the neural network using PyTorch. Let's declare three variables. The input size is two, that is, the two variables we have in the input layer, age and salary. The output size is two, because we are predicting two classes, whether the customer buys or not, yes or no. Then we have the hidden sizes; in each of the hidden layers we'll have ten neurons, and you can experiment with a different number of neurons in each layer. So we'll build a neural network with ten neurons in each of two hidden layers. Next, we create a class for the neural network, and the syntax looks something like this. Here you specify how many layers you need; fc stands for fully connected layer. In a fully connected layer, each neuron is connected to all the neurons in the subsequent layer. We need three fully connected layers: one from input to hidden, another from hidden to hidden, and the third from hidden to the output layer, since we have just two hidden layers. We use the ReLU activation function in the hidden layers and softmax in the output layer. This is how you create a neural network class in PyTorch. Once your class is defined, you instantiate the model. After that, you define the learning rate and the loss function. Using this learning rate, the loss function, and the Adam optimizer, the neural network will learn from the data, adjust the weights, and give the prediction. And we'll define 100 epochs; a sketch of this setup is shown below.
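This is a sketch of the network class and training setup just described. The class and variable names are assumptions, and because the lesson's outputs are negative values, the sketch uses log-softmax paired with NLLLoss, which is equivalent to softmax plus cross-entropy.

```python
# Sketch of a small fully connected PyTorch network for the purchase data.
import torch
import torch.nn as nn
import torch.nn.functional as F

input_size, hidden_size, output_size = 2, 10, 2

class CustomerNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)    # input -> hidden
        self.fc2 = nn.Linear(hidden_size, hidden_size)   # hidden -> hidden
        self.fc3 = nn.Linear(hidden_size, output_size)   # hidden -> output

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return F.log_softmax(self.fc3(x), dim=1)         # log-probabilities for the two classes

model = CustomerNet()
criterion = nn.NLLLoss()                                 # pairs with log_softmax
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
epochs = 100

# Placeholder scaled data keeps the sketch self-contained; in the lesson these
# tensors come from the scaled store-purchase CSV.
X_train_t = torch.rand(16, 2)
y_train_t = torch.randint(0, 2, (16,))

for epoch in range(epochs):
    optimizer.zero_grad()
    loss = criterion(model(X_train_t), y_train_t)
    loss.backward()
    optimizer.step()
print(loss.item())
```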
That means the neural network will learn 100 times from the same data and keep adjusting the weights to get the final output. Next, let's start training the neural network for 100 epochs, and we'll print the loss after each epoch. You can see that the loss is getting minimized after each epoch, and after 100 epochs, from 0 to 99, we have the model ready for prediction. With model.parameters you can see the various parameters of the model. Now this model can predict for a new set of data, but you have to make sure the data is fed in tensor format. Let's try to predict for age 40 and salary 20 thousand the way we have been doing, but this time we'll be converting 40 and 20 thousand to tensor format. Before that, we have to apply feature scaling, because the model was trained on scaled data. Using torch.from_numpy, we can convert a NumPy array to a tensor. You feed that data to the model and get the output. The output has two columns: the first one is for class 0 and the second one is for class 1. If the first value is higher, the customer is not going to buy, and if the second value is higher, the customer is going to buy. You can also take the maximum of the two and see which class the output is. So it says 0, the customer is not going to buy. Similarly, let's predict for age 42 and salary 50 thousand. We can see that the second value, minus 0.61, is higher than minus 0.707, so this customer is going to buy. Now that the model is ready, there are many ways you can save and export it. One of the ways is to save the model using torch.save; the model gets saved to a file, and in Colab we can do a listing to see the file created in the same directory. Now you can export this model, simply do torch.load, and restore it. We're trying it in the same notebook, but you could take it to another notebook or another environment and this would work. Let's predict from the restored model, and we get the same prediction. But this is not the most preferred way of saving PyTorch models: PyTorch recommends saving the model's state dictionary instead of the entire model, and you can then take the dictionary and use it in another environment. Using the state_dict method, you can get the dictionary of the model. It has various details, like the weights of the neural network. To use the neural network in another environment, all you need is the weights, and those are available in the PyTorch model dictionary. You can save the dictionary using torch.save, passing the model's state dictionary and specifying a file name. Let's export this dictionary to another environment. You can also predict in the same notebook using the dictionary: you define a new predictor with the same class, load the dictionary into it, and predict the way you predicted earlier. Now this state dictionary file can be exported to another PyTorch environment and used there. We zipped it using the Linux zip command; let's download it from here. You can download a file from Google Colab using the files class; make sure you are using the Chrome browser, otherwise the download might not work. So this is the downloaded model file. Let's check it out: it is in binary format, and it holds all the details around the weights and various other parameters of the PyTorch model. We'll upload this to our GitHub repository. Let me give it a new name, because there is already a file with the same name. Let's create a new notebook; we'll call it "use pytorch dictionary".
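Before moving to the new notebook, here is a condensed sketch of the save-and-restore flow just described. The file name is an assumption, CustomerNet and model refer to the sketch above, and in the lesson the input is also scaled with the saved StandardScaler pickle before prediction.

```python
# Save and restore the PyTorch model via its state dictionary (the recommended way).
import numpy as np
import torch

torch.save(model.state_dict(), 'customer_state_dict.pt')     # weights only

new_predictor = CustomerNet()                                 # the same class must be defined
new_predictor.load_state_dict(torch.load('customer_state_dict.pt'))
new_predictor.eval()

# In the lesson this input would first be transformed with the saved scaler.
sample = np.array([[42.0, 50000.0]], dtype=np.float32)
with torch.no_grad():
    print(new_predictor(torch.from_numpy(sample)))            # the higher of the two values wins
```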
So currently there is nothing in this directory. We'll get the model from the GitHub repository and unzip it. Now this file is the PyTorch dictionary that we created earlier. We also need the pickle file to scale the data, so let's import pickle, create a new scaler, and import the libraries. Now we need to create a class for the neural network. The input size, output size, and hidden size of this class should match the original class using which the model was created, and the number of fully connected layers should also match. We'll define a new variable, new_predictor, and load the dictionary into it. After that, we can predict the way we were predicting earlier. We need to import numpy also. We can see the predicted output here.

21. Creating a REST API for the PyTorch Model: Next we'll understand how to create a REST API from the PyTorch model. Go to Google Cloud and create a VM the way we have done earlier; make sure HTTP and HTTPS access is allowed. You can add additional disk space if needed; 10 GB should be sufficient for this demo. Create the instance and make sure all the ports are open, the way we showed earlier. Let's log into the console. We need to install the standard libraries; for this demo we need PyTorch and pandas. Install the libraries one by one, the way you have done earlier: Flask is required to create the REST API, then numpy and scikit-learn, and then we'll install wget so that we can pull files from our GitHub repository. We need the pickle file, so let's copy it from GitHub, and let's also copy the PyTorch model that we trained earlier. After that, we can run the Python file which we ran in the Colab environment: just download the notebook as a .py file, run it, and you will get the output. This is the output for age 42 and salary 40 thousand. Next we'll create a REST API using Flask. The code is pretty much similar to what we have done earlier. Let me first run it and then show you the code; in the meantime, try to guess what is required to create this Flask REST API. As you might have guessed, you need to load the model, you need to create an HTTP endpoint, and then you store the output in a variable and return it. To access the model, copy the public IP from the Google Cloud instance; the app is running at port 8005. Now let's run it with age 40 and a different salary value. We got the response from the Google Cloud API, and we can change the values and get different outputs. Now this can be accessed from any application, be it Java, .NET, or Python, because it's a REST API and the request and response are sent over the HTTP protocol.

22. Deploying TensorFlow and Keras Models with Tensoflow Serving: TensorFlow and Keras are popular libraries for machine learning and deep learning. TensorFlow provides something called TensorFlow Serving, using which you can save and export your models; TensorFlow Serving is a high-performance model serving system. Let's look at an example to understand this. We have logged into the Google Colab environment; let's create a new notebook. TensorFlow is pre-installed in the Google Colab environment; if you are using any other environment, you have to make sure TensorFlow and TensorFlow Serving are installed. Let's give this file a name. We'll first import the standard libraries, numpy and pandas. We also need the TensorFlow library, so we'll import tensorflow as tf. TensorFlow 2.x has a tight integration with the Keras library, and you do not need to import Keras separately.
Keras is the preferred way to build TensorFlow models with TensorFlow 2.0 and higher versions. Next we'll import the store purchase data, and this time we have a larger file with around 1,500 records; because we'll be using a neural network to build our model, we need a slightly higher volume of data to get higher accuracy. This dataset has similar data as before, about 1,500 records. Let's see some sample records: we have age, salary, and purchased, as before. We'll separate out the independent and dependent variables and do a train test split using the scikit-learn library. Next, we'll feature scale using the standard scaler, the way we've done earlier. Now we'll build a neural network using the TensorFlow Keras models Sequential class. We'll have two dense layers of ten neurons each, and the output layer will have two values, that is, whether a customer purchased or not. In terms of activation functions, we'll use ReLU for the hidden layers and softmax for the output layer. Now our model is ready to be trained. We use the Adam optimizer and the cross-entropy technique to measure the loss. Next we'll train the model using the training data, and we'll specify 50 epochs. This is how, with very few lines of code, you can build a neural network with TensorFlow Keras. Model training is over, and we can see that we got about ninety-five percent accuracy. We can pass the test data and extract the loss and accuracy from the model.evaluate method; the accuracy of this model is quite high, about ninety-five percent. model.summary gives us various stats about the model, such as how many layers and parameters it has. The TensorFlow Keras model has a predict method using which we can predict for a new set of values. For a customer with age 42 and salary 50 thousand, the probability of buying as per this model is 0.6, and for a customer with a lower age and a salary of 40 thousand the probability is very low, so that customer will not buy, while the other one probably would. Let's now see how to save this model for TensorFlow Serving. You simply invoke the model.save method, give a model name and a version number, and save the model. It gets saved to the project folder; in this case, in the Colab environment, we can do a listing and see the directory. Let's go inside the directory: we can see a directory for version 1, and inside that saved_model.pb, which is the model saved in protobuf file format. In the variables directory TensorFlow stores all the checkpoints, and in assets all the graphs. Now, to use this model in another environment, we have to export these model files, load them, and then use the model to predict. Let's see how that can be done in this notebook; the same technique would work in any other environment. From tensorflow.keras.models, import load_model. Now we'll declare a new model variable, call load_model, and pass the directory name, and the model will get loaded into that variable. Once that is done, we can predict the way we predicted earlier using the new variable. To export the model from the Colab environment, you can zip it and download it; Colab is a Linux environment, so use zip -r, that is recursive zipping, to create a zip file, and use the files library to download it. Make sure you are using the Chrome browser, or the download might not work. The file got downloaded; let's open it and see what is inside. We can see the saved_model protobuf file.
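Here is a condensed sketch of the Keras model, training, and SavedModel export described above. The layer sizes and directory name follow the lesson, but the exact code is an assumption, and very recent Keras releases prefer model.export for the SavedModel format.

```python
# Sketch of the Keras purchase model and its export for TensorFlow Serving.
import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(2,)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Placeholder scaled data; in the lesson this is the scaled store-purchase dataset.
X_train = np.random.rand(100, 2)
y_train = np.random.randint(0, 2, 100)
model.fit(X_train, y_train, epochs=50, verbose=0)

# Save in SavedModel format under a version number, as TF Serving expects.
# (On newer Keras 3 releases, model.export('customer_behavior_model/1') does this instead.)
model.save('customer_behavior_model/1')

restored = tf.keras.models.load_model('customer_behavior_model/1')
print(restored.predict(np.array([[0.5, 0.5]])))   # two class probabilities per row
```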
This saved_model.pb is the main model file which has all the details. You can take the entire model directory to another environment, load it the way we loaded it in the Colab environment, and use it.

23. Understanding Docker containers: Next we'll be creating a TensorFlow model server in a Dockerized environment, so let's understand the basic idea behind containers and Docker. Typically, in an on-premise setup we have a server which hosts multiple applications; for example, we could have a server hosting a web server, application servers, a database, and various other applications. Then there is another concept called a virtual server: using a VMware hypervisor, we can create multiple virtual machines on top of a physical server, and each virtual machine has its own operating system, which could be Windows, Linux, or any other operating system. With virtual servers we are able to create isolated environments for a set of applications, but the disadvantage is that we have to maintain a separate operating system for each virtual machine. Containers are lightweight, isolated environments in a server or virtual machine. They share a slice of the operating system from the underlying machine without needing their own operating system. We can have a physical server with one operating system, and within that we can have multiple containers; they share a slice of the underlying operating system, and within a container we have an isolated environment where we can install specific apps. Unlike virtual machines, we do not have to worry about maintaining a separate operating system for each container. Note that it's also possible to create multiple containers within a particular virtual machine. The advantage of containerization is that you can build apps within your container and easily port them to another environment. You can have separate applications or microservices running in each container, and you can keep your applications isolated from other applications running in the same physical environment. Containerization improves application security, and applications start really fast in a container because there is no dependency on other applications. So containers are lightweight software components that bundle the application, all its dependencies, and its configuration into a single image in an isolated environment, which can be ported to other servers or other cloud environments easily. Docker is a software platform for building applications based on containers. So let's dive in and create a TensorFlow model server using a Docker container.

24. Creating a REST API using TensorFlow Model Server: Let's understand how to create a REST API for the TensorFlow model that we just created. We can deploy a TensorFlow model using TensorFlow Model Server. It can read the model file that we generated in protobuf format and expose the model as a REST API or over gRPC, that is, Google Remote Procedure Call; we'll focus on the REST API in this lab. We will go to Google Cloud, create a virtual server, install Docker, and using Docker we'll install the model server. We'll deploy the model on the VM in the Dockerized environment, then create the REST API using which we'll access the model from the Google Colab environment. Let's see that in action. Go to the GCP Console, search for VM instances or Compute Engine, click on VM instances, and click Create. Let's select a VM with enough CPU and memory, and this time we'll select an Ubuntu instance.
Let's select Ubuntu 18.04, allocate enough disk space, and allow HTTP and HTTPS access. Click Create; it takes a few seconds for the VM to get created. Let's go and open all the ports as shown earlier, because we'll be creating a REST API on a particular port and that port should be accessible. Make sure all the ports are open for this VM, as shown earlier. Click on SSH. Let's do sudo apt-get update to ensure all packages are up to date. Next, we'll install Docker with the sudo apt-get install docker command, then fire this command, and after that update all the packages again. Then add the user to the docker group. At this point Docker should be installed: check the Docker version using the docker version command, then run the Docker hello-world image to ensure it is installed correctly; you should see the "Hello from Docker" message. Now, using docker pull tensorflow/serving, you can install the TensorFlow model server. After that, let's get the model that we created earlier into this environment, the customer model zip file. We'll install unzip to unzip the file. Next, unzip it, and we can see the customer model here, and inside it the protobuf file. So the model has been copied to the Google VM environment. Next, to start the TensorFlow model server, you have to specify the path of the model, which in this case is the present working directory slash the customer behavior model, as the target directory, plus the model name, and you also specify the model name for TensorFlow Serving. The default port for the REST API is 8501, so the TF model server will expose the model as a REST API at port 8501. Fire this command, and the REST API is ready. Let us now understand how to access this model from the Google Colab environment. We'll go to colab.research.google.com and create a new Python 3 notebook; we'll give it a name, use_tf_model_serving. Now we create a REST client, and the steps are similar to what we have seen earlier. We have to import json and requests. Let's import numpy, and we'll wget the sc pickle file to do the standard scaling for the input data, so we'll import pickle also. Now let's load the standard scaler pickle into a variable; before we call the TensorFlow model serving API, we have to standard scale the data. We'll copy the public IP of the Google Cloud VM instance and create a URL variable. It should be v1 because we have only version one of the model, then the model name, colon, predict; this is the endpoint format for the TF model server. Now we can send the data as a list. We are sending data for both sets, age 20 and salary 40 thousand, which is this one, and age 42 and salary 50 thousand; you can send data for one set of input parameters also. So we've created a JSON object; let's use it to post the request. These are similar steps to what you have done earlier, just that the endpoint is different this time. After that, we can print the output, and we can see output for both input sets. For the first one, age 20 and salary 40 thousand, the probability is 0.4, which is low, and for the second one, age 42 and salary 50 thousand, the probability is very high, 0.98. So this is how you can create a REST API for your TensorFlow model using TensorFlow Model Server.

25. Converting a PyTorch model to TensorFlow format using ONNX: ONNX is an open format for deep learning models. There are many deep learning libraries available these days, such as PyTorch, TensorFlow, and Caffe2, and using ONNX you can easily port models from one format to another.
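Before moving on to ONNX, here is a sketch of the client call to the TensorFlow model server from the previous lesson. The IP address and model name are placeholders; the instances/predictions layout follows TensorFlow Serving's REST API.

```python
# Sketch: call the TensorFlow Serving REST endpoint from a Colab notebook.
import json
import pickle
import numpy as np
import requests

scaler = pickle.load(open('sc.pickle', 'rb'))
instances = scaler.transform(np.array([[20, 40000], [42, 50000]])).tolist()

url = 'http://your-vm-public-ip:8501/v1/models/customer_behavior_model:predict'
response = requests.post(url, data=json.dumps({'instances': instances}))
print(response.json())   # e.g. {'predictions': [[..., ...], [..., ...]]}
```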
ONNX is supported by a community of partners such as AWS, Facebook, and many other companies. Let's understand how we can export our PyTorch model to ONNX format and then import it into TensorFlow and use it. We'll open the PyTorch notebook that we created for customer behavior prediction. This model takes age and salary as two input parameters and predicts whether a customer will buy or not. We need to specify sample input parameters to export the model to ONNX format, so let's declare a sample tensor; you can have any values here, as long as the tensor matches the input format, which in this case is two parameters, age and salary. Using torch.onnx.export, you can export the model: specify the model and the sample tensor (that is, the input format), give it a file name, and set export_params to true. After this, PyTorch has generated the model in ONNX format. Now this file can be imported into TensorFlow, Caffe2, and many other machine learning libraries. Let's see how we can use this model in TensorFlow. First we need to install onnx, then we need to install onnx-tf. After that, we import onnx and the onnx-tf backend. If you are using some other environment to load this model, you might have to install TensorFlow add-ons too; it is not required in the Colab environment. onnx-tf has a method called prepare, using which we can import an ONNX model into TensorFlow. Let's first load the exported ONNX file into a variable. Then, using prepare, we can convert the ONNX model to a TensorFlow model; you can hover over prepare and read more about it. So this is how we convert the PyTorch model to ONNX format, and then, using prepare, we create a TensorFlow model. The converted model has a run method which takes the input parameters and gives the prediction. Let's check the prediction for age 42 and salary 50 thousand: it is minus 0.52 at index one. This should match our PyTorch prediction; this customer would buy, because the index-one value is higher. So this is how you can convert a PyTorch model to TensorFlow format using ONNX.

26. Installing Visual Studio Code and Live Server: We'll be writing JavaScript code using Visual Studio Code, which is a very popular integrated development environment, or IDE, for JavaScript and website development. You can simply download it and start using it. Note that it is Visual Studio Code, not Visual Studio, that you need to download. Once you've downloaded and installed it, you can open it and start writing your code. Within Visual Studio Code you have to open a folder: go to File, Open Folder, and open any HTML file, and then you can start editing it. We have a simple index.html file with some text, and we'll open that using Visual Studio Code. You can make changes to the page and save it, and you can also configure auto save. You can make changes, open the HTML page in a browser, and see the output. Next, go to the Extensions view and add the Live Server extension; that gives us a local web server to deploy the page. Click Install to install it; it gets installed in a few seconds. Once it is installed, let's go back to the Explorer view, open index.html, and make some changes. Now right-click and select Open with Live Server. This starts a server at port 5500. We'll allow access, and with Live Server we get the same experience as with a web server. Now our site is located at 127.0.0.1, which is the localhost IP, then port 5500 and index.html.
So instead of opening the file by specifying its complete path, we now have a local web server running which displays the content of this webpage. As soon as we type something, the changes automatically get reflected in the live index.html page.

27. Loading TensorFlow.js on a web browser: Let's now see how to load TensorFlow.js in the web browser. We'll open a new folder, which is a blank folder, and add a new HTML file; we'll call it tfjs-demo.html. Type html and select the HTML:5 snippet, and put some text here. Now let's load this page with Live Server and open it in the Chrome browser; we can see the text here. Let's keep Visual Studio Code and the browser side by side, fix this typo, and we can see that the page gets refreshed. Now, to load TensorFlow.js into this webpage, we can simply add a script tag with TensorFlow.js, and that will add TensorFlow.js to the page. Search for the TensorFlow.js setup and go to the official webpage; here you'll see the option to add TensorFlow.js to your HTML file using a script tag. We'll copy this and include the script tag within the HTML head. Next, within the body, let's add another script tag, and here we'll print the version to the console. We can see that the TensorFlow.js version is getting printed to the console; we can extract the tfjs version, and it is 2.0.0. So this is how you can load TensorFlow.js on a web page.

28. Deploying TensorFlow Keras model using JavaScript and Tensorflow js: We'll now understand how to export the TensorFlow Keras customer behavior model that we created earlier and deploy it in a TensorFlow.js environment. We have opened the notebook which we created earlier to predict customer behavior. Once you've created the model, we need to export it in a format that TensorFlow.js can understand. First we need to install tensorflowjs; let's do that in the Colab environment, and it says it's already installed. Then we'll import tensorflowjs, and we'll save the Keras model in JSON format using the tfjs.converters.save_keras_model method. Now, if we do a listing, we see one model.json file and one group1-shard1of1.bin file. We need to take these two files to the TensorFlow.js environment and use them to predict. We can also view the content of the JSON file. Let's now download these two files from the Google Colab environment. We'll store the model.json and the group1-shard1of1.bin file in the same directory as the HTML file from which we'll predict using the exported model files. We first import TensorFlow.js, and then we load the model using the tf.loadLayersModel method. We have to write that within an asynchronous function: if we use an asynchronous function, the program execution will not wait for this method to complete, and whenever the method completes, the output is shown. Without an asynchronous function, the steps would execute sequentially and the program would keep waiting until the execution is complete. So let's load the model within a JavaScript async function and then predict the output. We'll use the scaled values of age and salary for prediction. Let's first try it for age 42 and salary 50 thousand; these are the values, and using them we can predict. Now let's load this page with Live Server, do Inspect Element, and go to the Console tab. We can see the prediction, which is 0.684; that means the customer would buy, which is the same as our earlier prediction. Now if we try for age 20 and salary 40 thousand, we'll get 0.008.
So that means the customer will not buy. This is how you can export a TensorFlow Keras model and use it in a webpage with the TensorFlow.js library.

29. Converting text to numeric values using bag-of-words model: All machine learning models are designed to work on numerical data. We have seen an example of how to do classification for store purchase data, which contains age and salary in numeric format. How can we apply the same technique to classify text? For example, we could have review data for a restaurant, like "service is good" or "ambience is really nice"; how do we categorize them as positive or negative reviews? If we are able to build a classification model based on this review data, then we can predict whether a new review, for example "main course was nice", is good or bad. The problem that we need to solve is how to convert this text data to numeric format. This takes us to natural language processing, or NLP, an area of computer science which deals with the interaction of computers and human languages. NLP can be used to process text or speech. One of the ways to convert text to numeric format is by using the bag-of-words model. You represent text as a bag of words, disregarding the grammar and the order in which the words occur, but keeping the multiplicity: you give higher weightage to a word if it occurs more times in a particular sentence. Let's understand bag-of-words through a simple example. We have three sentences: "service good", "nice ambience", "good food". Now let's see how we can represent them in numeric format using the bag-of-words model. Let's identify all the words appearing in the three sentences: these are service, good, nice, ambience, and food. Now let's count how many times each word occurs in each sentence. In the first sentence, service occurs once, so we capture 1; nice doesn't occur in the first sentence, so we capture 0. Similarly, you can do that for all the words in all three sentences, and then you can create a matrix of numeric values. Let's look at a slightly more complex example. We have three sentences, and these sentences have many words, as shown here. The first sentence is "service is good today", then "ambience is really nice", and the third one is "today food is good and salad is nice". We'll create a histogram of words and capture how many times each word occurs. When you convert a sentence to numeric format, you do not necessarily take all the words: you find the top words and then create a matrix out of them. There are various libraries available to pick the top 1,000 or 10,000 English words from your text and create a numeric vector. For now, let's understand how the model is created by taking these simple examples and picking the top four or five words. When you start working on an actual NLP project, you'll have libraries to help you extract the words and create numeric vectors. In this particular case, we have arranged the words by word count, so let's pick the top five words, is, good, nice, today, and service, which occur the most often, and build a numeric vector for our three sentences. As you can see here, the word "is" occurs twice in the third sentence, so its value is 2 there; in the other sentences it occurs once, so we capture 1. Similarly, the count of the number of times each word occurs in each sentence is captured here.
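The same counting can be reproduced with scikit-learn's CountVectorizer. This is an illustrative sketch, not code from the course; the three sentences follow the example above.

```python
# Bag-of-words: turn sentences into a matrix of raw word counts.
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "service is good today",
    "ambience is really nice",
    "today food is good and salad is nice",
]

vectorizer = CountVectorizer()            # max_features=N would keep only the top N words
counts = vectorizer.fit_transform(sentences)

print(vectorizer.get_feature_names_out()) # the vocabulary (columns of the matrix)
print(counts.toarray())                   # counts per sentence, e.g. "is" appears twice in row 3
```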
The limitation of the bag-of-words model is that each word is given the same importance. If you have to do some analysis using text, for example calculating the sentiment of the text, not all words have the same importance: a word like "nice" has higher importance than "today" when it comes to positive sentiment analysis. Let's now look at another technique using which we can give higher importance to certain words.

30. Tf-idf model for converting text to numeric values: TF-IDF is a popular technique to convert text to numeric format. TF-IDF stands for term frequency and inverse document frequency. In this model, if a word occurs more times in a document or a sentence, it is given more importance; however, if the same word occurs in many sentences or many documents, it is given less importance. Let's look at an example. TF is term frequency, that is, the number of occurrences of a word in a document divided by the number of words in that document or sentence. For example, if "today food is good and salad is nice" is a sentence, then the term frequency of the word "good" is 1/8, because the word occurs once and there are eight words in total. Similarly, the term frequency of the word "is" is 2/8, because it occurs twice out of eight words. Going by this alone, "is" would have higher importance than "good" because it occurs more times in this particular sentence. However, if the word "is" is a common word across multiple sentences or documents, its importance should be lower, and that is driven by inverse document frequency, which we'll look at next. IDF, inverse document frequency, is calculated by this formula: log base e of (number of sentences divided by number of sentences containing the word). Again, you don't have to remember this formula; libraries are available to calculate TF and IDF values, so for now just understand the concept. Let's look at a simple example to understand IDF. Imagine we have three sentences: "service is good today", "ambience is really nice", and "today food is good and salad is nice". We already know how to calculate the frequency of different words appearing in these sentences. Now, to calculate inverse document frequency, we take log base e of the number of sentences, which is three for every word, divided by the number of sentences containing the word. For example, "is" appears in all three sentences, so the denominator is three, and log base e of three over three is 0; the word "is" will therefore have lower importance because it is a commonly occurring word. Similarly, the word "good" occurs in two sentences; applying log base e of three over two, we get a low value, 0.41. We can calculate this for all the words: "service" occurs in only one sentence, so its value is 1.09. To calculate the numeric value of each word, we take into account both TF and IDF and simply multiply them. For example, for the word "is", TF is 0.25 and IDF is 0. Similarly, you can calculate the TF-IDF value for all the words. Now you can see that words are given importance based on how many times they occur in a sentence and how many sentences they occur in. Unlike the bag-of-words model, we give more importance to words which occur more times in a particular sentence but are less spread out across sentences. This is the TF-IDF model, using which you can convert text to numeric format.
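Here is a small pure-Python sketch that reproduces the TF and IDF numbers from the example above, just to make the formulas concrete.

```python
# TF = count of word in sentence / words in sentence; IDF = ln(#sentences / #sentences with word).
import math

sentences = [
    "service is good today".split(),
    "ambience is really nice".split(),
    "today food is good and salad is nice".split(),
]

def tf(word, sentence):
    return sentence.count(word) / len(sentence)

def idf(word, sentences):
    containing = sum(1 for s in sentences if word in s)
    return math.log(len(sentences) / containing)     # natural log, as in the lesson

third = sentences[2]
for word in ("is", "good", "service"):
    print(word,
          round(tf(word, third), 2),                 # "is" -> 0.25
          round(idf(word, sentences), 2),            # "is" -> 0.0, "good" -> 0.41, "service" -> 1.1
          round(tf(word, third) * idf(word, sentences), 3))
```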
Now, once you have this text in numeric format, it can be fed to a machine learning model. Each of the words in a text-based classification system is a feature, or independent variable, and your dependent variable is whether the sentiment is positive or not, which can be represented in numeric format as 1 or 0 instead of positive or negative.

31. Creating and saving text classifier and tf-idf models: Let's understand how to build a text classifier using the techniques that you've just learned; we'll also cover some of the core concepts of NLP, or natural language processing. Go to Google Colab and create a new notebook; we'll call it text classifier. There are various libraries available for natural language processing; we'll be preprocessing our text using a popular library called NLTK, and we'll understand NLTK and some core NLP concepts by looking at examples. First, we need to import NLTK. After that, we need to download the NLTK data, and we'll download all of it. While it is downloading, let's look at the text file that we'll be working on to understand NLP and build a text classifier. We'll be using this restaurant review data, which is available on Kaggle and many other places online. It contains restaurant reviews and whether customers liked the restaurant or not: one means they liked it, zero means they did not. You can see some of the positive sentences, like one saying the fries were good, which is marked 1, positive; "would not go back" is a negative review, so it is marked 0. Based on this data we have to build a text classifier, using which we can predict whether a new sentence is positive or not. We'll click on the file to get its path. We need pandas to load the file, so we'll first import numpy as np, then pandas as pd. Using pandas read_csv, we'll read this file from our GitHub repository. We got an error because this file is not comma-separated but tab-separated, so we have to specify the delimiter: the delimiter will be tab, and we'll also set quoting equal to 3, which means double quotes should be ignored. Once it is loaded into a pandas DataFrame, we can see the top records. Now this restaurant review data is loaded into a pandas DataFrame. In natural language processing we remove some of the commonly occurring words, like "the"; even though they might not tell us whether a sentence is positive or negative, they occupy space. Those words are called stop words, and using NLTK we can easily get rid of them. There is another concept called stemming, using which we can derive the root form of words: for example, for both "running" and "run" we can keep the word "run", and for "totally" and "total" we can keep "total". That way we limit the number of words in our analysis. Let's understand how that works. First, we'll import the stopwords library from NLTK, then we'll import PorterStemmer, using which we can derive the root form of words, and we'll instantiate the stemmer class. Now let's look at our dataset in detail. It has 1,000 entries; we have to loop through these thousand entries, remove all the stop words, apply stemming, and create a corpus of clean text. First we'll declare an empty list which will contain the corpus of text. Then, for i in range 0 to 1000, we'll declare a customer review variable which contains the data for each row, fetched using dataset['Review'][i]. Next, we'll get rid of all the stop words and apply stemming using this syntax.
So we'll take all the words which are there in the customer review, and if a word is not in the English stopword list of the NLTK library, we apply stemming to it. Then we concatenate the words to get the sentence back, and finally we append that to the corpus list. We'll also do some further data cleaning. If we look at this review, there are certain characters, like the exclamation mark, which we can also get rid of using a Python regular expression: we'll keep only alphabets, in small and capital letters. You can easily do that in Python using a regular expression, and the syntax looks something like this. This should get rid of all the characters which are not alphabets, and we'll also convert all the sentences to lowercase for consistency. Then we'll split each sentence on spaces to get the words. So the first line removes all the junk characters, then we convert the sentence to lowercase and split it by space; for each word, if it is not in the stopwords, we take that word and apply stemming, and finally we join all the words to get the sentence back. Let's run it and see the output. We need to import the regular expression module as well, and this has to be lower(). After this, we should have a corpus of clean sentences; let's check the values. Take the first sentence: you can see that all the dots have been removed and the entire sentence has been converted to lowercase. Let's check line seven, which is at index six: you can see the parentheses have been removed, and all the stop words, like "a", "in", "the", and other commonly occurring words, have been removed as well. The stemmer helped us derive the root form of each word. Let's look at another example: this is another sentence where words have been changed to their root form. Note that the root form may or may not have any meaning, but it helps us reduce the number of words so that we can do the processing much faster. Next, let's convert the sentences to numeric format using the TfidfVectorizer. Scikit-learn has a TfidfVectorizer class, and we can specify how many words we want, 1,000, 1,500, or whatever number. Using min_df, we specify that a word should occur at least a certain number of times to be considered, so you can get rid of words that occur infrequently. Using max_df, you can get rid of words that occur frequently across all the documents; for example, a max_df of 0.6 drops any word which occurs in more than 60% of the documents. Next, using the vectorizer, we convert the corpus to a numeric array. Let's print X now. These are the TF-IDF values; there are some nonzero values which are not displayed in this notebook. Let's check a sample record: we can see that some of the words have nonzero values. So the vectorizer has created a two-dimensional numeric array from all the sentences in the restaurant review file. In this dataset, Liked is the dependent variable, which contains 1 or 0, so let's create a dependent variable y with the data from this column: we take all the rows and the second column and convert that to a NumPy array. When you print y, you can see all the values, 1 or 0. After this, the steps to create a machine learning model are the same as what we have seen earlier for numeric data. We'll do a train test split, keeping 80% of the data for training and 20% for testing. Let's use the K-nearest neighbors technique to build a classifier; a condensed sketch of these steps appears below.
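This is a condensed, illustrative sketch of the cleaning, vectorizing, and training steps described above. The file name, column names, and parameter values (such as max_features, min_df, and max_df) are assumptions.

```python
# Clean the reviews, vectorize with TF-IDF, and train a KNN text classifier.
import re
import pandas as pd
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

nltk.download('stopwords')
dataset = pd.read_csv('Restaurant_Reviews.tsv', delimiter='\t', quoting=3)

stemmer = PorterStemmer()
stop_words = set(stopwords.words('english'))

corpus = []
for review in dataset['Review']:
    review = re.sub('[^a-zA-Z]', ' ', review).lower().split()   # keep letters, lowercase, split
    review = [stemmer.stem(w) for w in review if w not in stop_words]
    corpus.append(' '.join(review))

vectorizer = TfidfVectorizer(max_features=1500, min_df=3, max_df=0.6)
X = vectorizer.fit_transform(corpus).toarray()
y = dataset.iloc[:, 1].values                                    # the Liked column (1 or 0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
classifier = KNeighborsClassifier().fit(X_train, y_train)
print(classifier.score(X_test, y_test))                          # accuracy on the test split
```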
You can also use any other classification technique, like Naive Bayes, which is a popular classifier for text-based data. Now let's predict using the classifier, derive the confusion matrix, and print the accuracy. Next, let's take a sample sentence and predict whether it is positive or negative. We use the same vectorizer to convert the sentence to numeric format, so this is now the TF-IDF representation of the sentence. After that, we can predict the sentiment using the predict method of the classifier; we got 1, which is positive. Let's take another sample sentence, convert it to TF-IDF format, and predict the sentiment; we got 0, so this is a negative sentence. This is how we can build a text classifier which can read different sentences and determine whether they are positive or negative. Now, if anybody wants to predict using this classifier, they need both the classifier and the vectorizer. Let's export these two objects as pickle files: this is our classifier, which we'll call text classifier, and we'll create a pickle file for the TF-IDF model as well. Now we have both pickle files, and we can download them from the Colab environment and take them to another environment, where we can use them to predict the sentiment of text.

32. Creating a Twitter developer account: Let's go to developer.twitter.com and apply for a developer account. This is different from the twitter.com account that you might have. First log into Twitter and then go to developer.twitter.com. Click Apply, then click on Apply for a developer account. I'll say I'm doing academic research, and give all the details. Specify the reason for creating a developer account, which will give you access to the data API, answer the various questions, and click Next. Read the terms and conditions, click Accept, and submit the application. You need to go to your mailbox and confirm that you've applied. Twitter will then review the application and approve it; it might take a few hours or up to a few days, and you will get an email stating that your application has been submitted for review. Once your application is approved, go to developer.twitter.com and click on Developer Portal. Here you can click on Apps and create an app: give it a name, give it a callback URL, which can be the same as your website URL, and other details. Once the app is created, you can go to Keys and Tokens and get your consumer API key and secret, which you can use to retrieve tweets. You can always go back to Apps, select a particular app, and open the Keys and Tokens tab to see the keys, and you can also regenerate them; if somebody gets to know your keys, you can always regenerate them. You can generate the access token and access secret as well. You can only see these values once, so do copy them and keep them somewhere.

33. Deploying tf-idf and text classifier models for Twitter sentiment analysis: Let's now go to the text classifier notebook on Google Colab and download the pickle files that we generated in the previous lab. First we need to import the files library. Then we can say files.download and specify the file name in quotes to download the pickle files: first download the classifier, then the TF-IDF model. We'll upload the pickle files to the GitHub repository. Now let's create a new notebook for Twitter sentiment analysis. We'll save this and name it Twitter sentiment analysis. This is a new notebook, so the pickle files will not be present here.
We'll copy them from the GitHub repository: copy the link address and first get the TF-IDF model, then copy the link address and get the text classifier. Now both files have been copied. To do Twitter sentiment analysis from a Python program we'll use the Tweepy library, so first let's import tweepy. Then we need to declare four variables to store the consumer key, consumer secret, access token, and access secret. Let's copy them from our developer account: we'll select the app that we just created and copy the key, secret, access token, and access secret. I will regenerate these keys after this lab, so you will not be able to use them. Next, we write the standard Tweepy code to get authorized to Twitter using the consumer key, consumer secret, access token, and access secret. Then we declare an API variable with a certain timeout; we've specified a 20-second timeout, so if there is no tweet for 20 seconds, it will time out. Next, let's fetch tweets for a particular search text; we'll be fetching for "vaccine", which is a popular topic. We'll create an empty list to store all the tweets, and then, using standard Tweepy code, we can fetch them. The only thing you need to pay attention to is how many tweets you want to fetch; I have specified 500 here, so this will keep running until it reaches 500 tweets. You can verify the length of the list of tweets fetched, which is 500, and you can check some sample tweets; these are real tweets that people are posting right now about the covid vaccine. As you can see, the tweets have a lot of special characters, like hashes and at-the-rate symbols, so we can use Python regular expressions to cleanse them. Within a loop, we take the tweets one by one, convert them to lowercase, and remove all the junk characters; you can read more about regular expressions to understand how to deal with different types of text. We can check a sample tweet after cleansing; let's look at this one, and you can see that all the special characters are gone. We have learned various techniques to deploy the pickled files, like REST APIs or serverless APIs; for this lab, let's simply load the pickle files into two variables and use them to predict. We'll import pickle, load the classifier, and load the TF-IDF model into another variable. Let's declare two variables to keep track of positive and negative tweets. Next, we loop through the tweet list, and using the classifier's predict method we predict the sentiment of each tweet; before feeding the text to the classifier, we apply the TF-IDF model to convert it to numeric format. Let's run this. After that, we get the positive and negative tweet counts. Let's see how many positive tweets there are on vaccine: 97 positive and 403 negative. So this is the sentiment of the text analyzed for the last 500 tweets.

34. Creating a text classifier using PyTorch: Let's now understand how to create the text classifier using PyTorch. The steps for text preprocessing and cleansing are the same as what we have done earlier. Once you have the corpus of clean text, you can use the TfidfVectorizer to create a numeric array, and after that you can do a train test split using scikit-learn. Then, instead of creating a model using the K-nearest neighbors technique, we'll use PyTorch to build the text classifier. Import the required libraries for torch. You need to convert the x and y variables to tensor format. One thing to note here is that we have a total of 1,000 sentences in the corpus, and they have 467 features.
These vectorized words now determine our input node size. Earlier we saw an example of a PyTorch model with two input nodes, age and salary; this time we have an input size of 467, because there are 467 different words that act as features for this model. The output size is two, because we are predicting whether the sentiment is positive or negative. We can try different hidden sizes; let me try 500. Similar to the previous example, we have two hidden layers, with fully connected layers from input to hidden, hidden to hidden, and hidden to the final output. So the only changes here are the input size and the hidden size; the rest of the steps are as discussed earlier. Define the model class, then define the optimizer and learning rate, with 100 epochs this time, and now let's train the neural network. You will see the loss getting minimized, and now the model is trained and ready for prediction. We can predict the way we predicted earlier: take a sample sentence, convert it to numeric format, and then convert that to a torch tensor. After that you can predict using the PyTorch model class. From the output we can see that it's a positive sentence, because the second element is higher than the first one. If we take another sentence, similar to the negative one we had earlier, we get an output in which the first element is higher than the second one, so that is a negative sentence. Now you can create a state dictionary from this model, the way you have done earlier, using the state_dict method. Then you can save the model, export it and use it in another environment, or create a REST API from it. 35. Creating a REST API for the PyTorch NLP model: Let's now create a REST API using the PyTorch classifier model and the TF-IDF vectorizer that we just created. First we'll go to the Colab environment and upload the text classifier PyTorch state dictionary file and the TF-IDF model pickle file. You can simply click the upload icon and upload them; once you do that, the PyTorch state dictionary and the TF-IDF model pickle file should be visible here. After that, you can follow the methods you've learned so far. Import the Flask, PyTorch and NumPy libraries. Before we load the PyTorch state dictionary, we declare the neural network class, for which we have to specify the input size of 467; that was the input size for our text classifier. Then we declare a model object from that class. Next we load the state dictionary: the keys matched successfully. Now, as shown earlier, let's create a REST API using flask-ngrok. We import run_with_ngrok, declare an app, and call run_with_ngrok on the app. After that we load the TF-IDF model. Now we declare an endpoint and create a method to read the request and predict the output. First we read the incoming request text into a variable, then convert it to a list, and then, using the TF-IDF vectorizer, we convert it to numeric format. Using the PyTorch model we predict the output, and then we compare the tensors: if the value at index 0 is higher than the value at index 1, it's a negative sentence; otherwise it's a positive sentence. After predicting, we are just comparing which value is higher, and then we return the sentiment. Let's run this and note the public endpoint URL that ngrok generates; a sketch of what the complete endpoint can look like is shown below. Let's now hit the REST API from the Postman tool and pass it some text.
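A minimal sketch of what such an endpoint can look like, assuming flask-ngrok, a saved state dictionary and a TF-IDF pickle with illustrative file names, and the same 467/500/2 layer sizes as above. This is an illustration under those assumptions, not the exact notebook code.

```python
import pickle
import torch
import torch.nn as nn
from flask import Flask, request
from flask_ngrok import run_with_ngrok

INPUT_SIZE, HIDDEN_SIZE, OUTPUT_SIZE = 467, 500, 2

# The network class must match the architecture used during training
class TextClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(INPUT_SIZE, HIDDEN_SIZE)
        self.fc2 = nn.Linear(HIDDEN_SIZE, HIDDEN_SIZE)
        self.fc3 = nn.Linear(HIDDEN_SIZE, OUTPUT_SIZE)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        return self.fc3(x)

model = TextClassifier()
model.load_state_dict(torch.load('text_classifier_state_dict.pt'))  # hypothetical file name
model.eval()

with open('tfidf_model.pkl', 'rb') as f:   # hypothetical file name
    tfidf = pickle.load(f)

app = Flask(__name__)
run_with_ngrok(app)          # exposes a public ngrok URL when app.run() is called

@app.route('/predict', methods=['POST'])
def predict():
    text = request.data.decode('utf-8')
    features = tfidf.transform([text]).toarray()
    with torch.no_grad():
        out = model(torch.tensor(features, dtype=torch.float32))
    # index 1 larger than index 0 means positive, otherwise negative
    return 'positive' if out[0][1] > out[0][0] else 'negative'

app.run()
```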
In Postman we'll pass some text and try to predict its sentiment: this one comes back as a positive sentence. Now let's send something else, and this one is negative. So this is how you can create a REST API for your PyTorch NLP model and expose it to the internet. 36. Twitter sentiment analysis with PyTorch REST API: We'll now integrate the Twitter sentiment analysis app with this PyTorch REST API. Let's first ensure the PyTorch NLP REST API is running. We created the REST API using ngrok, and this is the public URL that ngrok generated for it. Let's hit this endpoint from Postman; we need to append the /predict endpoint. We are able to predict sentiment using the REST API running in the Colab environment. Now let's open the Twitter sentiment analysis notebook and try to hit the REST API from there. We first declare a URL variable, then create a JSON string containing the input data. We import json and requests, create the input data variable, send a request and print the response. The status is 200, and if we print response.text we get the actual response string; the prediction is negative. So we are able to hit the REST API from another notebook, and you can hit it from anywhere. Let's now modify the Twitter sentiment analysis program to use the REST API. We don't need the pickle files any more, so let's delete that code. We'll fetch the tweets as before: 500 tweets, again on the vaccine topic, and cleanse them. We don't need to import pickle, because we will be invoking the REST API. We keep two variables to track positive and negative tweets, as before. Earlier we used the TF-IDF vectorizer and the classifier model to predict; now we'll invoke the REST API for each tweet: we create a JSON string, get the response using requests.post, and the sentiment is response.text. We'll go to the PyTorch REST API notebook and modify the code to simply return the sentiment, so it returns positive or negative. Let's rerun it: restart the runtime and run the entire program from the beginning. We got a different ngrok URL, so let's copy it, go back to the Twitter sentiment analysis notebook, and update the URL variable. Now, for each tweet, once we receive the response we check whether the sentiment is positive and increase the counters accordingly. We'll fix a small typo here and then execute this code block. For each tweet we are making a REST call to get the sentiment, and based on the response we increase the counters. While it is processing, let's go back to the other notebook and check the log; we can see the REST API receiving the real-time tweets, processing them and returning the sentiment, positive or negative. Let's go back to the Twitter sentiment analysis notebook and verify the counts: there are 6 positive and 494 negative tweets. We're analyzing real-time tweets, so every run will give a different result depending on what people are tweeting. The key thing we have learned here is how to integrate with a machine learning model through a REST API. 37. Creating a text classifier using TensorFlow: Now let's understand how to create a text classifier using TensorFlow and Keras. Once our data is ready, we can create a TensorFlow model. Similar to the earlier examples, we'll create two hidden layers and one output layer, with 500 nodes in each hidden layer.
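A minimal sketch of such a Keras model, with random stand-in data so it runs on its own. As the next step notes, the input layer does not need to be declared; Keras infers it from the data.

```python
import numpy as np
import tensorflow as tf

# Stand-in for the TF-IDF training data: 800 sentences, 467 features, 0/1 labels
X_train = np.random.rand(800, 467).astype('float32')
y_train = np.random.randint(0, 2, size=800)

# Two hidden layers of 500 nodes each and a single sigmoid output
# (probability of the positive class); no explicit input layer is declared
model = tf.keras.Sequential([
    tf.keras.layers.Dense(500, activation='relu'),
    tf.keras.layers.Dense(500, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, verbose=0)
model.summary()
```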
In Keras you do not have to specify the input layer, because it is automatically determined from the input data. Now let's train the model with 100 epochs. Once the model has been trained we can check the loss and accuracy and also look at the model summary. Now we can predict the way we predicted earlier for the KNN and PyTorch models: take a sample sentence, convert it to numeric format, and then use the TensorFlow model's predict method to predict the sentiment. It is 0.79, which means it's a positive sentence. Similarly, for the other sentence we got a very low number, on the order of e-07, so that's a negative sentence. Now we can save and export this model, and you can create a REST API using TensorFlow Model Server the way we have shown earlier. 38. Creating a REST API for TensorFlow models using Flask: You have seen how to create a REST API for TensorFlow models using TensorFlow Model Server. We can also create a REST API for TensorFlow models using Flask. Let's see how that can be done in the Colab environment; you can try the same thing on a virtual server or any other environment. We'll download the text classifier model files, the protobuf files that we created earlier, and we will also download the TF-IDF vectorizer. Then we'll upload both to a new Colab notebook: the TF-IDF model and the text classifier. After that we need to unzip the file. Then let's import TensorFlow and the load_model utility, load the model files into a model variable, and import pickle to load the TF-IDF model. After that we'll install flask-ngrok. The steps should be similar to what we did for the scikit-learn and PyTorch models: we import all the Flask libraries, create an app and run it with ngrok. Here we create an endpoint and read the incoming request. You already know from the PyTorch and scikit-learn examples how to predict using a model, so we'll use the TensorFlow model and the numeric text to predict the output. If the output probability is greater than 0.5 it is a positive sentence, otherwise it is a negative sentence. Finally we run it and get a public URL with which we can predict sentiment. Copy it, go to Postman and send a POST request for this sentence: the sentiment is negative. If we change it to something else, the sentiment is positive. So this is how you can create a REST API using Flask for your TensorFlow model. You can also use TensorFlow Model Server to create a REST API. 39. Serving TensorFlow models serverless: Let's now understand how to create a serverless REST API for the TensorFlow text classifier we've just created. You can create the model and download it to a local environment, as we have shown earlier. When we download the model, we get a protobuf file and the weights, which are inside the variables folder. Within the Google Cloud Function environment we won't be able to create the directory structure needed to load this protobuf file. So what we will do is load the weights that are present under the variables subdirectory and use them to reconstruct the model; a sketch of how weights are saved and loaded back this way is shown below. Let's go to the Google Cloud console and search for Buckets. We'll create a new bucket and give it a name; you can keep everything else as the default and just create it. Now let's upload the TF-IDF model and the TensorFlow weights. We'll go to the variables directory and upload the weight files.
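Here is a minimal, self-contained sketch of that weights round trip. The layer sizes follow the walkthrough, and the 'variables' prefix mirrors the file names found under a SavedModel's variables/ folder; this is an assumption-laden illustration rather than the exact Cloud Function code.

```python
import tensorflow as tf

def build_model():
    m = tf.keras.Sequential([
        tf.keras.layers.Dense(500, activation='relu'),
        tf.keras.layers.Dense(500, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    m.build(input_shape=(None, 467))   # 467 TF-IDF features, matching the trained model
    return m

model = build_model()
# Writes variables.index and variables.data-00000-of-00001, the same kind of
# checkpoint files you find under a SavedModel's variables/ subdirectory
model.save_weights('variables')

# In a restricted runtime such as a Cloud Function, rebuild the same
# architecture and load the weights back, passing the prefix without extension
rebuilt = build_model()
rebuilt.load_weights('variables')
```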
Next we'll upload the pickle file that contains the TF-IDF model. Now the weights and the TF-IDF model have been uploaded to the bucket. Let's go to Google Cloud Functions: search for Cloud Functions and create a new function. We'll set the trigger type to HTTP, and select a slightly higher memory allocation, which is required for the TensorFlow model; we'll select 1 GB. We'll also allow unauthenticated invocations and save it. For the runtime we'll select Python 3.7. Now let's look at the code. We first import request, pickle, the Google Cloud Storage client and TensorFlow. Then we create an instance of the bucket as shown earlier, pointing at the bucket we just created, and load the weight files into blob variables, the variables index file and the data file, and then load the pickle file. These are similar steps to what we applied for the scikit-learn model, except that this time we are loading weights. We then download the weights to the /tmp directory so the function can access them. Now, to reconstruct the TensorFlow model, we have to define the model class; this should match the model that you created earlier. Once that is done, you can load the weights using load_weights, giving it 'variables', which is the file name without the extension for both files. Then you predict the way you predicted earlier: read the sentence from the incoming request, convert it to numeric format using the TF-IDF vectorizer, and then predict using the TensorFlow model's predict method. These steps are similar to what we have done earlier; the only differences are how we load the weights and how we reconstruct the TensorFlow model. In requirements.txt we have to add the required packages: tensorflow, google-cloud-storage, scikit-learn and requests. Make sure you use the same versions, otherwise you might face challenges. Let's deploy it now. Here you see options to view logs; if the deployment fails for some reason, you can go to View Logs and see what went wrong. While it is deploying, let me pull up another function I've deployed earlier. Here you see logs for the various runs we've done; you can click a log entry to get more details in case you face an error, and you can also copy it to the clipboard and take it to a text editor for analysis. You'll find logs both for deployment and for function invocations. Let's go back and see whether our function got deployed; it's still loading... now it's successful. Let's click on it and go to the Testing tab. We'll pass it a JSON string to test the function: we got a positive sentiment, and if we change "good" to "bad", the output is a negative sentence. We can also invoke this Cloud Function from outside using the HTTP trigger endpoint. Click on the Trigger tab and you'll see the trigger URL. Copy it, go to Postman, paste the entire URL, put the sentence in the request body and send it. This hits the Cloud Function and gives us the prediction: the sentiment is negative, and if we change the sentence to something positive, we get a positive sentiment. So this is how you can create a serverless Cloud Function for your TensorFlow models. 40. Serving PyTorch models serverless: We'll now create a serverless REST API for the PyTorch model using a Google Cloud Function.
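Before diving into the PyTorch version, note that both serverless functions rely on the same pattern for pulling artifacts out of the bucket into the function's writable /tmp directory. A minimal sketch, with illustrative bucket and file names:

```python
from google.cloud import storage

# Bucket and blob names are illustrative; use whatever you uploaded earlier
BUCKET_NAME = 'my-model-bucket'

client = storage.Client()
bucket = client.get_bucket(BUCKET_NAME)

# /tmp is the only writable location inside a Cloud Function
artifacts = {
    'variables.index': '/tmp/variables.index',
    'variables.data-00000-of-00001': '/tmp/variables.data-00000-of-00001',
    'tfidf_model.pkl': '/tmp/tfidf_model.pkl',
}
for blob_name, local_path in artifacts.items():
    bucket.blob(blob_name).download_to_filename(local_path)
```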
At the time of this recording, Google Cloud supports the 1.0.1 version of PyTorch, while 1.7 is currently the latest, so we need to downgrade PyTorch on Google Colab to 1.0.1 and also downgrade torchvision to 0.2.2. Let's do that in the text classifier PyTorch notebook we created earlier. After that, rerun the entire notebook and download the PyTorch state dictionary. This step is the same as before; the only difference is that we have downgraded PyTorch to version 1.0.1. After importing torch you can verify the PyTorch version; it has to be 1.0.1. Now go to Google Cloud, create a bucket, and upload the text classifier PyTorch state dictionary file and the TF-IDF model pickle file to that bucket. Then head to Google Cloud Functions and create a new function for the PyTorch text classifier. You have to load the state dictionary first, and then the TF-IDF vectorizer. Declare the PyTorch neural network class and then load the dictionary. The rest of the code is similar to what you've done for TensorFlow and scikit-learn; you just need to update the part where you do the prediction: if the value at index 0 is higher it's a negative sentiment, if the value at index 1 is higher it's a positive sentiment, and then return that sentiment. In requirements.txt you need to define all the dependencies: the google-cloud-storage library, scikit-learn for the TF-IDF model, the requests library, NumPy and PyTorch. For PyTorch we have to specify the download path for the 1.0.1 version; any version higher than that currently doesn't work. After that you deploy, copy the HTTP URL, go to Postman and predict. The sentiment is positive, and it is coming from the PyTorch serverless function. If I change this to a negative sentence, we get a negative sentiment prediction from PyTorch. So this is how we can create a serverless REST API for PyTorch models. 41. Model as a mathematical formula: For some models, we can extract the formula from the model and use it in another application; we do not need to store the binaries. Linear regression is one such model, where the formula can be extracted and used in other applications. So let's understand linear regression through a simple example. Unlike classification, where we predict the class of the output, here we predict continuous values. For example, if this chart shows the car price for a certain number of cylinders, then, given a number of cylinders, can we predict the car price? This type of prediction is called regression. Now, given these data points, how do we determine the price of a new car for a certain number of cylinders? Using linear regression we can easily solve this problem. Linear regression is nothing but trying to find the line that best fits these points, and that line is calculated based on the formula y = a + bx, where a is the intercept and b is the coefficient of the line. For any new point, if we know the x value, we can easily determine the y value using this formula. Scikit-learn and other machine learning libraries provide a class into which you can feed the data points and get this regressor, or predictor. How does the model determine the best-fitting line, and how do we know the accuracy of the prediction? That is done through a simple concept called R-squared, which is also known as the coefficient of determination.
What this means is: how good is the line compared to the line represented by the mean value of all the points? For example, if this is the mean value of all the data points, we could also predict using that mean value; if we come up with a new line via linear regression, we need to see how much better that line is than the mean line. To calculate the R-squared value the concept is simple. You calculate the error for each of the points, that is, how far the line is from the actual value at that point. If this is the actual value, the point at which the vertical red line meets the regression line is the predicted value, and the distance marked in red represents the loss, or the error in prediction. You calculate the loss for each point, square it and add it up, and you get the sum of squares of residuals, which appears in the numerator. Similarly, you calculate how far the mean line is from each actual value, which is shown in green: that is the total sum of squares. The lower the error, the lower the sum of squares of residuals, so the numerator tends to zero as the model becomes more accurate, which means the R-squared value gets closer to one. So the higher the R-squared, the better the accuracy, and R-squared can never exceed one. You may or may not remember the exact formula of R-squared, but for any model you will find a method to get the R-squared value; all you have to check is whether it is close to one. If it is, you know your model is fairly accurate. Let's apply this concept and solve a use case; then we'll see how to extract the formula and use it to predict the output for a new set of values. We have a new dataset called house_price.csv with two fields, distance and price. Distance represents how far the house is from the city center, and price represents the house price. As you can see, the higher the distance, the lower the price. Now, how do we calculate the price of a new house at a particular distance from the city center? We need to build a machine learning model using the linear regression technique, which will learn from this data and let us predict the house price for new data. Let's import the standard libraries; this time we'll also import matplotlib so that we can plot house price against distance. Next, let's load the dataset into a Pandas DataFrame. As you can see, the dataset has been loaded; let's describe it to get some statistical information. There are 40 records, and we can see the mean, standard deviation and other values. Let's separate the independent and dependent variables: X will hold the distance to the city center and y will hold the house price. We can also plot house price against distance to see how it looks on a chart. We can see there is a linear relationship: as the distance increases, the house price goes down, and it does so in a roughly linear fashion. Now, using linear regression, we have to find the line that best represents these points, and with that line we'll predict the output for new data points. We'll comment the plot out for now and run it again. Using scikit-learn's train_test_split we create the training data and test data: 32 records for training and 8 records for testing.
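As a reminder before we read it off the model in the next step, the R-squared (coefficient of determination) described above is:

```latex
R^2 = 1 - \frac{SS_{\text{res}}}{SS_{\text{tot}}}
    = 1 - \frac{\sum_{i}\left(y_i - \hat{y}_i\right)^2}{\sum_{i}\left(y_i - \bar{y}\right)^2}
```

where y_i is the actual value, the hatted y_i is the value predicted by the regression line, and the barred y is the mean of all the actual values.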
Scikit-learn provides a LinearRegression class with which we can create a regressor object; that becomes our model. This regressor is the line, or model, that has been trained on the training data. From the regressor we can easily calculate the R-squared value: there is a score method that gives us R-squared. Let's print it. The R-squared value is 0.807, so it's not very close to one, which means the model is not very accurate. We are anyway trying this out with a very small dataset just to understand how machine learning deployment works, so for now we are fine with this R-squared value and we'll move on. From the regressor we can easily read the intercept and coefficient: the intercept is 610710. Now let's get the coefficient: it is -72635, negative because the house price goes down as the distance increases. Now, anybody who wants to use our model can take this intercept and coefficient and compute the house price; we do not need to send them the regressor object in binary format or export the model. All we need to share is the formula. Our formula becomes y = intercept + coefficient * x, that is, price = 610710 - 72635 * distance. We'll first predict using the predict method: we feed the training data to the regressor and get the predictions. These are the predicted house prices; let's compare them with the real house prices. For some cases the prediction is very close, and in some cases it is a little off from the actual price. These are the actual prices, and these are the predicted values. We can also plot the predicted values against the actual values: a scatter plot for the actual values and a line for the predicted values. This line represents our regressor, or predictor. Now, for any new point, we can easily determine the house price given the distance to the city center. Let's predict the house price for a house 2.5 miles from the city center: the value comes out around 429120. We can also get the same output using the formula y = intercept + coefficient * x, and we get 429120. To share this model with anyone, we can simply share the formula. We could also create pickle files and REST APIs, but this is one of the options available for exporting linear regression models. 42. Model as code: In many real-world scenarios you'll be running the model as code: you'll take the model and integrate it with another application, and the main application and the model code will run within the same runtime. Let's see how that works. Earlier we created a model for house price prediction. First we'll write a Python program to predict the house price based on that model, then we'll integrate that model code with another application. This is the formula we came up with for the house price given the distance to the city center. Let's now write a Python program which takes the distance to the city center as a parameter and predicts the house price. In Python you create a class using the class keyword, give it a name and put a colon; then, within the indented block, you can define variables and functions. First we'll add an __init__ method, which is like a constructor that gets invoked when the class is instantiated; this is optional, and you don't have to specify it. After that we'll create a function which will predict the house price; a sketch of the full class is shown below.
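A minimal sketch of that model-as-code class, with the intercept and coefficient we just extracted baked in (the class and method names here are my own, not necessarily the ones used in the video):

```python
class HousePricePredictor:
    """The linear regression model reduced to its formula: price = a + b * distance."""

    INTERCEPT = 610710.0      # 'a' from the trained regressor
    COEFFICIENT = -72635.0    # 'b' from the trained regressor

    def __init__(self):
        pass  # nothing to set up; the formula is baked into the class

    def predict_price(self, distance_to_city_center):
        distance = float(distance_to_city_center)
        return self.INTERCEPT + self.COEFFICIENT * distance


if __name__ == '__main__':
    predictor = HousePricePredictor()
    print(predictor.predict_price(2.5))   # roughly 429,000 for a house 2.5 miles out
```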
This function takes the distance as a parameter and returns the price. This is our model code for house price prediction. We can add a main function to test it: within the main function we instantiate the class and then call predict, passing the distance as a parameter. We can either pass a float value or convert whatever parameter we get to a float; let's do that. Now we'll run it, with a print statement to print the output. We can see the house price is 247530, and if we pass a different value, a different house price gets printed. Now let's use this model in another Python application. Create a new Python file; let's call it another_python_app. Here we'll import house_price_predictor. In Python you can import another Python file by specifying the file name, and if it is in a subdirectory, you say from that subdirectory import the Python file. Then we create an instance of the model by invoking the class from that file, and after that we can predict by invoking the predict method. Let's run this: we get the house price. This is how you can integrate your model code within another application; the model code and the main application run in the same runtime environment. You can also run your model code from another application by invoking a shell command. That application need not be written in Python; it can be any application that can call a shell command and execute the model code. Let's see how that works in Python. We'll create a copy of this Python file and call it house_price_predictor2. Since we'll be running it as a separate process, we need to pass the distance as a command-line argument. Python has the argparse package, with which you can read command-line arguments. First import it, then create an ArgumentParser instance and define which arguments you want to read. We'll add an argument called distance, for the distance to the city center; you can add as many arguments as you need. Then we call parse_args to get all the arguments. In the constructor we extract the distance argument, store it in a variable distance_to_city_center, make it an instance variable with self., and convert it to float. Now we do not need to pass the distance as a parameter; whatever argument we receive, we use it to predict the house price. In Anaconda Spyder you can configure command-line arguments by clicking Run, then Configuration per file, then the command line options box, and specifying the arguments there. We have the distance in our arguments. Let's run it now: we can see the prediction, 320166. We can change it to a different value and see the output; let's set it to 3, and we see a different house price. So this program takes a command-line argument and gives the prediction. Let's now invoke this program using a shell command. You can use the os library, specifically the os.system method, to execute any command. To run the Python file, we say python, the file name, and whatever command-line argument we have to pass. Let's go back to the house_price_predictor2 Python file and write the model prediction to disk: we'll create a file called model_output and write whatever prediction we get into it. We also need to call the predictor here. Let's execute this block of code; it executed. If we go to the Files tab, we can see the prediction written to the file. A sketch of both pieces, the argparse-driven predictor and the caller that invokes it as a shell command, is shown below.
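A minimal sketch of the two pieces described above; the file names, argument name and output file name are illustrative, not the exact ones from the lab.

```python
# house_price_predictor2.py -- the model run as a separate process
import argparse

class HousePricePredictor2:
    def __init__(self):
        parser = argparse.ArgumentParser()
        parser.add_argument('--distance', help='distance to the city center in miles')
        args = parser.parse_args()
        self.distance_to_city_center = float(args.distance)

    def predict_price(self):
        return 610710.0 - 72635.0 * self.distance_to_city_center

if __name__ == '__main__':
    predictor = HousePricePredictor2()
    # Write the prediction to a file the calling application can read back
    with open('model_output.txt', 'w') as f:
        f.write(str(predictor.predict_price()))
```

And the calling side, which could be any application able to run a shell command:

```python
# caller.py -- invoke the model as a separate process and read its output
import os

os.system('python house_price_predictor2.py --distance 2.5')
with open('model_output.txt') as f:
    print('Predicted price:', f.read())
```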
Let's pass a different value, run it again and open model_output: we can see a different value. This is how we can invoke a model as a separate process using a shell command from Python. So these are the two ways you can execute a model as code: either have the model code within the main application, or run it as a separate process. When you run it as a separate process, you need to decide how you will send data to the model. If you are passing a limited number of parameters, that can be done using command-line arguments. However, if you have to pass a large file and then get the prediction, you can store that file somewhere the model code can access it; within the model code you read the file, do the prediction and store the prediction in another file, from where the main application can read it. 43. Storing and retrieving models from a database using Colab, Postgres and psycopg2: Storing models in a database is a common practice in the real world. Data scientists can create models, convert them to pickled binary format and store them in a database, from where other applications can access the models and use them for prediction. Let's see an example of how to store and retrieve models from a database. There are various ways to try this; one of the easiest is to create a PostgreSQL database on Google Colab and use it as a model store. Let's see how that works. We'll go to Google Colab and create a new notebook; let's call it ML models in db. First, let's install PostgreSQL on Google Colab. Colab is a Linux environment, so we can execute the PostgreSQL Linux installation commands to install Postgres in this notebook. First we make sure all the packages are up to date; this is a Linux command which updates the package lists in the environment. Then we execute the install command for PostgreSQL. Once the installation is complete, we start the PostgreSQL service; the database server is now running in this notebook. We'll alter the postgres user's password: by default you get a user named postgres, and we can set its password; I've set it to postgres. Next we'll create a database called futurex, but before that we'll drop any existing database of the same name. Using this command you can execute any SQL statement, so let's try it out. It says the database doesn't exist, and that is okay; but if you have to rerun this notebook in the future, it will drop the existing database and then create a new one. Now we create the new database futurex. Next we'll create a table which will store all the models. The easiest way to do that is to have a text file with the create statement and execute it from Colab. We have a simple SQL file containing one create statement, preceded by a drop statement, so if the table already exists it will be dropped first. It creates a table futurex_model_catalog with three fields: model_id, model_name and model_file, which will store the actual model. We have kept model_id as an integer, model_name as varchar (which can store any string), and model_file as bytea, which is the type we'll use to store the pickled bytes. Now we need to upload this SQL file to Colab and execute it. Click on the files icon and upload create_models_table.sql, the file we just looked at. The SQL file has been uploaded.
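Roughly what that create script contains, shown here together with a psycopg2 call that could execute the same DDL (psycopg2 is introduced in the next step; the table and column names follow the walkthrough, and the connection details are the Colab defaults used above):

```python
import psycopg2

# Roughly the contents of create_models_table.sql; IF EXISTS avoids an error
# on the very first run, when the table does not exist yet
CREATE_SCRIPT = """
DROP TABLE IF EXISTS futurex_model_catalog;
CREATE TABLE futurex_model_catalog (
    model_id   INTEGER,
    model_name VARCHAR(100),
    model_file BYTEA          -- the pickled model stored as raw bytes
);
"""

conn = psycopg2.connect(user='postgres', password='postgres',
                        host='localhost', port=5432, dbname='futurex')
cur = conn.cursor()
cur.execute(CREATE_SCRIPT)
conn.commit()
cur.close()
conn.close()
```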
Now, to execute the file, we need to connect to the database and then specify the file name. Here localhost is the host, the local host of the Colab environment, and we connect by specifying port 5432, which is the default port, the password and the database name, and then call the script. Let's execute this. The drop failed because the table doesn't exist yet, which is fine, but the table will have been created; our script had one drop statement and then one create statement. Let's now check whether the table got created. There are various libraries available to interact with a Postgres database from Python; we'll use psycopg2, which is one of the packages for working with PostgreSQL from Python. We import it. In the Colab environment psycopg2 is already available; in other environments you might have to install it with pip install. Then we connect to the database using psycopg2's connect method, specifying the user id, password, host name and database name. A connection has now been established. Next, we create a cursor; using the cursor we can execute different queries. Let's write a simple select query to get all the records from the model catalog table. The cursor has an execute method with which we can run any query, and various methods to retrieve one or many records; let's call cursor.fetchall to get all the records and store them in a variable called models. Currently it is empty, because we have created the model catalog table but not stored any model in it yet. Let's now create a model in this Colab notebook. We will use the same code we tried earlier to build a classifier using the KNN technique: it reads data from the store purchase CSV and creates a model, so we need to upload that CSV to this notebook. Let's upload it; done. Now we run the code to create the model; this is the same code we have seen earlier. The classifier has been created, and we also used the StandardScaler to scale the data. We can predict with the model to make sure everything went okay: the accuracy is 0.875, so it ran fine. Next, we need to store the classifier and the standard scaler in the Postgres model catalog table. Using pickle we convert the classifier and the scaler to binary string format. This time we won't write them to files; instead we store the binary strings in the local variables pickled_classifier_string and pickled_standard_scaler_string. Let's execute that. Now we need to store these pickled strings in the model catalog table in Postgres. We create an insert statement which populates the table's three fields, id, name and the model binary string, and we pass the values as a tuple. For the classifier we use model id 1, the name classifier, and the classifier string. We create a cursor and execute the query by calling cursor.execute, passing the insert statement and the tuple. We do the same thing for the standard scaler: this time model id 2, the name scaler, and the scaler string, and we execute the insert again. Finally, we close the cursor and commit the connection. Now the models have been stored, both the standard scaler and the classifier. Any application which has access to this database can retrieve the models and use them.
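The store side described above comes down to roughly this. Here `classifier` and `scaler` stand for the trained KNN model and StandardScaler from the notebook, and the connection details follow the Colab setup; treat it as a sketch under those assumptions.

```python
import pickle
import psycopg2

# classifier and scaler are assumed to be the trained objects from earlier cells
pickled_classifier_string = pickle.dumps(classifier)
pickled_standard_scaler_string = pickle.dumps(scaler)

conn = psycopg2.connect(user='postgres', password='postgres',
                        host='localhost', port=5432, dbname='futurex')
cur = conn.cursor()

insert_sql = ("INSERT INTO futurex_model_catalog "
              "(model_id, model_name, model_file) VALUES (%s, %s, %s)")
cur.execute(insert_sql, (1, 'classifier', psycopg2.Binary(pickled_classifier_string)))
cur.execute(insert_sql, (2, 'scaler', psycopg2.Binary(pickled_standard_scaler_string)))

cur.close()
conn.commit()
conn.close()
```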
For prediction, we'll retrieve the models from the Postgres table in the same Colab notebook. First we create a cursor, then write a select query to select all the models from the model catalog, execute it and store the output in a models variable. Let's do that. We can see that the classifier and the standard scaler are fetched from the Postgres table. This is how any application can access the models from a table; in a real-world scenario it would be a shared database that data scientists and other applications can access. models is a list of tuples: we have two elements, and each element is a tuple. We get the classifier from the first element of the list, index 0, and the third element of that tuple, index 2. Similarly, we get the scaler from the second element of the list, index 1, and the third element of the tuple, index 2. Using pickle.loads we can read the binary strings and store the objects in local variables. Now the classifier and scaler objects retrieved from the database are in local variables, and we can use them to predict. Let's predict for age 40 and salary 20000 using the same technique as earlier, this time with the classifier and the scaler from the database: it works fine. Similarly, let's predict for age 42 and salary 50000: it gives the expected result. This is how machine learning models can be stored in a database, retrieved and used in other applications. Instead of storing the models, you might decide to store the predictions, so that other applications don't need to run the models and can directly take the prediction, whether it's a probability or whatever output your model produces. 44. Creating a local model store with PostgreSQL: In this lab we'll see how to create a local Postgres database and use it as a model store. First we'll download and install PostgreSQL. Go to the PostgreSQL site and open the download link; I'll be downloading for Windows. Click on Download the installer, pick version 10.14, and download the Windows 64-bit build. Once it's downloaded, run the installer and click Next. You can leave the default directory, and there is no need for Stack Builder. Give a password (I'm using admin; you can choose any password). 5432 is the default port. Let's install. PostgreSQL has been installed. To open pgAdmin, simply search for pgAdmin and click on it; pgAdmin is the browser-based admin interface to the database, and it opens on a localhost port. You can also open SQL Shell (psql), which should also be present on your machine; that is the command-line tool for Postgres. You'll be prompted for the server: localhost, hit Enter; the database: the default postgres database, hit Enter; the port: 5432 is the default, hit Enter; the username: postgres; and the password for the user postgres, which is the password you gave during installation. I gave admin, so I'll enter that, and now I am in the Postgres database. Now that we are connected to the Postgres database locally, let's create a schema and, under that, a table which will be our local model catalog. We'll create a schema called modelstore, and within that we'll create the futurex_model_catalog table; this is the same create script we tried in Colab earlier. Let's try to fetch from this table: nothing is returned, because we have not stored any models in this model catalog table yet.
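For reference, the retrieve-and-predict side we just walked through (and the one we'll repeat against the local database) looks roughly like this. Table and column names follow the walkthrough; the connection details are the Colab defaults and would change for the local database and schema.

```python
import pickle
import psycopg2

conn = psycopg2.connect(user='postgres', password='postgres',
                        host='localhost', port=5432, dbname='futurex')
cur = conn.cursor()
cur.execute("SELECT model_id, model_name, model_file FROM futurex_model_catalog")
rows = cur.fetchall()            # a list of (id, name, bytes) tuples
cur.close()
conn.close()

# Rebuild Python objects from the pickled bytes stored in the bytea column
models = {name: pickle.loads(bytes(blob)) for _, name, blob in rows}
classifier = models['classifier']
scaler = models['scaler']

# Predict for a 40-year-old customer with a 20,000 salary, as in the walkthrough
sample = scaler.transform([[40, 20000]])
print(classifier.predict(sample))
```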
Let's now go to Spyder, create a model, and store it in the Postgres table in the local database. Before we can interact with Postgres from a local Python file, we need to ensure psycopg2 is installed in our Python environment. Since we are using Anaconda Spyder, we should check that psycopg2 is available in the conda environment: go to the Anaconda Prompt and run pip install psycopg2. I've already installed it, so it says the requirement is already satisfied, but if it is not installed this will install it. After that you can verify it by simply importing psycopg2, and it imports correctly. Now we have a new Python file; within it we'll create a model and store it in the table. The code is exactly the same as what we tried in Colab. We need to ensure the store purchase CSV is available in the same directory, and then we create the model using the KNN technique. Let's run up to the point where we check the accuracy: 0.875, so it is working correctly. Next, we store the classifier and scaler in binary format, and after that we insert the models into the Postgres table. We need to make sure both the schema name and the table name are specified, and locally we have a different password, admin, so that has to be reflected here too. The rest of the code is the same as in Colab. Let's run this block now; it executed successfully. We can go to psql and query the table: both records are fetched, and the model is stored in binary format. Now we've created a model and stored it locally in a Postgres table; any application which has access to that Postgres database can extract the models and use them for prediction. Let's try it out in another Python file, which we'll call use_models_from_db, with the same code as in Colab: import psycopg2, establish a connection, select from the model catalog table (make sure you have the schema name captured here), and then do the prediction. We get the expected result. This is how you can have a local Postgres database and use it to store and retrieve models. 45. Machine Learning Operations (MLOps): MLOps, or machine learning operations, is the combined effort of data scientists and operations people to take models to production. MLOps is to machine learning what DevOps is to software engineering. In DevOps we establish a CI/CD process to continuously integrate code and deploy it to the production environment. In a machine learning project there are additional things to take care of: how do we continuously train our models, monitor their performance, and retrain whenever required? MLOps guides you on how to package and validate your machine learning experiments. MLOps is a relatively new field, and the recipe for successful adoption is still being figured out. In many organizations data scientists work independently in notebook environments, there is usually no clear process defined for taking those models to production, and only very few organizations have the right tools in place to monitor model performance. If you are an operations person with some machine learning background, you'll find ample opportunity in the MLOps space. There are multiple tools in the market to help you with the overall machine learning lifecycle. Airflow is a popular tool for orchestrating data pipelines, and you can also look at Oozie in a Big Data Hadoop environment, while other tools focus on feature engineering and feature scaling.
We have seen libraries like TensorFlow, PyTorch and scikit-learn for machine learning training and evaluation. For deployment we have TensorFlow Serving, serverless cloud environments and various other options. Finally, when the model is in production, various monitoring tools can be used to track its performance, and you can use tools like Jira and Git for versioning and managing the overall process. Let's look at a popular tool called MLflow for machine learning lifecycle management. 46. MLflow Introduction: There are various tools available in the market with which you can easily run your machine learning experiments and deploy your models locally and in a cloud environment. MLflow from Databricks is one such product. With MLflow you can work with any library, build your models and deploy them easily in any cloud environment. There are currently four key components, as shown here. Using MLflow Tracking, you can easily track your machine learning training experiments: what parameters you are using and what metrics you are generating. We'll see that in a demo shortly. With the Models component, you can easily deploy your model locally and in various cloud environments. Let's dive in and see the MLflow tool in action. 47. Tracking Model training experiments with MLflow: Let's see how to get started with MLflow through a very simple example. We'll open the Anaconda Prompt: search for Anaconda Prompt and open it. That's the default Python on this machine; whichever Python environment you are using, you need to install MLflow within it. Since I'm using Anaconda, I'll install MLflow within the conda environment: simply run pip install mlflow and hit Enter. I've already installed it, so it says the requirement is already satisfied; that is fine, and this is how you install MLflow. We now have MLflow in the conda environment. Next we'll create a program and see how to track model accuracy using MLflow. Let's open the ML pipeline Python file that we created earlier and copy it to a new file; we'll call it ml_pipeline_mlflow. To use MLflow, we need to make a few changes. First, import mlflow and import mlflow.sklearn, because we have a scikit-learn model in this example: a classification model that predicts whether a customer will buy based on age and salary, built with the common scikit-learn libraries. Next, we create a main method, and we get rid of all the code that creates pickle files; MLflow will generate those for us. We keep the code up to the point where we check the model accuracy, and we store that accuracy in a separate variable. Let's add a few print statements, such as "completed feature scaling" and "model trained", just to see how the program progresses in the console. So far, for MLflow, we have added two import statements and created a main method. Next, we give our experiment a name with mlflow.set_experiment; you can give it any name. That's all the setup we need. Then we specify what we want to track. We'll track two things: the model accuracy, so every time we run the model we can see the accuracy, and the model itself, which for each run is the classifier. We want to track these two; a sketch of what the instrumented script can look like follows.
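A minimal sketch of such an instrumented script; the experiment name and the synthetic stand-in data are illustrative, while the real script uses the age/salary dataset and the KNN classifier from earlier lessons.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the age/salary purchase data
X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
print('completed feature scaling')

mlflow.set_experiment('mlflow_demo')

with mlflow.start_run():
    classifier = KNeighborsClassifier(n_neighbors=5)
    classifier.fit(X_train, y_train)
    print('model trained')
    accuracy = classifier.score(X_test, y_test)

    mlflow.log_param('test_size', 0.2)                 # any training parameter
    mlflow.log_metric('accuracy', accuracy)            # the tracked metric
    mlflow.sklearn.log_model(classifier, 'model')      # the model artifact, pickled by MLflow
```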
Once you see the UI, you'll get a better sense of it; for now, just make these changes: import the libraries, give the experiment a name, and track the accuracy metric and the model. We'll open a command prompt from the directory where the Python file lives and run it with python ml_pipeline_mlflow.py. While it is starting, let's open another command prompt. Our program finished; it said the MLflow demo experiment does not exist, so it is creating a new experiment, which we will see shortly in the UI. It printed the progress, and we got an accuracy of 0.805. To load the MLflow UI, simply run mlflow ui; it says it started at port 5000. Open a browser and go to 127.0.0.1:5000, that is, localhost port 5000. If this port is not available, you can run it on a different port. Now the UI has loaded and we see a new experiment: there is a Default experiment, but we explicitly created the MLflow demo experiment, and we can see a model run listed here. Let's click it and look: accuracy is the metric we said we would track, and it is shown here. You also see the model, and under that the MLmodel YAML and the pickle files; MLflow creates a pickle file for your model, and that is ready to be exported. You can find the directory path here and then go and look for it under your project folder; this pickle file was created automatically by MLflow. Now let's make a change to the code: instead of using 80% of the data for training, we'll use 70% for training and 30% for testing. Let's make this simple change and run it again to see what happens. We go to the terminal and run the Python file again; we got the same accuracy, because I had not saved the file. Let me run it again: this time we got a different accuracy, because the train/test ratio was different. Let's look at the UI now. Go back to the home page and click on the MLflow demo experiment; you can now see three runs. The second run used the same 80/20 split, so it shows the same accuracy; you can click on it and check, and the accuracy is shown as a metric along with the corresponding model. In the third run we have a different accuracy, and you can also see its model artifacts, the pickle files, in a different directory. You can also log the different parameters you used for training; let's log a few of them, such as the dataset shape and the training percentage of the train/test split. Whatever parameters you use for training, you can log them for tracking. Let's run it. The run finished; now let's see how MLflow tracks these parameters. Select the latest run, and you can see all the parameters we used for the training. This will also be very useful later when you are comparing two different runs. So this is how you can get started with MLflow to track your model accuracy; you can change different parameters and then finally decide which model to pick. You can select different models and compare their accuracy and other parameters, and there are different plots available along with a lot of other features. As I said earlier, MLflow is an open source platform for the machine learning lifecycle; you can go to the official website and get plenty of information, and it has built-in integrations for many libraries.
We showed a simple demo for scikit-learn, but MLflow also supports TensorFlow, PyTorch, Keras and many other libraries. So, as a data scientist, if you want to track your experiments and then pick the right model, this is how you can leverage the MLflow pipeline. It's very easy to get started: you can install MLflow with a pip install command and then start tracking your experiments with the tool. 48. Why track ML experiments?: Maintaining versions of data and experiments is a key aspect of machine learning operations. Many organizations keep track of machine learning experiments and define a process for deciding which model is the best model. There might also be compliance or legal requirements to maintain versions of data and predictions. A data scientist should be able to track the algorithm, the train/test ratio and the different hyperparameters, such as the number of epochs and the learning rate. With MLflow, and various other tools in the market, you can track all of your experiments. Also, tracking experiment execution time and the resources required helps you derive the cost of machine learning execution. 49. Running MLflow on Colab: You can also run your MLflow experiments in the Google Colab environment. Let's see how to do that. We'll go to Google Colab and create a new notebook. First, we need to install MLflow with pip install. We had earlier seen how to create a REST API on Google Colab using ngrok, and using ngrok we can also expose a public URL for the MLflow UI. We'll install the pyngrok library; pyngrok is a Python wrapper for ngrok. Then we specify the port for the MLflow UI and run it in the background. After that, we import ngrok, invoke ngrok.kill to close any existing open tunnel, and then open an HTTP tunnel at port 5000 using this command. At this point we have started the MLflow UI, and we can get its address by printing the public URL of the ngrok tunnel. Let's go to this URL now. As you can see, we have managed to launch MLflow from the Google Colab environment. Now let's copy the entire code from Spyder; you can paste it into multiple cells or a single cell, and we'll paste it into a single cell. Before we run it, we need to ensure the CSV file is accessible: we can either upload the CSV file to the Colab environment or access it from the GitHub repository. Let me access it from GitHub. Now let's run this cell. It will create the experiment and log the parameters, and we got a message saying the MLflow demo experiment doesn't exist, so it is creating it. Let's refresh the page now: we can see that the new experiment has been created, and this is the run we just executed. So this is how we can run MLflow from the Colab environment. We can click on the files icon and see that MLflow generated the pickle file for the model under this directory; this is the file created in the Colab environment, which can be exported to another environment. 50. Tracking PyTorch experiments with MLflow: Let's now see how to track PyTorch machine learning experiments with MLflow. Let's copy the PyTorch neural network code for customer behaviour prediction into the Colab cell. We need to import two additional libraries: mlflow and mlflow.pytorch. We give the experiment a name, the MLflow PyTorch demo, and we can now track different parameters and metrics; a sketch of what the tracked block can look like follows.
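A sketch of what that tracked block can look like, using a tiny stand-in network so it runs on its own; the real notebook uses the customer behaviour model, and the experiment name here is illustrative.

```python
import mlflow
import mlflow.pytorch
import torch
import torch.nn as nn

# Tiny stand-in for the customer behaviour network and data
model = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 2))
X = torch.rand(100, 2)
y = torch.randint(0, 2, (100,))

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
epochs = 100

mlflow.set_experiment('mlflow_pytorch_demo')
mlflow.end_run()                      # terminate any run left open in the notebook

with mlflow.start_run():
    mlflow.log_param('epochs', epochs)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
    mlflow.log_metric('loss', loss.item())
    mlflow.pytorch.log_model(model, 'model')    # logs the model artifact
```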
So let's create a code block with mlflow.start_run and then track the number of epochs and the model. Before we run it, we need to ensure any existing MLflow run is terminated, so let's invoke mlflow.end_run; then we can call mlflow.start_run and start the run. Let's run this cell now. We're running it in the same notebook where we ran the scikit-learn experiment earlier, but you can also create a separate notebook, install pyngrok and run it there. Let's go to the MLflow UI now: on the home page we can see the new experiment, the MLflow PyTorch demo, and the run that was just executed. Let's open the most recent one: we can see the parameter and the model artifact. Now let's change the number of epochs to 50, update the logged param value to 50 as well, and run it again. Now we see a new run with the param value 50. We can also log different metrics; let's track the loss metric. Using the log_metric method we can track any metric we like. We run it again, go to MLflow, find the new run, and we can see the loss metric being tracked. This is how you can track different metrics and parameters for your PyTorch model experiments. On the home page you can select multiple runs and compare them, or download the experiment parameters and metrics to a CSV file. 51. Deploying Models with MLflow: Deploying models with MLflow is really easy; let's see how we can deploy a model locally. Back in the MLflow UI I can see several runs, and I'll pick one of them. Each run gets a unique run ID. Go to the command prompt and run the mlflow models serve command, passing the model URI as runs:/ followed by the run ID shown here; you can pick any of your runs, take its run ID and deploy it. You also need to specify a port number; I've given 1244. So, using MLflow model serving, we can deploy this model, and behind the scenes it uses Flask to create a REST API. It is now running at localhost port 1244, and we can send requests from a REST client the way we have done earlier. A few things to note: the URL is the host and port followed by /invocations, you have to specify application/json in the Content-Type header, and this time you have to send the data in a pandas DataFrame JSON format. We got a response of 1, which means this customer, aged 42 with the given salary, is going to buy. This is how easily you can deploy an MLflow model locally, create an API and start serving your model. With MLflow you can also deploy your model easily to various cloud environments; check out the documentation for more information. Thank you for enrolling for this course.