Data Science: Hands-On Image Classification for Autonomous Vehicle using Deep Learning | Avi Jha | Skillshare


Data Science: Hands-On Image Classification for Autonomous Vehicle using Deep Learning

Avi Jha, We teach what they don't teach you

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

6 Lessons (47m)
    • 1. Project overview

      3:42
    • 2. Module 1: Introduction To Project Platform

      5:59
    • 3. Module 2: Cloning Traffic Sign Dataset

      8:50
    • 4. Module 3: Image Pre-Processing

      7:25
    • 5. Module 4: Build, Compile and Train a Deep Learning Model

      10:38
    • 6. Module 5: Testing and Analyzing The Performance of the Model

      10:44

27

Students

--

Projects

About This Class

This is a Hands-on Project. You learn by Practice.

Data Science is the hottest job of the 21st century. You need good programming skills, analytical skills, and years of hard work to be a pro in data science. We have designed this course to be precise, efficient, and to the point. We understand our students are professionals with limited time and a limited attention span. Taking a months-long course and forgetting everything along the way is not an efficient way to learn. We learn by practice.

A precise, to-the-point, and efficient course made for those who want to learn the most important parts of Data Science: importing datasets, building models from those datasets, and training and testing the models. Everything else revolves around this.

For the sake of this project we will use traffic signs for autonomous vehicles to learn about Deep Learning and Data Science, but the same process and techniques can be repeated for other deep learning projects. Some projects you can build following a similar process are:

  • Self Driving Cars (This project)

  • Skin Cancer Detection

  • Currency Detection

  • Human Facial Recognition

Learn the most important aspects of Data Science:

  • Importing  and working with Datasets

  • Building a Deep Convolutional Network Model using Keras

  • Compile, train, test and analyze the model

We will build a Traffic Sign Classifier using Keras. In this hands-on project, we will complete the following tasks:

  • Task 1: Project Overview

  • Task 2: Introduction to Google Colab and Importing Libraries

  • Task 3: Importing and Exploring Dataset

  • Task 4: Image Pre-Processing

    •      Converting images to grayscale

    •      Applying the histogram equalization technique

    •      Normalization

  • Task 5: Build a deep convolutional network model using Keras

  • Task 6: Compile and train the model

  • Task 7: Testing the model with the test dataset and assessing the performance of the trained Convolutional Neural Network model

  • Task 8: Saving the trained model

We’ll be carrying out the entire project in the Google Colab environment, so no local installation of libraries or dependencies is required.

What you’ll learn

  • Introduction to Google Colab and importing the necessary libraries
  • Cloning, exploring, and visualizing datasets
  • Image pre-processing: grayscale conversion, histogram equalization, and image normalization
  • Building Convolutional Neural Networks with Keras
  • Compiling and training a Deep Learning Model that can distinguish between 43 different traffic signs
  • Testing the model with the test dataset and assessing the performance of the trained Convolutional Neural Network model
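The last two bullets can be made concrete with a small NumPy sketch. This is not the course code (the class itself uses Keras); it only illustrates how a 43-way softmax output turns into a predicted class and an accuracy figure, using made-up logits and labels.

```python
import numpy as np

def softmax(logits):
    # Subtract the row max for numerical stability, then normalize.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 43))     # 5 images, 43 traffic-sign classes
probs = softmax(logits)               # each row now sums to 1
preds = probs.argmax(axis=1)          # predicted class id per image

# Pretend we know the true labels, with exactly one mismatch.
y_true = preds.copy()
y_true[0] = (preds[0] + 1) % 43
accuracy = (preds == y_true).mean()
print(accuracy)  # 0.8: 4 of 5 predictions match
```

In the course, Keras performs this step internally: the `softmax` output layer produces the class scores, and the `accuracy` metric compares the argmax against the true labels.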

Are there any course requirements or prerequisites?

  • Basic Python programming
  • Basics of neural networks

Who this course is for:

  • Students interested in Data Science

You can get the source code of the entire project here:

https://colab.research.google.com/drive/1mH4teD8NiXuK_c7cWShLOoe_nncbKhof?usp=sharing

Meet Your Teacher


Avi Jha

We teach what they don't teach you

Teacher

Hello, I'm Avi. We are a team of professionals, here to teach you what they don't teach you in school. We are unconventional in our ways, but we keep our promises and over-deliver.


Class Ratings

Expectations Met?
  • Exceeded!
    0%
  • Yes
    0%
  • Somewhat
    0%
  • Not really
    0%


Transcripts

1. Project overview: Hello everyone, and welcome to this project on traffic sign classification using deep learning. In this project we'll get our hands dirty with deep learning by solving a real-world problem. The main objective is to recognize traffic signs from images, which is essential for self-driving cars and other autonomous vehicles operating on roads. A traffic sign may be a stop sign, a danger sign, a sign that denotes 60 kilometers per hour or 20 kilometers per hour, and so on. While building a self-driving car, it is necessary to make sure it identifies traffic signs with a high degree of accuracy; if it fails to do so, the results can be dangerous. So here we're going to build an image classifier using a convolutional neural network that can classify traffic sign images into their respective categories. Before diving into this project, I assume you have a basic understanding of the Python programming language and artificial neural networks, since this project is made for the intermediate level. So let me present the project overview. We'll divide our entire project into smaller chunks, or let's say into smaller tasks. First we'll familiarize ourselves with Google Colab, since we're going to carry out our entire project on this platform, and then we'll import the necessary libraries that we'll be using in this project. Then, in the second task, we'll import our traffic sign dataset and explore it to see the class labels of the images and their corresponding traffic sign names, and we'll also visualize what the images in the dataset actually look like. Next, in the third step, we'll perform image pre-processing. We will convert our RGB images to grayscale to reduce the computation required, and then we'll apply a histogram equalization technique.
That technique standardizes the lighting across all our images; because some images are much brighter and others are very dim, we need to give them a similar lighting effect. At the end, we perform image normalization, where we normalize the pixel values between zero and one. Pre-processing images before feeding them into the model gives more accurate results, as it helps in extracting the complex features of the images. Then, in the fourth task, we will build our deep convolutional neural network from scratch. To classify the images into their respective categories we'll build a CNN model, that is, a convolutional neural network, and train it so that it can decode the traffic sign from natural images. We use a CNN because CNNs are best suited for image classification purposes. After building the model architecture, we'll compile and train our model, and lastly we'll test our model with the testing dataset; in short, we'll use the model to make predictions on new images. Then, finally, we save our model for further use. So yes, these are the tasks that we're going to perform in this project, and that's all for this video. Thank you. 2. Module 1: Introduction To Project Platform: Hello everyone. In this video I'll introduce you to Google Colab, and then we'll import some necessary libraries required in this project. Google Colab is similar to a Jupyter notebook, and it allows you to write your entire code in the cloud. Like Jupyter, it is free to use, and it supports a free GPU; training your deep learning model with Colab is much faster than on a local machine, since you can run everything on a cloud GPU. It becomes especially helpful if you don't have a fast computer. Another plus point is that it saves your files in Google Drive, so no matter where you go, you can always access them. Personally, I find it very useful,
and I myself use Google Colab for many of my data science projects, because the main data science packages, including NumPy, pandas, and TensorFlow, come ready to go in the Colab environment, so we just need to import them without worrying about installations. So now what I want you to do is open a new tab and visit colab.research.google.com. In the window that appears on the screen, click on New Notebook, and it will take you to the Colab environment, where you can write code as you wish. Before getting to the code, make sure that you connect to a hosted runtime, since it allows you to access the GPU offered by Google. You can then rename the file with any name you like, and just make sure you change the runtime type to GPU and click Save. If you get a warning that you have too many active sessions and need to terminate an existing one to continue, that's because I already have a project running over here, so I'll just cancel it. Here I have already imported the necessary libraries that are required in this project. In the first line, we import Keras. Keras is an open-source neural network library written in Python that runs on top of TensorFlow. Then, from keras.models, we import our Sequential model. It is used to build the neural network; it is a prebuilt model class where you can simply add the layers. After that, we import our Conv2D and our pooling layers from keras.layers.convolutional. Conv2D is used to extract the features from the image; convolution helps with different operations like blurring, sharpening, edge detection, and others.
These operations help the machine learn the specific characteristics of an image, whereas the pooling layer, this MaxPooling2D, helps to reduce the size of the data; or let's say it reduces the image dimensionality without losing the important features or patterns of the data. Then, from keras.layers, we import our Dense, Dropout, and Flatten layers. The Dense layer is also known as a fully connected layer: it is the classic fully connected neural network layer, where each input node is connected to each output node. Dense layers are important because they are used to predict the labels. The Dropout layer, on the other hand, is used to reduce overfitting: it drops out some of the neurons during the learning process. And the Flatten layer transforms a two-dimensional matrix of features into a vector that can be fed into a fully connected neural network classifier. Then, to compile our model, we use an optimizer called Adam. Next we import matplotlib.pyplot as plt, and then we import seaborn as sns; these two are data visualization tools. Then we import cv2, which is OpenCV for Python, a library designed to solve computer vision problems. Then we import pickle, which we'll use to load our dataset, and we import pandas as pd; pandas is used for data manipulation and analysis, and here we'll be using pandas to read a CSV file. Then we import numpy as np, and lastly we import random, to generate random numbers. Now let's go ahead and run this cell by clicking Shift+Enter. The cell ran successfully. So in this video we imported some of the necessary libraries that we require in this project, and in the next video we'll import and explore the dataset. 3.
Module 2: Cloning Traffic Sign Dataset: In this video we'll import our dataset. The dataset that we're going to use here is the German traffic sign dataset, which contains images of traffic signs with their associated class labels. So first, let's clone the repository containing the dataset into Colab. This will give us access to the dataset inside the Colab environment. Just run this cell by clicking Shift+Enter, and you can see it's cloning the repository that contains our traffic sign dataset into the Colab environment. Let's wait a few seconds and it will be done. Yes, it's done. You can also view this cloned repository, the downloaded dataset, from the file browser here: german-traffic-signs. Now let's list its contents using ls, and you can see it contains a spreadsheet, signnames.csv, and the pickle files containing the train, test, and validation data. signnames.csv has all the class labels of the images and their traffic sign names, whereas train.p contains all the training image pixel intensities along with the labels; we will use it to train our model. Then we have valid.p, which contains all the validation image pixel intensities along with their labels. It is used for cross-validation: the validation data evaluates the performance of the model during training. And then we have test.p, which contains all the testing image pixel intensities along with the labels. The testing data has never been seen by the model during training, so we make use of it to test the trained model. Now it's time to load the data. We will use pandas to load signnames.csv, and we'll use pickle to load the train, validation, and test pickle files. Let's load signnames.csv using pandas: just click Shift+Enter to run the cell.
Now let's view what it looks like. Here you can see the class labels and the sign names: the traffic sign names and their corresponding class labels. If you scroll down, you can see that the number of unique classes, or labels, in the dataset is 43, ranging from 0 to 42. Now that we have loaded signnames.csv, let's use pickle to load our train, validation, and test pickle files. Here we have made use of pickle to load the train, valid, and test pickle files. What this code does is open each of the files, train, valid, and test, in binary format for reading, and then assign the data to train, valid, and test variables; we've just parsed the data to make it more usable. After extraction, each of train, valid, and test is then split into features and labels, since these pickle files are dictionaries with key-value pairs, where 'features' contains the raw pixel data of the traffic sign images and 'labels' contains the label, or class ID, of each traffic sign. It simply loads the inputs and the outputs. So let's go ahead and run this cell, and run this one. Now we'll analyze the data to see what it looks like using shape; shape gives us the number of images, their dimensions in pixels, and the depth. Let's print the shape of the train, test, and validation data: we write print(X_train.shape), copy it, and do the same for X_validation and X_test, then run the cell. If you look at X_train.shape, it prints the shape of the training data, which means there are 34,799 images of size 32 by 32 pixels, and the last 3 means the data contains colored, RGB, images. The same goes for X_validation and X_test.
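The pickle layout described here can be sketched as follows. A tiny stand-in dataset is built in memory (the real lesson opens german-traffic-signs/train.p and the other files in 'rb' mode), but the dictionary layout, with 'features' and 'labels' keys, matches what the transcript describes.

```python
import io
import pickle
import numpy as np

# Stand-in for train.p: a dict of raw pixel data and class ids,
# mirroring the key-value layout described in the lesson.
fake_train = {
    "features": np.zeros((10, 32, 32, 3), dtype=np.uint8),  # 10 RGB 32x32 images
    "labels": np.arange(10) % 43,                           # class ids in 0..42
}

buf = io.BytesIO()
pickle.dump(fake_train, buf)
buf.seek(0)

# In the lesson this would be: with open('german-traffic-signs/train.p', 'rb') as f:
train = pickle.load(buf)
X_train, y_train = train["features"], train["labels"]
print(X_train.shape)  # (10, 32, 32, 3): count, height, width, channels
```

The shape check at the end is exactly what the transcript does with X_train.shape: count of images, spatial size, and a channel depth of 3 for color.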
Now let's do some image visualization, and for that we'll select a random image from our training dataset to see exactly what our data looks like. Let's select a random index using numpy: np.random.randint, where randint stands for random integer, and we select a random number between one and the length of the training data, len(X_train). Then we're going to use matplotlib for our image visualization: we use plt.imshow and pass in our training dataset at that index. Then let's plot its corresponding label, which is y_train at index i; let's print it: print('Image label: {}'.format(y_train[i])). Sorry, I made a mistake there; fixed. Now let's run the cell by clicking Shift+Enter. The cell runs, and we can see the image with its label. And if you want another one, run it again, and here we have another random image from the training set, and you can see its class label as well. So yeah, that's it for this video, and in the next video we'll perform our image pre-processing using OpenCV. 4. Module 3: Image Pre-Processing: Welcome back. In this video we will perform image pre-processing. Pre-processing images before feeding them into the model gives more accurate results, as it helps in extracting the complex features of the images. In this video we will perform three different tasks: the first one is converting to grayscale, the second task is histogram equalization, and the third task is normalization. First, we'll convert the images to grayscale to reduce computation, since the colors of traffic signs are not really important; the important details are the edges and the shapes of the images. Next, we will apply the histogram equalization technique to standardize the lighting in all our images.
That's because some images are much brighter and others are very dim, so we need to give them a similar lighting effect. And at the end, we normalize the pixel values between zero and one by dividing them by 255. So, before proceeding further, let's first shuffle our data, and for this we use scikit-learn: from sklearn.utils we import shuffle, and then we pass in our input and our output, which are X_train and y_train. This line returns our shuffled data, that is, our X_train and y_train. We shuffle so that the network doesn't learn the order of the images: it shouldn't see, say, hundreds of images of a particular class back to back; we want to avoid it learning the order of the images. So let's press Shift+Enter and run the cell, and it's done: the data is shuffled. Now let's pre-process the data using OpenCV. OpenCV has built-in functions like cvtColor and equalizeHist for these tasks. First, let's convert our images to grayscale images, to reduce computation, using the cvtColor function. So let's do that. And then let's apply the histogram equalization technique, to standardize the lighting in all our images, using the equalizeHist function. And then, at last, we normalize the pixel values between zero and one by dividing them by 255, and return the image. Let's move ahead and run this cell by clicking Shift+Enter. Next, we will apply the pre-processing to all our images. Here the map function iterates over the entire array and performs the given operation, pre-processing all of the images in the training set, the test set, and the validation set. Let's go ahead and run that. Next, we add the depth of 1 that is required for our convolutional network. Here we have made use of reshape, where the first value gives the number of images and the next values give the size of the images.
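The three pre-processing steps just described rely on OpenCV (cv2.cvtColor and cv2.equalizeHist). As a rough illustration, here is a NumPy-only sketch of the same pipeline, grayscale conversion, histogram equalization, normalization, and the depth-1 reshape; the exact grayscale weights and equalization details differ slightly from OpenCV's implementation.

```python
import numpy as np

def grayscale(img):
    # Luminosity-weighted RGB -> gray (cv2.cvtColor uses similar weights).
    return np.dot(img[..., :3], [0.299, 0.587, 0.114]).astype(np.uint8)

def equalize(img):
    # Histogram equalization: remap intensities so the CDF is roughly uniform.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    if cdf[-1] == cdf_min:          # constant image: nothing to equalize
        return img
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

def preprocess(img):
    img = grayscale(img)
    img = equalize(img)
    return img / 255.0              # normalize pixel values to [0, 1]

rng = np.random.default_rng(1)
X = rng.integers(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)  # toy batch
Xp = np.array(list(map(preprocess, X)))      # map over all images, as in the lesson
Xp = Xp.reshape(Xp.shape[0], 32, 32, 1)      # add the depth-1 channel for the CNN
print(Xp.shape)  # (4, 32, 32, 1)
```

The final reshape is the "depth of 1" step the transcript mentions: after grayscale conversion the channel axis is gone, and Keras convolution layers expect it back explicitly.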
Those give the dimensions, and the last value gives the number of channels. Now let's go ahead and run the cell. After reshaping the arrays, it's time to feed them to the model for training, but before that, let's print X_train_preprocessed.shape, and do the same for the test and validation sets: just copy this, paste it, and correct it. Press Shift+Enter to run the cell. Here, if you see X_train_preprocessed.shape, it prints the shape of the training data, which means there are 34,799 images of size 32 by 32 pixels; the last 1 means that the data contains grayscale images, and the same goes for the test and validation sets. Now let's select a random image from our latest X_train_preprocessed data, that is, our pre-processed data, and the same random image from our original X_train dataset, and see the difference. For this, let's first generate a random number as we did before. Then let's use plt.imshow with cmap set to 'gray', and at last let's load the pre-processed X_train dataset at index i. I made a mistake there; fixed. Now run this cell. Here you can see the difference: the first figure is the pre-processed image, whereas the second one is the original random image that we selected. So in this video we pre-processed our data, and in the next video we'll build our model. 5. Module 4: Build, Compile and Train a Deep Learning Model: In this video we will build a deep convolutional neural network model from scratch. We're going to use the Keras API to build our model. I have given a name to my model, but you can name it anything you like. We are going to build our model in a Sequential fashion. Now we'll build our first layer, which is our convolution layer. So let's add the convolution:
model.add(Conv2D(...)). A convolutional network is a network where you can extract features from the images. Here we will extract critical features from the images, and to search for the features we set the size of the searching matrix, the filter. I'll be taking it as five by five, so it searches a five by five pixel window and slides it over all the pixels. Then the activation will be 'relu', and the input shape, since we have grayscale images, will be 32 by 32 by 1. So the 32 denotes the number of filters, the five by five denotes the size of the filters, activation is relu, and the input shape is 32 by 32 by 1: 32 by 32 is the dimension of the image, and the 1 means the images are grayscale. Next we'll add the pooling layer: model.add(MaxPooling2D()), where the pool size that it will take here will be 2 by 2. So here we have added the pooling layer. The aim is to reduce the data, because there is a lot of data that we do not need; we just need the features we want. A pool size of 2 by 2 reduces the size while maintaining the same features you already have. Now let's do dropout: model.add(Dropout(0.25)). Here I'll drop about 25% of the neurons, to avoid overfitting of the data. So basically, in all of this, what we have done is: we have convolved the image, got its features, then pooled it and reduced its size. Now we'll do the same thing again: copy this and paste it here. Here we have added another convolution layer, but the number of filters increases to 64, and we should remove this input_shape argument. So you can see we have increased the number of feature-extraction filters from 32 to 64, and we'll do the same for the pooling layer as well: copy, paste, and the pool size is 2 by 2. Now that the size has been squeezed with the features saved, let's flatten it.
Flatten basically takes the image that you have got from two dimensions down to one dimension: model.add(Flatten()). So now let's add our dense layer, our fully connected layer: Dense, where the number of nodes, or the number of neurons, I'll keep as 256, and the activation again will be 'relu'. Next, again, let's add the dropout layer: copy and paste, but this time I'll be dropping out 50% of the neurons. Now, at last, we need to define the output layer of our network. Since here we are classifying 43 different classes, I want my output layer to consist of 43 neurons. So let's do this again, but this time it should be Dense(43), since we have 43 different classes, and the activation should be 'softmax'. Now, at last, let's see the summary of the model: summary() gives the summary of the model; it shows you what the neural network is, layer by layer, and what we have built so far. So now let's go ahead and run this cell by clicking Shift+Enter. Here we have the summary of the model. You can see we have our first convolution layer, followed by our pooling layer; then we have our dropout layer; then again we have added another convolution layer with its pooling layer; then we have flattened it, and we have our fully connected dense layer; then again we have a dropout layer; and at last we have our output layer. You can also see here how many trainable parameters there are. So far, we have told Keras: these are the layers of the neural network. So now we can compile them all together and make it into something where you can just give an input and get the expected output. So let's go ahead and compile the model: model.compile. The optimizer that I'll be using is Adam, with its learning rate, and then the loss will be categorical cross-entropy,
and lastly, our metrics will be 'accuracy'. Ah, I made a mistake; it should be this. So let's go ahead and run the cell. You need to specify what type of optimizer to use; here I've used the Adam optimizer. Then you specify the loss that we'll be dealing with: the loss in this case is categorical cross-entropy, since we have more than two classes; actually, we have a total of 43 classes, so we have used categorical cross-entropy over here. But if we had just two classes, then we would have used binary cross-entropy. And lastly, I need to specify the metrics, which is accuracy. So now we are ready to fit our training data to the model. Let's do that: history = model.fit, then we give our features data, and then we give our class labels. Then we specify our batch size to be, let's say, 500; batch size means how many images will be fed at once. Then I specify the epochs, let's say 20, and then we have verbose equal to 1; verbose simply means how many details to show while training the model. And then I'll specify what my validation data will look like, to perform our cross-validation: validation_data is our pre-processed X_validation and its labels. Done. Here I forgot to give a comma, and let's increase this to 50 epochs. Now let's go ahead and run the cell, and you can see the model training: it's training on 34,799 samples and validating on 4,410 samples. Here you can see the accuracy: it's around 0.06, which is very small, and hopefully as the number of epochs increases it will go up, and the loss, which is our error, will go down. It will run for up to 50 epochs, so until then I'll pause my video, and I'll be back after it finishes. So now you can see our training is completed. We got the training accuracy to be around 95%, and the validation accuracy is around 92%, and you can see the validation loss:
it's 0.2471, whereas the loss during the training is 0.1685. So finally we trained our model, and we got our accuracy to be about 95%. You can also increase the number of epochs and retrain your model if you want. So yeah, that's it for this video, and see you in the next one. 6. Module 5: Testing and Analyzing The Performance of the Model: In this video we're going to evaluate the performance of the model. So let's go ahead and do that: score equals our trained model, which is model, dot evaluate, and then we are going to feed in our test data and its labels. Now let's print the score, the test accuracy. Let's go ahead and run the cell. Simply put, here we see what our model does on the test data; the testing data is data that the model has never seen during the training process. Here you can see we have reached 91% accuracy on the test data, so finally we have a model that is able to generalize, and not memorize. Now let's go ahead and visualize the history: history.history. Run the cell, and you can see that in the history we have our validation loss and validation accuracy, and the training loss and accuracy. This history object has an attribute called history, which is a dictionary containing the values of the loss and the metrics during training, so we can use the data collected in the history object to create our graphs. So now let's plot and visualize how our network performed. We're going to plot a graph of the training loss versus the validation loss over the number of epochs. Let's go ahead and do that: here we'll be using plt.plot on history.history, where we use 'loss', and we'll do the same for 'val_loss'. Then plt.legend with 'training' and 'validation', and then we'll set the title as 'Training and validation losses',
and then we'll label the x-axis as epochs. Let's go ahead and run the cell. Here we have plotted the graph between the training loss and the validation loss. You can see our loss went down, and we have fitted the data well, keeping both the training and validation losses at a minimum. Now we'll plot the graph between the accuracy and the validation accuracy. We'll perform the same thing as we did before: copy this cell and paste it, but instead of 'loss' it should be 'accuracy', and instead of 'val_loss' it should be 'val_accuracy'. And the title should say accuracy too. Now let's go ahead and run it, and you can see that over the number of epochs we have improved our accuracy; at last, we have reached an accuracy above 90%, which is very good. So now let's go ahead and generate our predictions. For this we use predict_classes, and then we feed it our X_test data, and then we'll set our y_test as the true labels. Let's go ahead and run that. And now we'll try to display the performance of our model in a matrix, which is called the confusion matrix. On the row side we show our predictions, and on the column side we show our true classes. For this we'll use scikit-learn to do the plotting. First, let's import our confusion matrix from sklearn.metrics: from sklearn.metrics import confusion_matrix. Here we use our confusion_matrix, and then we pass our labels, that is, our y_test true labels, and then the predictions. The predictions are what our model is predicting, and the true labels are the ground-truth labels that we already know. So now let's use plt.figure and set our figure size to be, say, 20 by 20. Then we'll use seaborn, our sns, to plot our heatmap, and then we'll pass in our matrix. So let's go ahead and run it.
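What the heatmap will display can be sketched without scikit-learn or seaborn. This toy version uses scikit-learn's convention (rows are true labels, columns are predictions, the transpose of the row/column layout described above) and made-up labels for three classes instead of 43.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # cm[t, p] counts samples whose true class is t and predicted class is p
    # (scikit-learn's convention: rows = true labels, columns = predictions).
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 1, 2, 1, 1, 0])   # one class-2 sample mislabelled as 1
cm = confusion_matrix(y_true, y_pred, n_classes=3)
print(cm)
# Diagonal entries are correct predictions; off-diagonal entries are mistakes.
print(np.trace(cm) / cm.sum())  # overall accuracy: 5/6
```

This is why the heatmap in the lesson is read along the diagonal: a bright diagonal and dark off-diagonal cells mean the network has done well.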
It'll take a while, and at the end we'll be able to visualize our confusion matrix. It will be a big matrix, since we have 43 different classes. So here you can see we have plotted our confusion matrix, and it shows the network has done well; any sample over here, off the diagonal, means the network has made a mistake, but overall our network has done well. So now let's look at our predictions and our true labels along with the images. Instead of visualizing one image at a time as we did before, let's create a 6 by 6 grid that contains 36 images, along with their predictions as well as their true labels. Let's define the length and the width of the grid, which should both be six. Now let's use plt.subplots, and we're going to pass here our length and width of the grid, and the figure size; let's keep it as 12 by 12. This line returns our figure and axes. Next, we'll flatten our matrix into an array of 36 subplot axes with ravel: axes = axes.ravel(), so it's going to flatten our matrix. Now let's plot our true labels along with the images, and let's print our predictions as well. For this we will use a for loop: for i in range(0, 36). Then axes[i].imshow, where we feed in our test data at that index. Then let's set the title, where we'll print the prediction value and the true value: that's the prediction value, and y_test is the true label, the class label that we already have. And now let's turn the axis of axes[i] off. Now let's use plt.subplots_adjust, where the spacing, let's give it as 1. Let's go ahead and run it, and you can see we have a total of 36 images with their predicted values as well as true values, and you can see the predicted value and the true value are both the same, so our model is doing a great job. But scrolling down,
you can also see images where the predicted value and the true value are not the same. So although we have these unmatched values, you can see we have a lot of images where the predicted and true values match each other. Now let's go ahead and save our model for further use: model.save, and here you can name it anything you like, so let's give it as 'my_model.h5'. It's going to create an HDF5 file, which is our my_model.h5. Now let's go ahead and run it. Our model has been saved, and you can also view it from the file browser here: my_model.h5. You can easily use this same model later. So now you have a trained model. In this project, we have successfully classified traffic signs with very good accuracy, and also visualized how our accuracy and loss change over time, which is pretty good for a simple CNN model. So now we have come to the end, and I hope you liked this project. Thank you.
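As a closing cross-check, the trainable-parameter count reported by model.summary() in Module 4 can be reproduced by hand. This sketch assumes Keras defaults for the layers dictated in the lesson ('valid' padding, stride 1): Conv2D(32, 5x5), MaxPool 2x2, Conv2D(64, 5x5), MaxPool 2x2, Flatten, Dense(256), Dense(43), on a 32x32x1 grayscale input.

```python
def conv_params(kernel, in_ch, filters):
    # Each filter has kernel*kernel*in_ch weights plus one bias.
    return (kernel * kernel * in_ch + 1) * filters

def dense_params(n_in, n_out):
    # Every input connects to every output, plus one bias per output.
    return (n_in + 1) * n_out

# 32x32x1 input; each 5x5 'valid' conv shrinks the side by 4,
# each 2x2 max-pool halves it.
side = 32
side = (side - 4) // 2          # conv1 -> 28, pool -> 14
c1 = conv_params(5, 1, 32)      # 832
side = (side - 4) // 2          # conv2 -> 10, pool -> 5
c2 = conv_params(5, 32, 64)     # 51,264
flat = side * side * 64         # 5 * 5 * 64 = 1600
d1 = dense_params(flat, 256)    # 409,856
d2 = dense_params(256, 43)      # 11,051
total = c1 + c2 + d1 + d2
print(total)  # 473003 trainable parameters under these assumptions
```

Pooling and dropout layers contribute no parameters, which is why only the convolution and dense layers appear in the arithmetic; the same check can be run against whatever number your own model.summary() prints.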