Machine Learning use in Flutter The Complete 2021 Guide - Flutter ML | Hamza Asif | Skillshare


Machine Learning use in Flutter The Complete 2021 Guide - Flutter ML

Hamza Asif, Android Developer | Instructor


Lessons in This Class

72 Lessons (5h 15m)
    • 1. Course introduction

      3:06
    • 2. Firebase ML Kit

      2:08
    • 3. Flutter Image Labeling Section Introduction

      1:43
    • 4. Image Labeling Importing starter code

      2:57
    • 5. Image Labeling Starter code Explanation

      4:27
    • 6. Adding Firebase ML and Image Labeling related code

      13:08
    • 7. Testing Firebase Image Labeling Application

      2:18
    • 8. Camera Package Setup for Flutter

      3:47
    • 9. Flutter Camera Package Code

      7:47
    • 10. Importing Image Labeling live feed application starter code

      2:38
    • 11. Showing Live Camera Footage

      6:14
    • 12. Live Feed Image Labeling

      9:38
    • 13. Section introduction

      1:29
    • 14. Importing Starter code for Flutter Barcode Scanning

      3:07
    • 15. Flutter Barcode Scanning code

      10:46
    • 16. Flutter Barcode Scanning Application Testing

      1:09
    • 17. Flutter Testing Barcode scanning live feed application

      0:45
    • 18. Flutter Barcode Scanning Live Feed Application code

      7:43
    • 19. Flutter Text Recognition Section Introduction

      1:21
    • 20. Importing Starter code for Flutter Text Recognition

      2:56
    • 21. Writing Flutter Text Recognition Code

      9:07
    • 22. Flutter Barcode Scanning Section Introduction

      1:21
    • 23. Testing Flutter Text Recognition Application

      0:55
    • 24. Flutter Face Detection Section Introduction

      1:41
    • 25. Flutter Face Detection Application Flow

      1:21
    • 26. Flutter Face Detection code

      6:12
    • 27. Flutter drawing rectangles around detected faces

      5:27
    • 28. Section introduction

      1:46
    • 29. Importing Starter code for Flutter Image classification application

      2:52
    • 30. Starter code explanation for Flutter Image classification

      5:42
    • 31. Testing flutter image classification application

      1:41
    • 32. Importing Flutter live feed Image classification application starter code

      3:07
    • 33. Starter code explanation of Flutter Live feed Image classification application

      5:25
    • 34. Writing Flutter Image classification code

      10:43
    • 35. Flutter Testing Image classification live feed application

      0:47
    • 36. Flutter Object detection section introduction

      2:11
    • 37. Importing Application code object detection flutter application

      4:57
    • 38. Flutter Object detection code

      13:05
    • 39. Flutter Drawing Rectangles around detected objects

      4:17
    • 40. Importing the code for live feed object detection flutter app

      1:39
    • 41. Flutter Testing Object detection live feed application

      0:51
    • 42. Flutter Live feed object detection application code

      9:38
    • 43. Flutter Pose estimation section introduction

      2:21
    • 44. Importing Flutter Pose estimation Application code

      2:39
    • 45. Flutter Pose estimation code

      10:23
    • 46. Importing pose estimation live feed flutter application code

      2:39
    • 47. Using PoseNet model for Flutter Live feed pose estimation application

      7:38
    • 48. Image segmentation section

      1:44
    • 49. Importing Flutter Image Segmentation Application code

      2:39
    • 50. Flutter using DeepLab model for image segmentation

      8:40
    • 51. Section introduction

      1:56
    • 52. Machine Learning and Image classification

      2:23
    • 53. Flutter Getting the dataset for model training

      3:01
    • 54. Flutter Training the model

      6:19
    • 55. Flutter Dog Breed Classification Application

      18:14
    • 56. Flutter Live feed dog breed classification application

      2:34
    • 57. Testing live feed dog breed classification application

      0:42
    • 58. Transfer Learning introduction

      2:27
    • 59. Flutter Getting the dataset for model training

      3:01
    • 60. Flutter Training fruit recognition model

      8:30
    • 61. Your own dataset

      1:23
    • 62. Flutter Testing Live feed fruits recognition application

      0:34
    • 63. Regression section introduction

      3:41
    • 64. Tensorflow lite introduction

      3:00
    • 65. Importing the starter code

      2:30
    • 66. Starter code explanation

      1:53
    • 67. Analysing the model

      4:31
    • 68. Coding the application and testing

      10:17
    • 69. Model notebook code

      4:02
    • 70. Model input and output

      5:11
    • 71. Model code explanation

      2:32
    • 72. Application code explanation

      11:07

175 Students

About This Class

Starter codes link 

Welcome to the Machine Learning Use in Flutter: The Complete Guide course.

Covering all the fundamental concepts of using ML models inside Flutter applications, this is the most comprehensive Google Flutter ML course available online.

We built this course over months, perfecting the curriculum and covering everything that will help you learn to use machine learning models inside Flutter (Dart) applications. This course will teach you to build powerful ML-based applications in Google Flutter for Android and iOS devices.

Importantly, you don't need any background knowledge of machine learning or computer vision to use ML models inside Flutter (Dart) or to train them.

Starting from a very simple example, the course will teach you to use advanced ML models in your Flutter (Android & iOS) applications. After completing this course, you will be able to use both simple and advanced TFLite models, along with Firebase ML Kit, in your Flutter (Android & iOS) applications.

What will we cover in this course?

  1. Dealing with Images in Flutter

  2. Dealing with frames of live camera footage in Flutter

  3. Image classification with images and live camera footage in Flutter

  4. Object Detection with Images and Live Camera footage in Flutter

  5. Image Segmentation to make images transparent in Flutter

  6. Use of regression models in Flutter

  7. Image Labeling in Flutter to recognize different things

  8. Barcode Scanning in Flutter to scan barcodes and QR codes

  9. Pose Estimation in Flutter to detect human body joints

  10. Text Recognition in Flutter to recognize text in images

  11. Text Translation in Flutter to translate between different languages

  12. Face Detection in Flutter to detect faces, facial landmarks, and facial expressions

  13. Training image classification models for Flutter

  14. Retraining existing machine learning models with transfer learning for Flutter applications

  15. Using our custom models in Flutter

Course structure

We will start by learning about two important libraries (a minimal usage sketch follows this list):

  1. Image Picker: to choose images from the gallery or capture images using the camera

  2. Camera: to get live footage from the camera frame by frame
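
As a quick taste of the first of these, here is a minimal, hedged sketch of choosing an image with the image_picker package. The names _image and _pickFromGallery are illustrative rather than taken from the course starter code, and recent image_picker releases use pickImage/XFile where the recorded lectures use the older getImage/PickedFile API.

```dart
import 'dart:io';

import 'package:image_picker/image_picker.dart';

final ImagePicker _picker = ImagePicker();
File? _image; // holds the chosen or captured image

// Pick an image from the gallery (use ImageSource.camera to capture instead).
Future<void> _pickFromGallery() async {
  final XFile? picked = await _picker.pickImage(source: ImageSource.gallery);
  if (picked != null) {
    _image = File(picked.path); // wrap in setState(...) inside a widget
  }
}
```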

Then we will learn about Firebase ML Kit and the features it provides. We will explore the features of Firebase ML Kit and build two applications using each feature.

The applications we will build in that section are:

  • Image labeling Flutter (Android & iOS) application using images from the gallery and camera

  • Image labeling Flutter (Android & iOS) application using live footage from the camera

  • Barcode scanning Flutter (Android & iOS) application using images from the gallery and camera

  • Barcode scanning Flutter (Android & iOS) application using live footage from the camera

  • Text recognition Flutter (Android & iOS) application using images from the gallery and camera

  • Text recognition Flutter (Android & iOS) application using live footage from the camera

  • Face detection Flutter (Android & iOS) application using images from the gallery and camera

  • Face detection Flutter (Android & iOS) application using live footage from the camera

After learning the use of Firebase ML Kit inside Google Flutter (Android & iOS) applications, we will learn the use of popular pre-trained TensorFlow Lite models inside Google Flutter applications. We will explore some popular models and build the following Google Flutter applications in this section:

  • Image classification Flutter (Android & iOS) application using images from the gallery and camera

  • Image classification Flutter (Android & iOS) application using live footage from the camera

  • Object detection Flutter (Android & iOS) application using images from the gallery and camera

  • Object detection Flutter (Android & iOS) application using live footage from the camera

  • Human pose estimation Flutter (Android & iOS) application using images from the gallery and camera

  • Human pose estimation Flutter (Android & iOS) application using live footage from the camera

  • Image segmentation Flutter (Android & iOS) application using images from the gallery and camera

  • Image segmentation Flutter (Android & iOS) application using live footage from the camera

After that, we will learn to use regression models in Google Flutter and build a couple of applications (a rough sketch follows this list), including

  • A basic regression example for Android and iOS

  • A fuel efficiency predictor for vehicles for Android and iOS
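
The course itself builds these apps step by step; purely as a rough illustration of what running a small regression model in Flutter can look like, the sketch below assumes the tflite_flutter package and a hypothetical assets/fuel_efficiency.tflite model with a handful of numeric input features and a single output. The plugin choice, asset name, and tensor shapes are assumptions, not taken from the course.

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

// Run a (hypothetical) fuel-efficiency regression model on one feature vector.
Future<double> predictMpg(List<double> features) async {
  // Asset path conventions differ slightly between tflite_flutter versions.
  final interpreter = await Interpreter.fromAsset('fuel_efficiency.tflite');

  final input = [features];                                 // shape [1, numFeatures]
  final output = List.generate(1, (_) => List.filled(1, 0.0)); // shape [1, 1]

  interpreter.run(input, output);
  interpreter.close();
  return output[0][0]; // the predicted fuel efficiency value
}
```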

After learning to use pre-trained machine learning models through Firebase ML Kit and TensorFlow Lite inside Flutter (Dart), we will learn to train our own image classification models without any background knowledge of machine learning. So we will learn to

  • Gather and arrange the dataset for machine learning model training

  • Train machine learning models on some platforms with just a few clicks

So in that section, we will

  • Train a dog breed classification model

  • Build a Flutter (Android & iOS) application to recognize different breeds of dogs

  • Train a fruit recognition model using transfer learning

  • Build a Flutter (Android & iOS) application to recognize different fruits

So the course is mainly divided into three major sections:

  • Firebase ML Kit

  • Pre-trained TensorFlow Lite models

  • Training image classification models

In the first section, we will learn the use of Firebase ML Kit inside Flutter (Dart) applications for common use cases like

  • Image Labeling

  • Barcode Scanning

  • Text Recognition

  • Face Detection

We will explore these features one by one and build Flutter applications for each. For each feature of Firebase ML Kit, we will build two applications: in the first application we will use images taken from the gallery or camera, and in the second application we will use the live camera footage with the Firebase ML model. So apart from simple ML-based applications, you will also be able to build real-time face detection and image labeling applications in Google Flutter (Dart) using live camera footage. After completing this section you will have a complete grip on Google Firebase ML Kit, and you will also be able to use upcoming features of Firebase ML Kit in Google Flutter (Dart).

After covering Google Firebase ML Kit, in the second section of this course you will learn about using TensorFlow Lite models inside Google Flutter (Dart). TensorFlow Lite is a standard format for running ML models on mobile devices. So in this section, you will learn to use powerful pre-trained ML models inside Google Flutter (Dart) for building

  • Image Classification ( ImageNet V2 model )

  • Object Detection ( MobileNet model, Tiny Yolo model)

  • Pose Estimation ( PoseNet model )

  • Image Segmentation ( Deeplab model )

applications. Not only will you learn to use these models with images, but you will also learn to use them with frames of camera footage to build real-time applications. (A rough sketch of running a pre-trained TensorFlow Lite classifier follows below.)
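
As a rough sketch of how such a pre-trained model can be run on a picked image, the example below assumes the community tflite plugin and a MobileNet-style classification model bundled under assets/. The asset names and normalization values are illustrative only; the course walks through its own exact code in the lectures.

```dart
import 'package:tflite/tflite.dart';

// Load a bundled classification model once, e.g. in initState.
Future<void> loadModel() async {
  await Tflite.loadModel(
    model: 'assets/mobilenet.tflite', // illustrative asset name
    labels: 'assets/labels.txt',
  );
}

// Classify an image file picked from the gallery or camera.
Future<void> classify(String imagePath) async {
  final recognitions = await Tflite.runModelOnImage(
    path: imagePath,
    numResults: 5,    // top-5 predictions
    threshold: 0.5,   // minimum confidence
    imageMean: 127.5, // typical MobileNet normalization
    imageStd: 127.5,
  );
  print(recognitions); // each entry carries a label, confidence, and index
}
```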

After learning to use machine learning models inside Flutter (Dart) through two different approaches, in the third section of this course you will learn to train your own machine learning models without any background knowledge of machine learning. We will explore some platforms that enable us to train machine learning models for mobile devices with just a few clicks. So in the third section, you will learn to

  • Collect and arrange the dataset for model training

  • Train machine learning models from scratch using Teachable Machine

  • Retrain existing models using transfer learning

  • Use those trained models inside Google Flutter (Dart) applications

So we will train models to recognize different breeds of dogs and to recognize different fruits, and then build Google Flutter (Dart) applications using those models for Android and iOS.

By the end of this course, you will be able to:

  • Use Firebase ML Kit inside Google Flutter (Dart) applications for Android and iOS

  • Use pre-trained TensorFlow Lite models inside Android & iOS applications using Google Flutter (Dart)

  • Train your own image classification models and build Flutter applications with them.

You'll also have a portfolio of over 15 apps that you can show to any potential employer.

Meet Your Teacher

Hamza Asif

Android Developer | Instructor

Hello, I'm Hamza.

I have a degree in computer science and have a passion for Android Development.

Powering Android applications with ML really fascinates me, so I learned Android development and then machine learning. I have developed Android applications for several multinational organizations, and now I want to spread the knowledge I have. I'm always thinking about how to make difficult concepts easy to understand, what kind of projects would make a fun tutorial, and how I can help you succeed through my courses.


Transcripts

1. Course introduction: Welcome to Machine Learning Using Flutter, The Complete Guide course. The use of machine learning in app development is increasing exponentially, but developers who can build ML-powered applications are rare, which means the demand for such developers is quite high, so it's the perfect time to update your skill set. This course will teach you to build machine learning-based Flutter applications without any background knowledge of machine learning. Without knowing anything about machine learning algorithms, we will build applications like object detection, image classification, face detection, text recognition, pose estimation, text translation, image segmentation, barcode scanning, a fuel efficiency predictor, and much, much more. Apart from all of this, you will learn to train your own machine learning models and then build Flutter applications with just a few clicks. So without a doubt, this is the best Flutter machine learning course available online, and if you have even a little knowledge of Flutter, you can take this course. My name is Muhammad Hamza Asif and I am a leading mobile machine learning instructor, and this course is the most special and exciting course of our mobile machine learning series. The course is divided into three major sections. In the first section we will learn the use of Firebase ML Kit inside Flutter to build image labeling, text recognition, barcode scanning, face detection, text translation, and language identification applications. Then in the second section of this course, we will learn the use of TensorFlow Lite models inside Flutter. TensorFlow Lite is a standard format for running ML models on mobile devices. In this section we will first learn the use of popular pre-trained machine learning models inside Flutter and build image classification, object detection, pose estimation, and image segmentation applications. After that, we will learn the use of regression models inside Flutter and build a couple of applications, including a fuel efficiency predictor for vehicles. Then in the final section of this course, you will learn to train your own machine learning models. We will explore some platforms that enable us to train powerful machine learning models without knowing any background knowledge of machine learning. You will learn to collect and arrange the dataset for model training, then you will learn to train the model on that dataset, and finally we will use those models inside our Flutter applications. So after completing this course, you will be able to build powerful machine learning-based Flutter applications, you will have more than 15 powerful ML-based Flutter applications to strengthen your CV, and you can impress your potential employer with your Flutter ML skills. But one important question is: who can take this course? If you are a beginner Flutter developer, or even an experienced professional, this course is for you, but you must have at least a little knowledge of Flutter to start this course. So if you want to distinguish yourself from other app developers, you must learn to use ML models inside your Flutter applications. Join me today in this most comprehensive and exciting Flutter ML course. 2. Firebase ML Kit: Welcome to this lecture. In this lecture we will look at a brief introduction of Firebase ML Kit.
So what is Firebase ML Kit? It is a mobile SDK that brings Google's machine learning expertise to Android and iOS apps in a powerful yet easy-to-use package. Whether you are new to or experienced in machine learning, you can implement the functionality you need in just a few lines of code. There is no need to have deep knowledge of neural networks or model optimization to get started. It means that you don't need any background knowledge of machine learning to use machine learning models inside your Android and iOS applications. So let's look at the main features of Firebase ML Kit, and one of them is production-ready models for common use cases. ML Kit comes with a set of ready-to-use APIs for common mobile use cases like recognizing text, detecting faces, scanning barcodes, labeling images, and identifying the language of text. Simply pass the data to the ML Kit library and it will give you the information you need. So Firebase ML Kit provides us ready-to-use models; we just pass the input to those models and we get the output. That's how simple the process of using Firebase ML Kit is. Similarly, we can perform both on-device and on-cloud machine learning with the help of Firebase ML Kit: either we can download the models onto the device and use them there, or we can use models hosted on the cloud, and both of these approaches have their unique advantages. So now let's look at the implementation path. Our first step will be adding Firebase ML Kit to our Flutter application: we will integrate the Firebase ML Kit SDK inside our Flutter project. After that, we will prepare our input data, which can be an image taken from the gallery or camera, or a frame of the live camera footage. After preparing our input, we will pass this input to our model and get the result, so our third step will be applying the ML model to your data. That is the simple process we will follow in this section: we will add Firebase ML Kit to our Flutter project, we will prepare our input data, we will pass this data to our model, get the results, and show those results to the user. So let's begin. 4. Image Labeling Importing starter code: Welcome to this lecture. In this lecture, we are going to build our first application using Firebase ML Kit, and that application is going to use the image labeling feature of Firebase ML Kit. We will build a simple application in which the user will either choose an image from the gallery or capture it using the camera. After that, we will pass this image to the image labeling feature of Firebase ML Kit and our model will recognize the things present inside that image. So let's begin by importing our starter application code. Open your browser and type this URL, or you can take this URL from the projects URL file shared with you. Here you need to click on this Code button and copy this repository link. After that, open your Android Studio, click on 'Get from Version Control', and paste the link here. Make sure that your version control is set to Git, and here choose the project location. Now click Clone and this project will be ready in a moment. Now the project is cloned successfully and the Gradle build is complete. So let's firstly launch this application inside an emulator and see what this starter application contains, and after that we are going to add the image labeling feature of Firebase ML Kit to this application. So let's begin by testing the starter application.
So I am going to launch this application inside an emulator. Now the application is installed inside my emulator, and this is the GUI of this application. You can see that we have this wall background image, and on that wall we have this image frame, and in the center of this image frame we have this camera icon. The flow of this application is: when the user clicks on this camera icon, the gallery will be opened and he can choose an image from the gallery. There you can see that the gallery is opening, and now let's choose this person image for now. You can see that the chosen image is being displayed here. Similarly, when the user long-presses in the center of this frame, the camera will be opened and he can capture an image using the camera. Now the emulator camera is being launched, and that is the default preview for the emulator camera; here let's capture an image. You can see that the captured image is being displayed here. So that is our starter application: the user can either choose an image from the gallery or capture it using the camera, and that chosen or captured image is displayed here. Now we need to add the code related to Firebase ML Kit so that the chosen or captured image will be passed to our image labeling model and we will get the predictions here, and we're going to do that in our upcoming lectures. 5. Image Labeling Starter code Explanation: So now let's look at this application code. Here we will firstly expand this folder and open our main.dart file. With the help of this code, we are choosing an image from the gallery or capturing it using the camera. You should be familiar with this code because in our previous section we learned to choose images from the gallery or capture them using the camera with the help of the image_picker library. Here we have repeated the same process, but we just changed the GUI of this application. Let's quickly look at this application code. Inside our pubspec.yaml file, you can see that we added this image_picker library, and we are using this library to choose images from the gallery or capture them using the camera. After that, in our main.dart file, we have our code. Inside this main.dart file, we are using a stateful widget. Inside this class we firstly have this initState method: when the application is first launched and this screen is being shown, this initState will be called, and there we are initializing our image picker library; we declared the library object above. After that we have a couple of methods and we will look at them later, but now let's look at the GUI. Inside our build method, we have the GUI of this application. You can see that we have this MaterialApp, inside that we have this Scaffold, and after that we have this Container. With the help of this Container, we are setting the background of this application, and that is this img2.jpeg image; when I open this image, you can see that it is the wall background. Now move back to the main file. Here we are displaying this background image; after that we added a Column widget, and inside this Column widget we have our Stack. We are using this Stack because we need to show the image that the user has captured or chosen from the gallery above this frame image.
So here inside this Stack, we are firstly showing this frame image, and above that we added this Center widget. Inside the Center widget we added this FlatButton, and inside this FlatButton we placed our Image widget. Inside this Image widget our chosen or captured image will be shown, and when this _image variable is equal to null, only in that case the camera icon will be shown. You can see that when we open the application we have our camera icon, because in that case this _image is equal to null, so we are showing this camera icon here. Once the user chooses or captures an image, this _image will not be null, so we will display that image here. Hopefully you get the idea. We are displaying all of these widgets inside a FlatButton because we are adding an onPressed and an onLongPress listener on them. When the user presses it, we are calling this imageFromGallery method, and inside this method we are using this image picker library to choose the image from the gallery. There you can see imagePicker.getImage, and there we specified the source as gallery; when the user picks a file, that image file will be stored inside this _image variable. Now this _image variable will not be null, so this image will be displayed. Similarly, when the user long-presses, this imageFromCamera method will be executed, and there we are capturing an image using the camera; you can see that we specified the source as camera there. Once the user chooses or captures the image, we are displaying it here. After that, we need to pass the image to our image labeling model and get the predictions. You can see that we have this doImageLabeling method here, and we need to call this method once the user has either chosen the image from the gallery or captured it using the camera. Inside this method we are going to pass that image to our model and get the predictions, and we are going to do that in our next lecture. But for now you should understand this code and how we are choosing an image from the gallery or capturing it using the camera. 6. Adding Firebase ML and Image Labeling related code: Welcome to this lecture. In the previous lecture we imported our starter application code from GitHub, and you have seen the working of that application; apart from that, we also covered the application code. Now we need to add our image labeling code. For that purpose we are going to use a package named google_ml_kit, and you can get that package from pub.dev. Inside your browser type this URL and the site will be opened; there you will find the packages related to Dart and Flutter. Here we are going to type google_ml_kit and press Enter, and in the results you can see this first result, the google_ml_kit library. The popularity is 90 percent, so it is quite a popular library, and it also provides null safety. Just click on it, and the documentation page for this library is opened. Here we have the steps to add several features of Firebase ML Kit to our applications. But firstly, we need to install this library inside our Flutter application, and for that purpose just click on this Installing section and there you will find the instructions. You need to copy this dependency into your pubspec.yaml file, so just copy it and move back to the application.
And here open your pubspec.yaml file, and there we are going to paste this dependency. After that click on 'Pub get' so that this library will be downloaded and we can use it; it will be done in a moment. There you can see that the library was added successfully, and now we're going to follow the instructions, so open the Read Me section. There you can see the different features of Firebase ML Kit that you can use inside your Android and iOS applications using this library: it supports text recognition, face detection, pose estimation, barcode scanning, image labeling, digital ink recognition, language identification, on-device translation, smart reply, and entity extraction. All of these features are supported by this library, but right now we are interested in image labeling. Here scroll down, and you can see the requirements to use this library on iOS and Android. For iOS, the minimum iOS deployment target should be 10, and similarly we should use Swift 5 and so on. For Android, the minimum SDK version should be 21, and inside your app-level build.gradle file you can change this minimum SDK; similarly, you can set these values inside the iOS section. These are the requirements, and inside our starter application our minimum SDK version is already 21, so we don't need to change it. But if you want to change it, then here inside this android section you need to open this build.gradle file; there you can see that the minimum SDK version is already 21, and if you have less than 21, then change it to 21 here. After that, open the documentation page, and here we have the steps to use this library. Firstly, we need to create our InputImage, and we are going to pass this InputImage to our image labeling model; we cannot directly pass the image file to the model. Here we have several options: we can create this InputImage using a path, using a file, using bytes, or from a CameraImage. As we have the image file, we're going to copy this section of code. Just copy it and move back to your application, and here inside our main.dart we're going to paste it inside this doImageLabeling method, because whenever the user chooses or captures an image, we want this method to be executed. There we pasted it, and now we need to add the import for this InputImage: just click on it and press Alt+Enter, and there you can see the option to import the library 'package:google_ml_kit/google_ml_kit.dart'; just add the import and the error will be gone. Similarly, we need to specify our image file, and that is named _image; inside this file variable we are storing our image file. Now we created our InputImage, and the next step is passing it to our image labeling model, but before that we need to declare and initialize an object of type ImageLabeler. On the documentation page, we have the section to create an instance of a detector; there you can see that we can create a barcode scanner, a face detector, an image labeler, and so on. Here we will copy this line for the image labeler, and we're going to paste it inside our initState method, because we want our image labeler to be initialized when our application is first launched. So I'm going to paste it here.
And now we need to declare this image labeler above so that we can use it inside this doImageLabeling method. So let's copy it and paste it at the top to declare the image labeler; there we will change this into a declaration of type ImageLabeler named imageLabeler, add a semicolon, and remove the final keyword here. There you can see that we are declaring this imageLabeler here, and then we are initializing it inside this initState method: with the help of GoogleMlKit.vision.imageLabeler() we are initializing it. Now the next step is passing this InputImage to this labeler and getting the predictions. For that purpose we have the code inside this third step, so copy the corresponding code. Here you can see that we have this imageLabeler.processImage method; just copy this line and we will take a look. I'm going to copy it and paste it below this InputImage. Now you can see that we are calling this imageLabeler.processImage method, and here we are specifying our input image, and this method will return us a list of ImageLabel objects. These are actually the predictions returned by the model. So whenever the user chooses or captures the image, we want this method to be executed; here we create an InputImage from the image that the user has just captured or chosen, and then we pass this image to this image labeler and get the list of labels. The next step is iterating this list of labels, getting the predictions, and showing these predictions inside our application GUI. Here you can see that inside the GUI of our application we also have this Text widget, and here we are displaying the value that is present inside this result variable. Whatever we store inside this result variable will be visible in our application below this frame, so if we type anything into this result variable, it will be displayed here. Now we're going to assign all the predictions to this result variable so that they will be visible inside our application GUI. For that purpose, open the documentation page, and here we have this section, 'extract data from response'. Here we have the code for different features, so you need to find the one for labels. You need to copy this code where we are iterating this list of labels, and after that paste it inside our doImageLabeling method below this processImage method; so I'm going to paste it here. There you can see that we are iterating this list of labels, and for each ImageLabel object we are getting the label name with the help of this label.text property, but in the latest documentation this has actually changed: in order to get the prediction name, you need to use this label.label property. Similarly, you can get the index with the help of this label.index property, and you can get the confidence score with the help of this label.confidence property. Here this label is indicating the name of the prediction, this index is indicating the index, and this confidence score is indicating how sure the model is about a particular prediction. For example, if our model predicted that the thing is a banana, this label.label property is going to return the text 'banana', and this confidence score value will be between 0 and 1; if it is, for example, 0.7, it means that our model is 70 percent sure that it's a banana. That is the purpose of this confidence score.
And similarly we have this index, which is indicating the position of that label in the file where all the label names are stored, so you can ignore it as well. But here we need to show the name of the prediction and the confidence score inside our Text widget, so we're going to assign them to our result variable, because inside our Text widget we are showing the value of this result variable. So just take this result and append to it: here we will firstly display the name of the prediction, which is stored inside this label property, after that add a couple of spaces, and then we will show the confidence score. Then we will call toStringAsFixed, because this confidence value will be between 0 and 1 and it may contain a lot of digits after the floating point; using this method we can specify the number of fraction digits, so I will specify 2 here. After that, add a newline character, because we want each prediction to be shown on a new line. So here you can see that once we get the list of labels we are iterating it, and then we are appending each label name and confidence score to this result variable. We need to enclose this assignment inside our setState method so that wherever this result variable is being used, the changes will apply; so here we will write setState, you can see this block, and now just cut this line and paste it inside this setState method. Now we need to call this doImageLabeling method inside this imageFromGallery and imageFromCamera method, so that whenever the user chooses or captures an image we will call this method and pass that image to our image labeling model. So just call this method here, and after that call it here: doImageLabeling, and then add a semicolon. There you can see that once the user captures the image, we're going to call this doImageLabeling method, and inside this method we're passing that image to our model, getting the predictions, and then showing these predictions inside our Text widget. But here we need to do one more step, and that is resetting this result variable so that each time the user chooses a new image, only the predictions for that image will be shown. So simply set this result variable to an empty string: when the user chooses another image, we will firstly remove the predictions of previous images, because they are stored inside this result variable, and after that we're going to show the new predictions inside this Text widget. And that's it; now you can run this application inside your emulator and test it for a couple of images, and this application will work perfectly. But we need to do one more step, and that is closing this image labeler whenever the application is closed. We need to add a method named dispose, because that method is called when we are closing our application; when you type 'dispose' you will see the dispose method, so just click on it. Now inside this method we will close our image labeler: just call imageLabeler and then call the close method. Now we are closing this image labeler whenever the application is closed, and it's a good approach to free the resources before closing the application. So now let's run this application inside an emulator and test it, and after that we will quickly review the process.
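
To recap this lecture's steps in one place, here is a condensed sketch of the image labeling flow just described. It follows the older GoogleMlKit.vision.imageLabeler() API of the google_ml_kit package used in the recording (newer releases split this into google_mlkit_image_labeling), and the _image and result names mirror the starter code.

```dart
import 'dart:io';

import 'package:google_ml_kit/google_ml_kit.dart';

late ImageLabeler imageLabeler; // created once, e.g. in initState
File? _image;                   // the chosen or captured image
String result = '';

// Call from initState: create the labeler once.
void initLabeler() {
  imageLabeler = GoogleMlKit.vision.imageLabeler();
}

// Pass the picked image to the model and collect the predictions.
Future<void> doImageLabeling() async {
  final inputImage = InputImage.fromFile(_image!);
  final List<ImageLabel> labels = await imageLabeler.processImage(inputImage);

  result = '';
  for (final label in labels) {
    // label.label is the prediction name, label.confidence is between 0 and 1
    result += '${label.label} ${label.confidence.toStringAsFixed(2)}\n';
  }
  // In the real widget this assignment is wrapped in setState(() { ... }).
}

// Call from dispose: free the native resources.
void closeLabeler() {
  imageLabeler.close();
}
```
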
7. Testing Firebase Image Labeling Application: So now the application is installed again, and here let's test it by choosing an image from the gallery. I'm going to click in the center and the gallery will be opened; here let's choose the same image. Now you can see that this image is being displayed here, but apart from that, we also got these predictions. The first prediction is 'muscle' and the confidence score is 0.96, which means that our model is 96 percent sure that a muscle is present inside this picture, and as you can see the muscles of this bodybuilder, this prediction is quite right. Similarly, it thought that there may be some kind of bodybuilding event, and then we have this 'shorts' prediction, as this person is wearing shorts, so this prediction is also right. Similarly, we have labels like sports, swimwear, jeans, competition, and so on, and all of these predictions are quite related to this picture. So this application is working quite well. That is the image labeling model of Firebase ML Kit, and you can see the power of this feature, as it provided us all the possible things that are present inside this image. Now let's quickly review the process. Firstly, we imported our starter application code from GitHub; in that starter application, with the help of this image_picker library, we are choosing an image from the gallery or capturing it using the camera. We added this library here and then we are using it inside our main.dart file. After that we added this google_ml_kit package, and using this package we are passing this image to our image labeling model. Inside our main.dart file we firstly declared a variable imageLabeler and then we initialized it; after that, we are creating our InputImage object, then we are passing this object to this image labeler and getting the predictions, and after that we are showing these predictions inside our Text widget. So that is the image labeling application. 8. Camera Package Setup for Flutter: Welcome to this lecture. In the previous lectures you have seen the use of the image picker library to choose images from the gallery or capture them using the camera, and we're going to use this library in future with our machine learning models. The next thing we need to learn is to use the live footage from the camera with our models. But before that, we need to access this live footage from our camera, and we need to access it frame by frame so that we can process each frame and pass it to our model. In this lecture we will look at a package named camera, using which we will get access to our live camera footage, and we can also access this live footage from the camera frame by frame with the help of this package. So let's start by creating a new Flutter project. Open your Android Studio and create a new Flutter project; I'm going to create one here and choose Flutter Application. Now click Next, and there you need to specify your project name; I'm going to name it 'live'. Now click Next, then click Finish, and your Flutter project will be created in a moment. Now you can see that our Flutter project is created successfully. We have the code for this default counter application, and now we need to make changes in this code to get the live camera footage.
Firstly, we need to add the package named camera to our Flutter application. Open your browser and go to the site pub.dev, type 'camera' in the search and press Enter, and you will get the packages related to that. Now you can see the results; just click on this first result and the documentation page for this camera package will be opened. The first step is installing this package inside our Flutter application, so go inside this Installing section, and there you can see that we need to add this line inside the pubspec.yaml file. Just copy this line, move back to your Flutter application, expand your project, and go inside your pubspec.yaml file. There, inside the dependencies section, you need to paste this line; I'm going to paste it just below the cupertino_icons dependency. Now click on 'Pub get' and this package will be downloaded in a moment. Now you can see that this package is downloaded successfully. Let's move back to the documentation page and go inside this Read Me section so that we can follow the next instructions. When you scroll down, you can see that for iOS we need to add these four lines inside the Info.plist file of our iOS section. We need to add these lines because, in order to access the live camera footage, we need two permissions: the first one is for the camera and the second one is for the mic. Whenever these permissions are requested, the string or text that will be displayed to the user will be taken from these lines. So just copy these lines and paste them into your Info.plist file. After copying them, move back to your Flutter application and expand this ios section; inside this folder we have our Info.plist file, and at the top, inside this dict tag, we're going to paste these lines. You can see that these lines are specifying 'can I use the camera, please' or 'can I use the mic, please'; this text will be displayed whenever these permissions are requested from the user. Now the setup for our iOS section is complete, and now let us look at the instructions related to Android. If you scroll down inside the Android section, you can see that the minimum SDK version should be 21 for using this package, so we need to change it inside our app-level build.gradle file. Move back to your Android Studio project, go inside your android section and open this app folder, and there we have our build.gradle file; when you scroll down, we have this minSdkVersion, just change it to 21. Now the setup for our Android and iOS sections is complete, and now we need to use this package inside our Flutter application. 9. Flutter Camera Package Code: The application we're going to build is a very simple one in which only the live camera footage will be displayed. To achieve that, we need to make changes inside our application UI. Just scroll down and move to your build method; there you can see that we have our build method and there we have a Scaffold. I'm going to remove all the code that is present inside the Scaffold: scroll down, you can see that our Scaffold is ending here, so I'm going to remove all the code inside this Scaffold. Now in this Scaffold we want to show the live camera footage, and to do it we need to add a widget named AspectRatio. So move back to the instructions page, and when you scroll down you will see that we have this widget here.
Just copy this AspectRatio widget and paste it inside your Scaffold. Move back to the Flutter project, and inside the body of our Scaffold we're going to paste this widget. There you can see that we have two errors. When you click on this CameraPreview and press Alt+Enter, you will see that it is giving us an option to add the import for this library, camera.dart; just click on it and this import will be added to our main.dart file. This error is gone, and the second error is related to this controller. This controller is actually initializing the camera, and we need to declare this controller above. But before that, let's look at this AspectRatio widget: inside it, we are specifying two parameters. The first one is the aspect ratio, and we are getting the aspect ratio using this controller, and the second one, the child, is the CameraPreview, and we are also passing this controller here. This CameraPreview will show the live footage from the camera. Now let's look at the next instructions, which are related to this controller. Move back to the instructions page and scroll down; there we have this CameraController declared. Just copy all these lines along with this initState method (I'm going to explain them), and now move back to the project. Above this build method we need to paste these lines; as we are not using this incrementCounter method and this counter variable, just remove them and paste the lines here. There you can see that firstly we declared a variable of type CameraController, and then we initialize this controller here with the help of this CameraController constructor. There we are passing the first element of the cameras list (we will look at it in a moment), and the second parameter is the resolution, so we are setting this resolution to medium; you can also set it to high if you want. After that we are calling this controller.initialize method, and once this method is called, the live camera footage will be displayed inside our application UI, because this controller will be initialized and we are showing this controller inside this CameraPreview, so we will have the live camera footage. Then we are calling this then method, and inside that we are checking whether the camera is mounted successfully or not. If the camera is mounted successfully, then we can get the live camera footage frame by frame. In order to get the live camera footage frame by frame, we're going to use this controller: just write controller.startImageStream. You can see that we have this method here; similarly, I will add braces here because they are more comfortable for me, add a little bit of spacing, and at the end we're going to add a semicolon. Once this controller is initialized, we can get the live camera footage frame by frame using this method: this controller.startImageStream will return the live camera footage frame by frame, and each frame will be stored inside this image variable, so we can pass this frame to our model and get the results in future. And there one thing is missing, and that is this cameras list. This list will actually contain the list of device cameras. Move back to the instructions page: there you can see that we are declaring this cameras list here, its type is List of CameraDescription, and we are initializing it here in our main method, so cameras is equal to await availableCameras().
Then it will return us all the device's available cameras and they will be stored inside this list. If the device has two cameras, like the back camera and the front camera, then index 0 will contain the back camera device and index 1 will contain the front camera device. Firstly, we need to copy this line so that we can declare this list; copy it, move to your application, and at the top we're going to paste this line, just above this main method. After that, we need to copy two more lines where we are initializing this list: just copy these lines and paste them inside your main method, just above this runApp method. Now you can see that we have this error: just click on this await keyword and press Alt+Enter. We need to add the async keyword to this method because we want this process to happen asynchronously, and now you can see that the error is gone. Now let's look at this process in detail. Firstly, we are creating a List of CameraDescription and we're naming it cameras; after that we are initializing this list here, so it is going to contain all the device cameras. Then, inside our class, we are creating a CameraController; you can see that we are creating it and initializing it inside our initState method. There we are initializing this camera controller and passing the back camera of the device to it, and if you want to open the front camera, then you should pass 1 here; in that case the front camera of the device will be opened, but I'm going to keep this at 0. When we call this initialize method, the camera will be initialized and we will have the live camera footage displayed inside our application UI, where we used this AspectRatio widget. This widget takes two parameters, the aspect ratio and the CameraPreview. And once the camera is initialized, we can get the live camera footage frame by frame, and each frame is stored inside this image variable, so we can get the stream and pass it to our model and get the results, which we're going to do in future. But here one more thing is needed: we're not checking whether the controller is initialized when we are putting it inside our AspectRatio widget. Just before this AspectRatio, we need to check the status of this controller, and only then display this AspectRatio widget; otherwise, we will get an error at the start of the application, which will be gone after some time, but it is not a good practice. We will add a check here so that if the controller is initialized, this AspectRatio widget will be displayed; otherwise we will show an empty container. Just above this return statement we're going to write: if controller.value.isInitialized, then we return this Scaffold with this AspectRatio widget; otherwise, inside the else statement, we return a simple Container. Now the coding of our application is complete and you can test it, but before that I'm going to quickly go through the process. The first step is adding the dependency to your pubspec.yaml file. After that, you need to set the minimum SDK version for your Android application to 21, and you need to add some lines inside your Info.plist file for your iOS application. Then, in your application UI where you want to show the live camera footage, you add this AspectRatio widget and this CameraPreview widget using this controller variable; you need to declare this controller variable above and initialize it. Once this controller is initialized, then if you want, you can also get this live camera footage frame by frame. There we are specifying which device camera to open; we are creating this cameras list above, and to get the list of device cameras we're using this availableCameras method. So these are the steps to add a live camera preview inside your Flutter application and get that footage frame by frame.
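
Pulling the steps of this lecture together, here is a minimal sketch of the camera setup just described. It uses the standard camera package API and is trimmed to the preview-related parts; it is written against a recent Flutter release, so minor syntax details (null safety, super.key) differ from the 2021 recording.

```dart
import 'package:camera/camera.dart';
import 'package:flutter/material.dart';

late List<CameraDescription> cameras;

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  cameras = await availableCameras(); // all device cameras; index 0 is usually the back camera
  runApp(const MaterialApp(home: LivePreview()));
}

class LivePreview extends StatefulWidget {
  const LivePreview({super.key});
  @override
  State<LivePreview> createState() => _LivePreviewState();
}

class _LivePreviewState extends State<LivePreview> {
  late CameraController controller;

  @override
  void initState() {
    super.initState();
    controller = CameraController(cameras[0], ResolutionPreset.medium);
    controller.initialize().then((_) {
      if (!mounted) return;
      setState(() {}); // rebuild once the preview is ready
      // Frames of the live footage arrive here one by one.
      controller.startImageStream((CameraImage image) {
        // later: pass each frame to an ML model
      });
    });
  }

  @override
  void dispose() {
    controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    if (!controller.value.isInitialized) return Container();
    return Scaffold(
      body: AspectRatio(
        aspectRatio: controller.value.aspectRatio,
        child: CameraPreview(controller),
      ),
    );
  }
}
```
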
Now let's simply run our application and test it. 10. Importing Image Labeling live feed application starter code: Welcome to this lecture. In our previous lectures, we learned to use the image labeling feature of Firebase ML Kit inside our Flutter application, but in that application we used images with the image labeling model. Now it's time to use the live camera footage with the image labeling model. We're going to build a live feed image labeling application in which we will pass the frames of the live camera footage to our image labeling model, and the predictions will be shown to the user in real time. We will start by importing our starter application code from GitHub. Open your browser and type this URL; you can take this URL from the projects URL file shared with you. Here you need to click on this Code button and copy this repository link. After that, open your Android Studio, click on 'Get from Version Control', and paste the link here. Make sure that your version control is set to Git, and here choose the project location; after that click Clone and this project will be ready in a moment. Now the project is cloned successfully and the Gradle build is complete. Let's firstly launch this application inside an emulator and see what this starter application contains. I am going to launch it inside an emulator, but you should run this application on your real device as well. So let's launch it. Now the application is installed inside the emulator, and this is the GUI of this application. You can see that we have the same background image which we had in our image application, but now we have this LCD, and in the center of this LCD we have this video icon. When the user clicks in the center of this LCD, you will see that the live camera footage will be displayed here. There you can see that the live camera footage is being displayed; as we are running this application inside an emulator, we are getting the camera preview of the emulator here, and when we run this application on a real device, you will see the real live camera footage, so you can test this application on your device as well. So that is our starter application: we are showing the live camera footage inside it, and now we need to pass the frames of that live camera footage to our image labeling model and get the predictions. 11. Showing Live Camera Footage:
And we also have the google_ml_kit library, because we are using it to access the features of Firebase ML Kit. After adding these dependencies, let's open the main.dart file, which is in the lib folder. You can see that we have a stateful widget, and inside the MyHomePage state class we have the code to display the live camera footage. First let's look at the GUI inside the build method. We have a MaterialApp, then a SafeArea, and below that a Scaffold and a Container; with the help of this Container we are setting the background image of the application. After that we add a Column widget, and inside the Column we first add a Stack, because we need to show the LCD image and, above it, the live camera footage. So inside the Stack we first place the LCD image, and after that we have a FlatButton widget. Inside that button we check whether the img variable is equal to null: if it is, we show a container with the video icon, and if it is not null, we display the live camera footage using the AspectRatio widget. That widget is responsible for showing the live camera footage inside this application. You should already be familiar with this code, because in the previous section we learned to display live camera footage inside a Flutter application; if you did not complete that section, you should complete it first and then continue here. So we display the live camera footage depending on the value of this img variable, which is actually the CameraImage we declared above. At the start img is null, so we show the video icon; otherwise we display the live camera footage. Below all of that we have a Text widget in which we will show the output.

Now let's look at the code related to displaying the live camera footage. We have this initializeCamera method, and it is executed once the user taps the LCD: we attached it to the FlatButton, so when you press the button, initializeCamera is called and the live camera footage is displayed. Inside this method we initialize our CameraController, passing several parameters. The first parameter is the camera you want to open: pass cameras[1] for the front camera or cameras[0] for the back camera. We created this list above — you can see the list of CameraDescription — and we fill it with all the available cameras, so the camera descriptions are stored inside this cameras list. Here we pass the description of the back camera, and then we set the resolution. After that we call controller.initialize, and if the camera is mounted successfully, we start to get the frames of the live camera footage inside our application.
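As a rough sketch of that conditional part of the build method (the sizes are illustrative, and img, controller and initializeCamera follow the names used in the transcript; note also that FlatButton has since been deprecated in favour of TextButton):

```dart
// Inside the Stack, placed above the LCD image:
TextButton(
  // The starter project uses FlatButton; TextButton is its modern equivalent.
  onPressed: initializeCamera, // tapping the LCD starts the live feed
  child: img == null
      // No frame yet: show the placeholder video icon.
      ? Container(
          width: 250,
          height: 180,
          child: const Icon(Icons.videocam, size: 50, color: Colors.white),
        )
      // Frames are arriving: show the live camera preview.
      : AspectRatio(
          aspectRatio: controller!.value.aspectRatio,
          child: CameraPreview(controller!),
        ),
)
```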
And we are doing that with the help of the controller.startImageStream method. Once the live camera footage is displayed, we get its frames inside this method: for each frame this callback is executed, and here we can process that frame. First we check the value of isBusy — a boolean variable we created, false by default. If this variable is false, it means our system is not busy and no previous frame is being processed. So we set it to true, assign the CameraImage to this img variable, and then call this doImageLabeling method. Now let's understand the flow: we need to pass the frames of the live camera footage to our model, and we will do it in such a way that only one frame is passed to the model at a time. This isBusy variable ensures that. We first check that it is false, meaning no frame is being processed; we then set it to true so that the next frames skip this section and are ignored; and after that we pass the current frame to our model and get the predictions. We're going to write that code inside this doImageLabeling method, but hopefully you get the idea of how these frames will be processed: for each frame this method is called, we process the frames one by one, and then we show the results. So that is our starter application: it shows the live camera footage, we get the frames of that footage, and we will feed them to our machine learning model. In the next lecture we will add the code related to the Firebase ML Kit image labeling API. See you in the next lecture.

12. 3 livefeedimagelabeling:
Welcome to this lecture. In the previous lectures we imported our starter application code, and you have seen how that application works: it shows the live camera footage with the help of the camera package. Now we need to pass the frames of the live camera footage to our image labeling model and get the results in real time, and in order to do that we're going to use the same google_ml_kit package. Inside your browser open pub.dev and search for google_ml_kit, as I showed you earlier. Then click on the Installing tab and add the dependency to your pubspec.yaml file — copy the dependency and paste it inside the dependencies section. For our starter application we have already done that; you can see the dependency inside the pubspec.yaml file. Now, on the package's documentation page, open the Readme section, where we have the instructions to use this package. First we need to create our InputImage, which we will pass to our image labeling model. Previously we created the InputImage from an image file, but now we don't have a file — instead we have a frame of the live camera footage, and the type of each frame is CameraImage. The docs have a section showing how to create an InputImage from a CameraImage, so copy that section of code and then move back to the application.
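Here is a small sketch of that gating pattern, using the variable names from the transcript (img, isBusy, doImageLabeling); exactly where isBusy is reset back to false is my assumption — the course does it once the frame has been processed.

```dart
bool isBusy = false;   // true while a frame is being processed
CameraImage? img;      // the most recent frame handed to the model

void startFrameStream() {
  controller!.startImageStream((CameraImage image) {
    if (!isBusy) {
      isBusy = true;        // ignore further frames until this one is done
      img = image;
      doImageLabeling();    // hand the frame to the model
    }
  });
}

Future<void> doImageLabeling() async {
  // ...convert img to an InputImage and run the labeler (next lectures)...
  setState(() {
    // update the result shown in the Text widget
  });
  isBusy = false; // assumption: reset here so the next frame can be processed
}
```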
Now, inside our main.dart file we will paste this code — but where should we paste it? For each frame of the live camera footage the stream callback is called, and for that frame we call this doImageLabeling method; inside this method we need to convert the frame into an InputImage and pass it to our image labeling model. But it is not preferable to paste all of that code directly there, so we will create a separate method that takes the CameraImage and returns an InputImage. Let's declare a method whose return type is InputImage, name it getInputImage, and give it a parameter of type CameraImage, so the caller passes a camera frame and gets an InputImage back. After declaring this method we need to add the import for InputImage. Now paste the code we copied inside this method. We get a couple of errors. First, remove the first line, because we already have our camera description. Then click on WriteBuffer, press Alt+Enter, and add the import for foundation.dart. Next, wherever the docs' cameraImage variable is used, change it to image, because our parameter is named image. Now we have one remaining error, for the camera's sensorOrientation: we need to pass the orientation of the camera, and we get the camera description from our list of CameraDescription, so write cameras and specify index 0, because index 0 is also what we use when showing the live camera footage. So this method creates our InputImage from the CameraImage; to finish it, write return and pass the input image back. Now the method is complete: for each frame of the live camera footage we will call it and it will convert that frame into an InputImage.

We call this method inside the doImageLabeling method. First declare an InputImage variable and name it inputImage, then call getInputImage, passing our camera frame — we store each frame inside the img variable, so we pass that here. So now, for every frame of the live camera footage, when doImageLabeling is called we convert that CameraImage frame into an InputImage. The next step is passing it to our model. On the documentation page you can see that after creating the InputImage, the next step is creating an instance of the detector, so we need to declare our image labeler. In fact, in the starter application that is already done: we declared our ImageLabeler and initialized it inside the initState method.
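For reference, the conversion snippet in the google_ml_kit Readme of that era looked roughly like the sketch below. The exact class and helper names (InputImageData, InputImagePlaneMetadata, the rotation/format helpers) have changed between package versions, so treat this as an approximation of the code being pasted, not the current API.

```dart
import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart';        // WriteBuffer
import 'package:flutter/material.dart';          // Size
import 'package:google_ml_kit/google_ml_kit.dart';

InputImage getInputImage(CameraImage image) {
  // Concatenate the bytes of all image planes into one buffer.
  final WriteBuffer allBytes = WriteBuffer();
  for (final Plane plane in image.planes) {
    allBytes.putUint8List(plane.bytes);
  }
  final bytes = allBytes.done().buffer.asUint8List();

  final Size imageSize = Size(image.width.toDouble(), image.height.toDouble());

  // Rotation comes from the camera description we use for the preview (index 0).
  final imageRotation =
      InputImageRotationMethods.fromRawValue(cameras[0].sensorOrientation) ??
          InputImageRotation.Rotation_0deg;

  final inputImageFormat =
      InputImageFormatMethods.fromRawValue(image.format.raw) ??
          InputImageFormat.NV21;

  final planeData = image.planes
      .map((Plane plane) => InputImagePlaneMetadata(
            bytesPerRow: plane.bytesPerRow,
            height: plane.height,
            width: plane.width,
          ))
      .toList();

  final inputImageData = InputImageData(
    size: imageSize,
    imageRotation: imageRotation,
    inputImageFormat: inputImageFormat,
    planeData: planeData,
  );

  return InputImage.fromBytes(bytes: bytes, inputImageData: inputImageData);
}
```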
Now the next step is passing our InputImage to this labeler and getting the predictions. On the documentation page there is a section for that, so copy the line where the image is processed, and paste it inside our doImageLabeling method, just below the input image. Then change imageLabeler to labeler, because we named our image labeler labeler. You can see that we pass our InputImage to the labeler.processImage method, and it returns a list of ImageLabel, just like in our previous application. The next step is iterating this list and showing the results to the user. On the documentation page there is a section that iterates the labels, so copy that block of code and paste it inside the doImageLabeling method. Now you can see that for each label the model returned for this particular frame, we get the name of the label, the index and the confidence. Change the variable here so that, to get the label name, we use the label.label property; now the error is gone. So for every predicted label we get this information, and the next step is showing it inside our Text widget. The Text widget displays the value of the result variable, so we will update that variable inside this for loop. We append to it, because we want to show all the predictions: write result += the label name, then a couple of spaces, then the confidence score, and call toStringAsFixed on it, because the confidence can contain a lot of digits after the decimal point and we only want to show two of them — so specify 2 as the number of digits. After that, add a newline character so the next prediction is shown on a new line. Each time we pass an InputImage to the labeler we get the labels back, we store the result inside this result variable, and it is shown inside our Text widget. Above the for loop we need to reset the result variable to an empty string, so that for the next frame only the latest predictions are shown. Finally, this update needs to happen inside a setState block, because once we store the text in the result variable we want the Text widget to be rebuilt — wherever the result variable is used, the change will then apply. So call setState, cut that line, and paste it inside the setState block.

13. Section introduction:
Welcome to the Firebase ML Kit section of this course. ML Kit is a mobile SDK that brings Google's machine learning models to Android and iOS in a powerful yet easy-to-use package.
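Putting the pieces of this lecture together, a sketch of the doImageLabeling method might look like this. Variable names follow the transcript, the labeler factory call follows the google_ml_kit API of the time, and the final isBusy reset plus the exact spacing of the result string are my assumptions.

```dart
// Created once, in initState, per the starter project.
final ImageLabeler labeler = GoogleMlKit.vision.imageLabeler();

Future<void> doImageLabeling() async {
  // Convert the current camera frame into the format ML Kit expects.
  final InputImage inputImage = getInputImage(img!);

  // Run the on-device image labeling model.
  final List<ImageLabel> labels = await labeler.processImage(inputImage);

  // Start from an empty string so only the latest frame's predictions show.
  String newResult = '';
  for (final ImageLabel label in labels) {
    // label.label: name, label.confidence: 0..1 score (shown with two decimals).
    newResult += '${label.label}   ${label.confidence.toStringAsFixed(2)}\n';
  }

  setState(() {
    result = newResult; // the Text widget rebuilds with the new predictions
  });
  isBusy = false; // assumption: allow the next frame to be processed
}
```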
In this section we will look at the features of Firebase ML Kit and build powerful ML-based Flutter applications. We will start by looking at Firebase ML Kit and the features it provides. After that we will explore the image labeling feature of Firebase ML Kit to build applications that perform image labeling on images taken with the camera or chosen from the gallery, and then we will build a Flutter application in which we use the live camera footage for image labeling. Then we will look at the barcode scanning feature of Firebase ML Kit to build applications that extract the information encoded in barcode format; again we will build two Flutter applications, one using images taken with the camera or chosen from the gallery, and one scanning barcodes using the live camera footage. After that we will explore the face detection feature of Firebase ML Kit to build applications that detect the faces present in images, along with facial landmarks and whether the person is smiling. Then we will use the text recognition API of ML Kit to recognize text in images of documents, receipts and credit cards. These features of Firebase ML Kit can be used for a variety of different applications, and we're going to learn to use them in Flutter applications. Hopefully you are excited to learn something unique and valuable, so let's begin our journey.

14. Importing Starter code for Flutter Barcode Scanning:
Welcome to this lecture. In this lecture we will start building our barcode scanning application using Firebase ML Kit and Flutter, so let's begin. The first step is importing the starter code. In the previous examples we imported the code from GitHub, but you can also get the starter code from the zip file named Firebase ML Kit shared with you. In that file you have the starter and the completed project code for all the applications we're going to build. For example, inside the ML Kit folder you can see a barcode scanning section, and when you open it you have two folders, complete and starter: the starter folder contains the starter code and the complete folder contains the finished application. So if you get stuck somewhere and are unable to solve the problem, you can check the complete code as well. Now let's open the starter code for our barcode scanning application. When I open it there is a barcode scanning folder containing the starter code; copy its path, open Android Studio, choose Open an existing Android Studio project, and paste the copied path. Expand it, select the ML Kit barcode scanning project, and press OK, and our Flutter project will be opened inside Android Studio in a moment. Now that the project is open, let's first run this starter code and see what it actually contains — I'm going to launch my emulator again. The starter application is installed successfully inside the emulator device, and you can see that in this application we have a wall with a frame hanging on it, just like our image labeling application.
Now when we click in the center of this frame, the gallery opens and you can choose an image — ideally the image of a barcode. For now I'm going to choose a random image, and you can see that it is displayed here. Similarly, when you long-press in the center of the frame, the camera opens so you can capture an image of a barcode, and that image will be displayed here. The flow of this application is exactly the same as our image labeling application: the user chooses an image from the gallery or captures it with the camera, that image is passed to our barcode scanner, and it detects the information contained inside the barcode. So that is our starter project. Now let's look at the code. I already explained this code for our image labeling application, so if you want a detailed explanation of the starter code, watch the second lecture of the image labeling section, because I explained it thoroughly there. Now we will begin by adding Firebase ML Vision to this Flutter project so that we can perform barcode scanning.

15. Flutter Barcode Scanning code:
Welcome to this lecture. In the previous lecture we opened the starter code for our barcode scanning application and you saw how the starter application works. Now we move forward, and step number two is connecting your Firebase project with this Flutter application. For this project I have already connected it to the Firebase console; if you want to see how to do that, watch the image labeling section of this course, because there I explain completely how to connect a Firebase project with a Flutter application. In the main file we have our UI, and it is exactly the same as our image labeling application: first the background image, then in the Stack the frame image, above that the image captured with the camera or chosen from the gallery, and below all of that a Text widget in which the result will be shown. The same goes for the Android section: in the app folder we have our google-services.json file, so the project is connected with the Firebase console; in the build.gradle file we have the plugin and also the dependency for the Firebase ML Vision model; and when you open the project-level build.gradle file, you can see that we also have the classpath. Now we need to follow the next step, which is adding the firebase_ml_vision package to this Flutter project. Open your browser and search for firebase_ml_vision, then open its page, because that is our desired package. Go to the Installing section, where you will find the instructions to install it — I am repeating these steps so you get familiar with the process. Copy the dependency line and open the pubspec.yaml file inside the Flutter project; below the image_picker dependency, paste this line, and click Pub get so the package is downloaded. Now we need to follow the next instruction.
Move back to the package page and click on the Readme so we can follow the next instructions. The next instruction is adding a dependency, which I have already added: copy this line and paste it into the app-level build.gradle file. In Android Studio, inside the android/app folder you have the build.gradle file; open it and paste the line in the dependencies section — I already have this dependency in the starter project, so I'm not doing it again now. Back on the page, when you scroll down you can see a meta-data tag. We need it because we are doing on-device machine learning and we want our models to be downloaded when the application is installed, so copy those lines and paste them inside the AndroidManifest.xml file. I already have them in my manifest; just to guide you, I'm repeating the process. You can see that I have this meta-data tag and I specified the value barcode, because we want to download the barcode scanning model, as we are building a barcode scanning application. Moving on, when you scroll further down you will see the instructions for iOS; you have to follow them to connect your iOS app, and the video lecture related to that will be uploaded soon, because I don't have a Mac right now. Scrolling down further, you will see detailed instructions for using this package.

The first step is creating a FirebaseVisionImage object, so copy that line, and I'm going to paste it inside our main.dart file, inside the doBarcodeScanning method. Open main.dart, and near the top of this file you can see that we have a doBarcodeScanning method, just like the doImageLabeling method. Paste the line there and specify your image file, which is _image. Now add the import for the package — click on the error and the library will be imported. The next step is creating an instance of the detector, and for that you use this line: BarcodeDetector barcodeDetector = FirebaseVision.instance.barcodeDetector(). Copy this line and paste it inside our initState method, because we want to create this detector only once, when the application starts running. Then we need to declare it outside initState so we can access it in our doBarcodeScanning method: copy the declaration, paste it just below the image picker variable declaration, and remove final from there, because we are initializing it in the method. So now we have initialized our detector. The next step is passing the image to the detector, and when you scroll further down on the instructions page you can see that we use the barcodeDetector.detectInImage method: we pass the vision image and it returns a list of Barcode. Copy this line and paste it in the doBarcodeScanning method; I have pasted it.
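So far, a sketch of doBarcodeScanning based on the firebase_ml_vision calls just mentioned would look roughly like this. Method and variable names follow the transcript; firebase_ml_vision itself has since been deprecated in favour of newer ML Kit packages, so this mirrors the API of the time.

```dart
// Declared below the image picker variable; initialized once in initState:
// barcodeDetector = FirebaseVision.instance.barcodeDetector();
BarcodeDetector barcodeDetector;

Future<void> doBarcodeScanning() async {
  // _image is the File chosen from the gallery or captured with the camera.
  final FirebaseVisionImage visionImage = FirebaseVisionImage.fromFile(_image);

  // Run the on-device barcode model; returns every barcode found in the image.
  final List<Barcode> barcodes = await barcodeDetector.detectInImage(visionImage);

  // Next step (continued below): iterate `barcodes` and show the result.
}
```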
So now you can see that we take the image we chose from the gallery or captured with the camera, create a FirebaseVisionImage from it, pass that vision image to the barcode detector, and it returns us a list of Barcode. Now we will iterate this list and get the information contained in the detected barcodes. On the instructions page there is a block of code that does exactly that, so copy those lines and paste them just below the list of barcodes; then let's look at them. You can see that we simply have a for loop that takes the Barcode objects one by one, and for each barcode we can get a variety of information. In the first two lines we get an error: barcode.boundingBox returns a Rect, so change the type Rectangle to Rect and the error disappears. The same goes for the second line, where you need to change the type to List<Offset>, and that error will be gone as well. In the third line we get the value stored in the barcode: barcode.rawValue returns that value. Then you can see that we use barcode.valueType to get the type of value stored in the barcode, because a barcode can contain many kinds of information — a phone number, an email, an SMS, Wi-Fi credentials, and so on. So we check the type of information the barcode contains: barcode.valueType returns an object of type BarcodeValueType, and we use a switch statement on it. In the cases you can see that if the value type is Wi-Fi, we get the SSID, the password and the encryption type; if the type is URL, we get the title and the URL itself; and you can add more cases. For example, I'm going to add a case for email: case BarcodeValueType.email, followed by a colon, and for that barcode.email gives us the body, the address and the subject. So you can get all of that information if the barcode contains an email. You can check other cases as well — for example a case for SMS, BarcodeValueType.sms, and using barcode.sms we can get the message text contained in that SMS and the phone number associated with it. So a barcode can contain a variety of information, and using ML Kit you can extract it. For now we will simply show the type of information the barcode contains using the toString method: I'm going to update my result variable with result += barcode.valueType.toString(), and add a newline character so that if the image contains multiple barcodes, the information for all of them is visible. Finally we need to update the UI with the result variable, so just outside the for loop I call the setState method so that the Text widget showing the result is rebuilt.
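Here is a sketch of that loop and switch, using the firebase_ml_vision types named in the lecture; the local variables read out of each case are purely illustrative — which fields you actually use depends on your app.

```dart
String result = '';

for (final Barcode barcode in barcodes) {
  final Rect boundingBox = barcode.boundingBox;        // where the barcode sits in the image
  final List<Offset> cornerPoints = barcode.cornerPoints;
  final String rawValue = barcode.rawValue;            // the raw encoded string

  switch (barcode.valueType) {
    case BarcodeValueType.wifi:
      final ssid = barcode.wifi.ssid;
      final password = barcode.wifi.password;
      final encryption = barcode.wifi.encryptionType;
      break;
    case BarcodeValueType.url:
      final title = barcode.url.title;
      final url = barcode.url.url;
      break;
    case BarcodeValueType.email:
      final address = barcode.email.address;
      final subject = barcode.email.subject;
      final body = barcode.email.body;
      break;
    case BarcodeValueType.sms:
      final message = barcode.sms.message;
      final phoneNumber = barcode.sms.phoneNumber;
      break;
    default:
      break;
  }

  // For now just show what kind of data the barcode holds.
  result += barcode.valueType.toString() + '\n';
}

setState(() {}); // rebuild the Text widget that displays `result`
```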
Now the coding of our application is almost complete, but we still need to call this doBarcodeScanning method, and we're going to call it from the imageFromCamera and imageFromGallery methods. So once the user chooses an image from the gallery, doBarcodeScanning is called, and the same goes for the camera: we call doBarcodeScanning there as well. Now the coding of our application is complete, so let's quickly go through the process. The first step is importing the code or creating your own Flutter application. The second step is connecting your Firebase project with this Flutter application. The third step is adding the firebase_ml_vision library to the Flutter application. The fourth step is creating a FirebaseVisionImage object from the image you chose from the gallery or captured with the camera. The fifth step is creating your detector object and passing the FirebaseVisionImage to it. And the sixth step is getting the result from the detector and showing it to the user. We have completed all of these steps, so now let's run our application and test it.

16. Flutter Barcode Scanning Application Testing:
I have now installed our barcode scanning application on a real device, and I have also downloaded some pictures of barcodes and QR codes from the internet, so let's test them. When I click in the center of the screen you can see that I can choose an image from the gallery. Let's first choose this barcode: you can see that the type of information this barcode contains is a product. When you add a case for product inside the switch statement, you can also get product-related information like the name of the product, the registration number, and so on. Now let's test another code — this QR code — and you can see that it contains a URL, the URL of a website. Similarly, let's choose another image with a QR code for an SMS, and you can see that the type of information this QR code contains is an SMS; we could also get the phone number and the message text using the block of code I showed you earlier. So you have seen that our barcode scanning application is working correctly.

18. Flutter Testing Barcode scanning live feed application:
Now we have installed our barcode scanning live feed application on a real device, so let's test it. When I click in the center of the screen, the live camera opens, and when I put this product in front of the camera you can see that the type of information its barcode contains is a product, so the barcode scanner is working correctly. Currently we are only printing the type of information the barcode contains, not the information itself; when you add a block for product inside the switch statement, you can get the product-related information and use it for your particular use case. But for now you can see that this live feed application is working correctly and it detects that it's a product.

19. Flutter Barcode Scanning Live Feed Application code:
Welcome to this lecture. In this lecture we will look at the code for our barcode scanning application that uses the live feed from the camera. We previously built a barcode scanning application, but that application used images taken with the camera or chosen from the gallery.
But now we're going to use the live camera footage for barcode scanning. I have already completed the code for this application, and I'm going to explain the completed code, because it is exactly the same as the image labeling live feed application that we built in the previous section. If you want more detail about this application's code, you should watch that section, because there I explain things thoroughly. For now, let's quickly look at the code. Our first step was importing the code: I have already opened this barcode scanning live feed application, and you can get it in the course resources, inside the barcode scanning section of the Firebase ML Kit folder — open that project inside Android Studio. I have already launched this application inside an emulator, and the UI is exactly the same as our previous application; when the user clicks in the center of the frame, the live feed from the camera will be visible there instead of an image chosen from the gallery or taken with the camera. After importing the starter project, our second step was connecting our Flutter application with the Firebase console: we need to create a Firebase application and then connect our Flutter application with it. For this application I have already done that part, so when you open the Android section, inside the app folder we have our google-services.json file, and similarly we have a file for the iOS section as well. If you don't know how to create a Firebase project and connect a Flutter application with it, watch the image labeling section of this course, because there I explain the process in full detail. After connecting our application with the Firebase console, the next step was adding the firebase_ml_vision library to this Flutter application: when you open the pubspec.yaml file you can see that inside the dependencies section we have the firebase_ml_vision library, and inside our main.dart file you can see that the package is imported. Now, in this lecture we are using the live feed from the camera, so we need a view that can show the live camera footage: previously we had an Image widget here, but now we have an AspectRatio widget. Let's look at the build method: we first set the background image, then we have our Stack, and in that Stack we first place the frame. Above that frame we previously had a FlatButton containing an image view; here we also have a FlatButton, but inside it, instead of an image view, we have this AspectRatio widget, and inside the AspectRatio widget we have a CameraPreview. That is why the live footage from the camera is displayed here. That widget comes from a package named camera, so when you open the pubspec.yaml file you will see that we have the dependency for the camera package; if you want more details about this package, watch the lecture for the image labeling live feed application, because there I explained it in detail. So that is the view we have.
Once the user clicks this FlatButton, its onPressed method is executed and we call this initializeCamera method. Inside initializeCamera we have the code for the live camera preview. We first initialize the controller, which we declared above as a CameraController, passing two parameters. The first parameter is the camera: we pass the first element of this cameras list, which we create above and which contains the device's cameras — you can see that we have a list of CameraDescription and we initialize it with availableCameras, so if the device has two cameras this list will have two elements, the element at index 0 being the back camera and the one at index 1 the front camera. So there we initialize the controller, and after that we create FirebaseVision.instance.barcodeDetector. Actually, we should remove that line from here and paste it inside our initState method instead, because previously we always initialized our detector inside initState. The next step is initializing the camera: after creating the controller object we call controller.initialize, and this method starts the live camera preview inside the frame. Once it has executed, we check whether the camera is mounted successfully, and if it is, we call controller.startImageStream. We call this method because we want to do barcode scanning, and for that we need the frames of the live footage so we can detect the barcodes present in each frame. controller.startImageStream gives us the frames, and for each one we check whether the system is busy, in other words whether the previous frame is still being processed. If it is not, we take the frame and call the doBarcodeScanning method; otherwise we skip the frame. Inside our doBarcodeScanning method we use the ScannerUtils class and call its detect method, which takes two parameters: the first is the frame and the second is a function. Here we pass barcodeDetector.detectInImage; for our image labeling live feed application we passed a processImage method, but now we pass detectInImage, because the parameter is specified as a function that takes a FirebaseVisionImage as its argument, and both processImage and detectInImage take a FirebaseVisionImage, so we can pass either of them. Once this method has executed, it returns a list of Barcode, so we check that the result is a List of Barcode, and then we iterate the list and show the result to the user.
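A rough sketch of that stream callback and the detection call, as described here. ScannerUtils is a helper class that ships with the project's starter code (based on the firebase_ml_vision example app), so its exact signature is assumed rather than quoted.

```dart
bool isBusy = false;
CameraImage? img;

void initializeCamera() {
  controller = CameraController(cameras[0], ResolutionPreset.medium);
  controller.initialize().then((_) {
    if (!mounted) return;
    controller.startImageStream((CameraImage image) {
      if (!isBusy) {
        isBusy = true;   // process one frame at a time
        img = image;
        doBarcodeScanning();
      }
    });
  });
}

Future<void> doBarcodeScanning() async {
  // ScannerUtils.detect converts the CameraImage into a FirebaseVisionImage
  // and hands it to whichever detector function we pass in.
  final dynamic results = await ScannerUtils.detect(
    image: img!,
    detectInImage: barcodeDetector.detectInImage,
  );

  if (results is List<Barcode>) {
    String newResult = '';
    for (final Barcode barcode in results) {
      newResult += barcode.valueType.toString() + '\n';
    }
    setState(() => result = newResult);
  }
  isBusy = false; // ready for the next frame
}
```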
First, you can see that we take the barcodes from this list one by one, and for each barcode we get the value type. Then we append to the result variable using valueType.toString(), so we can print the type of barcode present in the image. Based on the type of barcode you can perform specific operations, as we saw in our previous application where we scanned barcodes from still images. Once we have appended to the result variable, the processing of that particular frame is complete, so processing of the next frame should begin: we set isBusy to false so that the next frame is passed to this doBarcodeScanning method — as you have seen, doBarcodeScanning is only executed when isBusy is false. So that is the code for our barcode scanning live feed application. Now I'm going to run it so we can test it.

20. Flutter Text Recognition Section Introduction:
Hello and welcome to the text recognition section of this course. In this section we will look at the text recognition feature of Firebase ML Kit. With the Firebase ML Kit text recognition API you can recognize text in any Latin-based language. Text recognition can automate data entry for credit cards, receipts and business cards, and apart from that it can be used for a variety of different purposes. So what exactly are we going to build in this section? We're going to build a Flutter application in which the user chooses an image from the gallery or captures an image of a document with the camera; that image is then passed to the text recognition model of Firebase ML Kit, and the recognized text returned by the model is shown to the user. To build this application we follow a six-step mechanism. First we import the starter project from GitHub. After that we create our Firebase project and connect our Flutter application with it. Then we use a library named firebase_ml_vision inside our Flutter project. After that we create our Firebase text recognizer object, with which we will perform text recognition. Then we pass the image of the document, taken from the gallery or captured with the camera, to our text recognition model, and finally the recognized text returned by the model is shown to the user. This section involves building a very useful application, so let's begin.

21. Importing Starter code for Flutter Text Recognition:
Welcome to this lecture. In this lecture we will build our text recognition application using Firebase ML Kit and Flutter, so let's begin. The first step is importing the starter code. As you know, we import the code from GitHub: in the project URLs file shared with you, you will find the link to clone the project, or you can type the URL manually. There you click the Code button and copy the URL to clone the project. Then open Android Studio, click Get from Version Control, paste the link, and make sure the version control is set to Git. Choose the location where you want to clone the project, click Clone, and the project will be cloned in a moment. Click Yes to confirm that we want to create a new Android Studio project, and then you need to choose Gradle.
Simply select Gradle and click Finish, and the project will be built in a moment. Now the project has opened successfully, but we are getting some errors, so open the main.dart file inside the lib folder so we can remove them: click on Get dependencies, the libraries will be downloaded, and the errors will be gone in a moment. Now all the errors are gone, so let's first run our application and see what the starter application contains. The application is now installed inside the emulator, and this is its basic UI. You can see that we have a notepad here, because we are building a text recognition application — I made this GUI in a fun way. First the user chooses an image from the gallery or captures one with the camera by clicking in the center of this notepad. When you click on it, the gallery opens so you can choose an image, ideally the image of a document; as I don't have one right now, I'm going to choose this image, and you can see that the chosen image is displayed here. Once you choose a document image and we complete the coding of this application, the text in that document will be recognized and written on this notepad. This GUI is quite nice, isn't it? Hopefully you like it. You can see that the flow of this application is almost the same as our image labeling and barcode scanning applications: in both of those, the user chose an image from the gallery or captured it with the camera, and the predicted result was shown in a Text widget. It's the same here: we choose an image that is displayed here, and our result is displayed on the notepad. That is the basic flow of the application, which is almost the same as our previous applications.

22. Writing Flutter Text Recognition Code:
Welcome to this lecture. In the previous lecture we imported the code for our text recognition application and you saw how the starter project works. Now we proceed to the next step, which is connecting the Firebase project with this Flutter application. But before that, let's quickly go through the code in our main.dart file. The code is almost the same as our previous applications, and if you want a detailed description of it, watch the second lecture of the image labeling section. Inside the build method we first set the background image — the image with the wall and the telephone. Below that we have another container where we set notepad.png, which is the background of our Text widget, because the predicted text will be displayed there. You can see that we have a SingleChildScrollView, and inside it a Text widget; the recognized text will be shown in that Text widget, and we put it inside a SingleChildScrollView because if the text is larger than the notepad image we want to enable scrolling. When you scroll down you will see that we have a Stack similar to the one in our image labeling application.
In that Stack we first set this clipboard image — in the image labeling section we had a wall frame image here — and below that we show the image captured with the camera or chosen from the gallery. When you scroll further down, you will see that we don't have a Text widget there anymore, because our text will be displayed inside the notepad. So that's the basic UI of the application. Now let's move on to step number two, which is connecting the app with the Firebase console. For the starter project I have already done it: when you open the Android section, inside the app folder you can see that we have our google-services.json file. If you want to know how to create a Firebase project and connect a Flutter application with it, watch the image labeling section of this course, because there I describe the process in detail. So step number two is complete. Now let's move forward by adding the firebase_ml_vision library to this Flutter project. Open your browser, go to pub.dev and search for firebase_ml_vision, as we did for our previous applications, and choose it. Now we are on the official page of the firebase_ml_vision library; click on Installing to get the installation instructions. Copy the dependency line — we need to paste it inside our pubspec.yaml file, as we did previously, so if you already know the process you can skip this part. I'm going to open the Flutter application and its pubspec.yaml file, and just below our image_picker dependency I'm going to add it. Click Pub get so that the library is downloaded; it will be done in a moment. Now let's follow the next instruction: click on the Readme. There you can see that we need to add the dependency inside our app-level build.gradle file, so copy that line and move back to the Flutter project. In the Android section open the app folder, where you will find the build.gradle file; open it and paste the copied line in the dependencies section. I have already pasted it for this starter project, so you can skip that step as well. Moving on through the documentation page, when you scroll further down you need to add this meta-data tag inside your AndroidManifest.xml file, so copy it and paste it into your manifest. I have already pasted it, so let me open the manifest and show you: there you can see this meta-data tag, and I specified the value text, because I want the text recognition model to be downloaded whenever the application is installed from the Play Store or the App Store. Next come the instructions for iOS; if you have a Mac you can follow them — I don't have one right now, so I'm leaving this part for now, and the lecture related to it will be uploaded. Now we follow the instructions to do the actual work, which is text recognition. The first step is creating a FirebaseVisionImage, so copy that first line. Open the main.dart file, and below the imageFromGallery method you can see that we have a doTextRecognition method, so paste the line there, because that is where we will write our text recognition code. You can see that we get an error: press Alt+Enter and add the import for the library, and the error disappears. Here you need to specify the image file from which you are creating this FirebaseVisionImage, which in our case is _image. And if you have any difficulty following these instructions, I strongly recommend that you watch the image labeling section of this course, because there I explained everything thoroughly.
So the first step is creating a Firebase vision image and you can do it with the help of this line. So just copy this first line. So open your main dot dot file. And here below this image from Gary method, you can see that we have do text recognition method. So pays the line in here because in that method we are going to write our text recognition code. There you can see that we are getting an error. So press alt, enter and you need to add the import farther library. And now the error will be disappeared. And here you need to specify the image file from which you are creating this Firebase we image. So in our case the image file is underscore image. And if you are getting any difficulty to follow those instructions, must recommend that you want the immediate neighboring section of this course. Because in that section I thoroughly explained everything. So after creating a Firebase via an image, now we need to create our Firebase text recognition detector object. So move back to the instruction page and there you can see that we are creating our text recognizer object here. So just copy this fourth line, paste this line inside our Init state method because we want to create a vertex recognizer once the application is started running. So just paste this line here. But now we need to declare this traffic neither outside this init method, so it will be accessible in our do text recognition method. I'm going to paste this line here and remove final and add a semicolon at the end. Similarly, I'm going to remove this text recognizer forum here. So now we successfully initialized over Texas because neither. Now the next step is passing over five bys V and image to this text recognizer. And in order to do it, we will follow the next instruction. So there you can see that we should call detecting image or processing image method as we are using the text recognizer. So we need to copy the fourth line. And there you can see that we are using text organizer dot process image method, and we are passing over a region image and it is returning an object of type text, which is actually the detector detects. I'm going to copy this line and we're gonna paste it inside overdue tax recognition method. So just below this file-based via an image, I'm gonna paste it. And there you can see that now we are getting our detected text in the form of Wee1 texts variable. And now we need to show that text to the user. As you can see that we have a similar result variable that we have for every MY labeling section and far our barcode scanning section, I'm going to use this result variable to show that they are inside of a do toxic admission method, we will assign the value stored inside this V and texts to our result variable. So result is equal to v and text dot text. And I'm gonna add a semicolon here. So that is the most simplest form to get the detected texts. You can also get the recognized texts in the form of paragraph, lines and words. And you can find that code here. When you will open the instruction page and scroll down, you will see that we have this block of code. And there you can see that we are calling this region test.txt and it is returning us the simple text, but you can also iterate this V and X variables. So we in textblock will return you the text blocks. And for each block you can get the bounding ball so you can draw a rectangle around each detected block. Similarly, you can get the text for each block and the recognized languages for each block. 
Then, for each block you can get the lines, and for each line you can get the elements, which are the individual words. So you can simply get the text using visionText.text, or you can get it in the form of blocks, lines and elements — it's up to you which approach you want to use, and here I'm going to use the simple approach. Now we need to update our result variable: I'm going to call the setState method and put the result assignment there, so that wherever the result variable is used gets updated — and we use this result variable inside our Text widget, which is displayed here inside this notepad. So now the coding of our application is almost complete; let's run the application and test it.

24. Testing Flutter Text Recognition Application:
Now I have installed our text recognition application on a real device, and this is its basic UI. When you click in the center of this clipboard, the gallery opens, and I'm going to choose an image from it. Let's first choose this image, and you can see that the recognized text is displayed there — we have "Chapter One: Down the Rabbit Hole". You can read the text and judge the accuracy of our text recognition model. Now let's try another image with a lot more text: I'm going to choose it, and you can see that we have the recognized text for that image, and you can scroll on the notepad, which is quite a nice effect, isn't it? So now you have seen our text recognition model working; you can test it for your particular use cases and check its accuracy.

25. Flutter Face Detection Section Application:
Welcome to the face detection section of this course. In this section we will look at the face detection feature of Firebase ML Kit. With the Firebase ML Kit face detection API,
you can detect faces in an image, identify key facial features, and get the expressions of the detected faces. So apart from detecting faces, you can also detect whether a person is smiling or not, whether their eyes are open or closed, and the positions of their facial landmarks. So what will we build in this section? First, we will build a Flutter application in which the user chooses an image from the gallery or captures it with the camera; that image is passed to the face detection model of Firebase ML Kit, and based on the results returned by the model we draw rectangles around the detected faces and also show their facial expressions. Then we will build a Flutter application in which we use the live camera footage: we take the live camera footage frame by frame and pass it to our face detector model, and based on the results returned by the model we draw rectangles around the detected faces in real time. To build these applications we follow a six-step mechanism. Our first step will be importing the starter code from GitHub. After that we create a Firebase project and connect our Flutter application with it. Then we add a package named firebase_ml_vision to our Flutter project. After that we create a face detector object using that package. Then we pass the images taken from the gallery or the camera, or the frames of the live camera footage, to our face detection model. Finally, based on the results returned by the model, we draw rectangles around the detected faces. This section involves building two very exciting applications, so let's begin.

26. Flutter Face Detection Application Flow:
Hello and welcome to this lecture. In this lecture we will look at the face detection feature of Firebase ML Kit. Previously we built the image labeling, barcode scanning and text recognition applications using Flutter and Firebase ML Kit, so by now you are quite familiar with the process of using Firebase ML Kit inside your Flutter application. For this section we will see how the face detection feature of Firebase ML Kit works. I have already created this application, and I am going to explain its code in this section. First, let's look at the flow of the application. I have already launched it inside an emulator, and this is its basic UI, which is almost the same as our other applications. You can see that we have this wall image, and above it a block where the image chosen from the gallery or captured with the camera will be shown. The user clicks in the center of this block and the gallery opens so they can choose an image; similarly, when they long-press on this icon, the camera opens so they can capture an image. Once the image is chosen or captured, it is passed to our face detector model, and it detects all the faces present in that image. When the faces are detected, we draw rectangles around them — that is the additional part we are doing in this section.

27. Flutter Face Detection code:
Now let's see how we achieve this. Our first step was importing the starter code. I have already opened this application, which is MLKit Face Detection, and you can get it inside your course resources folder.
So inside the Firebase ML Kit folder, open the face detection section, and there you will find this application, ML Kit Face Detection; open this application inside your Android Studio. There you can see that I have my main.dart file opened. After importing the code, the next step was connecting your application with a Firebase project. For this application I have already done that: you can see that inside the Android section, in the app folder, we have this google-services.json file. Similarly, opening the app-level build.gradle file, we have the dependencies and the plugin, and the plugin is com.google.gms.google-services. We also have this dependency, which is the Firebase ML Vision face model. Previously we had the image labeling dependency, but for the face detection model you need to add this one. When I open the documentation page, you can see that there are two dependencies: one is the ML Vision image label model, and the other one is the ML Vision face model. Now we are using the face model dependency, so just copy that line and paste it inside your app-level build.gradle file; there you can see that I have pasted it. Similarly, in our project-level build.gradle file, you can see that we have the classpath set for Google services. So now you can see that we have connected this Flutter application with the Firebase project, and if you want more detail on this process, you can watch the image labeling section of this course. After connecting your app with the Firebase console, the next step was adding the firebase_ml_vision dependency inside your application. When you open your pubspec.yaml file, you will see that inside the dependencies section we have this firebase_ml_vision dependency; so we added this library to our project. After adding this dependency, the fourth step was creating our FirebaseVisionImage object from the image that the user has chosen from the gallery or captured using the camera. Here you can see that the UI of our application is almost the same as our previous applications. We have similar methods, imageFromCamera and imageFromGallery, and they are called once the user taps in the center of this block or long-presses in the center of the block. Inside our imageFromCamera method the user captures an image using the camera, and then we call this doFaceDetection method, where we wrote the code for our face detection. Similarly, inside our imageFromGallery method we choose an image from the gallery and then call this doFaceDetection method. So now let's look at this method. There you can see that I have my doFaceDetection method, and the first step is creating our FirebaseVisionImage. We are using the FirebaseVisionImage.fromFile function and passing the image file chosen from the gallery or captured using the camera, and we get this FirebaseVisionImage object, which we declared above; there you can see our FirebaseVisionImage object declared. Similarly, you can see that we have our face detector variable declared, and we are initializing this face detector inside our initState method: faceDetector = FirebaseVision.instance.faceDetector(...).
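Putting those pieces together, here is a minimal sketch of the detector setup and the detection call as described in this lecture, using the firebase_ml_vision package and written in the pre-null-safety Dart style this course uses. Variable and method names are assumptions rather than the exact course code, and the options passed to the detector are explained next.

```dart
import 'dart:io';

import 'package:firebase_ml_vision/firebase_ml_vision.dart';

File _image;           // picked from the gallery or captured with the camera
List<Face> faces = []; // detection results
String result = '';
FaceDetector faceDetector;

void initDetector() {
  // Normally done in initState(); the options are explained below.
  faceDetector = FirebaseVision.instance.faceDetector(FaceDetectorOptions(
    enableClassification: true,  // smile / eye-open probabilities
    minFaceSize: 0.1,            // assumed value: accept relatively small faces
    mode: FaceDetectorMode.fast, // or FaceDetectorMode.accurate
  ));
}

Future<void> doFaceDetection() async {
  // Wrap the picked file so ML Kit can read it.
  final FirebaseVisionImage visionImage = FirebaseVisionImage.fromFile(_image);

  // Run the detector; it returns one Face object per detected face.
  faces = await faceDetector.processImage(visionImage);

  if (faces.isNotEmpty) {
    final double smileProb = faces[0].smilingProbability ?? 0.0;
    result = smileProb > 0.5 ? 'Smiling' : 'Serious';
  }
  // In the app this runs inside a State class and calls setState to refresh the UI.
}
```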
But now, inside this faceDetector call, we are passing the face detector options, and these FaceDetectorOptions are for enabling and disabling some features of the Firebase ML Kit face detection model. The first parameter is enableClassification. Once you pass true for enableClassification, then apart from face detection you can also get whether the person's face is smiling or serious, and whether the person's eyes are closed or open. So by enabling this feature you can get the smile probability and the person's eye-open probabilities. Similarly, you can specify minFaceSize here, and the value is between 0 and 1. If you specify a small value, then you will be able to detect small faces in images along with the large faces; if you specify a large value, like 0.6, then only large faces will be detected. The third parameter is the mode. There you can see that we set the mode to FaceDetectorMode.fast; we chose the fast mode because we want to get the results quickly. You can specify accurate here instead, and in that case you will get the result in a bit more time, but it will be more accurate. So these are the options with which you can customize the face detection feature of Firebase ML Kit. After initializing our detector, let's move back to our doFaceDetection method. There we first created our FirebaseVisionImage. Then we pass this vision image to our face detector, faceDetector.processImage, and we get back a list of faces. There you can see that we have this faces variable, which we declared above; you can see that faces is actually a List of Face, the Firebase Face object. Now when you scroll down, you can see that after getting the faces, we are calling this drawRectangleAroundFaces method. That's the additional part here, since for this application we are drawing rectangles around the detected faces; that method does exactly that, and we will look at it in a moment. After that you can see this if condition: if the length of the faces list is greater than 0, which means the detector detected at least one face in the image, then we get the smiling probability for the first face. We take the element at index 0 of faces and get its smiling probability, and if that probability is greater than 0.5, we set this result variable to "Smiling"; otherwise we set it to "Serious". That is how, along with detecting faces and drawing rectangles around them, we get the expression for the first face in the image: if that face is smiling, we will have the text "Smiling" printed here, and if it is serious, we will have the text "Serious" printed here. So now let's look at our drawRectangleAroundFaces method.

28. Flutter drawing rectangles around detected faces: Now let's look at this drawRectangleAroundFaces method. That is the method, and here you can see that we are firstly reading the image file as bytes using the _image.readAsBytes method and storing the result inside this image variable. The type of this variable is var; you can see that we have var image declared here.
This variable can store any type of data, and it is storing the image in byte format. Then we call the decodeImageFromList method, which decodes the image, and again we store the result inside this image variable. After that we call the setState method, which will update the usage of this image variable, the faces list and the result variable. Now let's look at our GUI, where these changes take place. Inside our build method we are firstly setting the background image, which is actually a wall; just like our previous application, we are setting it there. After that we have a stack, but now the stack doesn't contain a frame image like the previous applications; we have this FlatButton. In our previous applications this FlatButton contained an Image widget, which showed the image captured using the camera or chosen from the gallery. But now this FlatButton contains another widget named CustomPaint, and that CustomPaint is there because it will be used to draw rectangles around the detected faces. After detecting the faces, we want to draw rectangles around them, and in order to do that we need a canvas: on that canvas we will first draw the image, and then, above the image, we will draw rectangles around the faces. In order to get the canvas, we use this CustomPaint widget. Inside this CustomPaint widget you can see that we specified the painter, and the painter is FacePainter. That is a class we created, and we will look at it in a moment. This FacePainter takes two parameters: the first one is the list of faces and the second one is the image variable. So this FacePainter will take the image and the list of faces, draw the image, and then draw rectangles around the detected faces on that image. So now let's look at this FacePainter class. When you scroll further down, you can see that we have this FacePainter class created here. This class extends CustomPainter, and you can see that it has a constructor with two parameters: the first one is the list of faces and the second one is a var variable, imageFile, which now contains the image in its decoded binary form. That is the constructor we call above, where we pass the list of faces and the image file. After that you can see this override method, paint. Once you extend CustomPainter, you need to implement a method named paint, and this method takes two parameters: the first one is the Canvas and the second one is the Size. This canvas will be used to draw the image and then the rectangles around the faces. Here you can see that we check whether the image file is not null; then, using this canvas, we draw the image with canvas.drawImage, passing the decoded image file. So this image will be drawn on the canvas inside this CustomPaint widget. After the image, the second parameter is the offset, which determines the position of the image. As we specified the offset as zero, the image will start where this CustomPaint widget starts. As you can see inside our GUI, we have this CustomPaint widget, and this widget is starting from here, so the image will also be drawn starting from there. And now let's move back to the class.
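Since this lecture keeps referring to that class, here is a minimal sketch of what a painter like this can look like, based on the description above. The decodeImageFromList step is shown as well; names and types are assumptions rather than the exact course code.

```dart
import 'dart:io';
import 'dart:ui' as ui;

import 'package:firebase_ml_vision/firebase_ml_vision.dart';
import 'package:flutter/material.dart';

// Decode the picked file into a ui.Image so it can be drawn on a canvas.
Future<ui.Image> decodePickedImage(File imageFile) async {
  final bytes = await imageFile.readAsBytes();
  return decodeImageFromList(bytes);
}

class FacePainter extends CustomPainter {
  FacePainter(this.facesList, this.imageFile);

  final List<Face> facesList; // faces returned by the detector
  final ui.Image imageFile;   // decoded photo (kept as `var` in the lecture)

  @override
  void paint(Canvas canvas, Size size) {
    // Draw the photo first, starting at the top-left of the CustomPaint.
    if (imageFile != null) {
      canvas.drawImage(imageFile, Offset.zero, Paint());
    }

    // Red stroke used for the boxes around each detected face.
    final Paint p = Paint()
      ..color = Colors.red
      ..style = PaintingStyle.stroke
      ..strokeWidth = 2;

    for (final Face face in facesList) {
      canvas.drawRect(face.boundingBox, p);
    }
  }

  // Required by CustomPainter; repaint whenever a new image or result arrives.
  @override
  bool shouldRepaint(CustomPainter oldDelegate) => true;
}
```

In the layout this would then be plugged in roughly as CustomPaint(painter: FacePainter(faces, image)), as described above.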
There you can see that the third parameter is a Paint object, and with it you can customize how the image is painted, but for now we pass the default constructor. After that, you can see that we create another Paint object, Paint p = Paint(), and we set the color for this paint to red, the style to stroke, and finally the stroke width to 2. That Paint object will be used to draw the rectangles, so we are setting the properties of the rectangles using it. Then we check that this variable is not null, and we take each face in this list of faces; for each face we draw a rectangle using face.boundingBox. As you can see, I named the variable rectangle, and it comes from calling face.boundingBox; this is what draws the rectangle around the detected face. You can see the canvas.drawRect method, where we pass the rectangle object and the Paint object, so the Paint object customizes these rectangles: each rectangle will be red and will only have a stroke of width 2. So that is our FacePainter: it takes the list of faces, draws the image on the canvas, and then draws a rectangle around each detected face. Now let's move back to our GUI. There you can see that we assign a painter for this CustomPaint widget, and the painter is this FacePainter class; it will paint the image and the rectangles drawn around the faces. When you scroll down, you will see our Text widget down here, just like in our previous applications, and inside this Text widget we show the value contained in the result variable. Above, you have seen that we store the expression for the first face there: if the face is smiling we store "Smiling", and if it is serious we store "Serious" inside this result variable, and that will be displayed just below this block. So now you have seen the UI of our face detector application. Now let's run our application and test it.

30. Section introduction: Hello and welcome to the TensorFlow Lite models section of this course. TensorFlow Lite is a set of tools to help developers run machine learning models on mobile and other devices. We have machine learning models trained on a variety of data sets, but to use those models on mobile and other small devices, they need to be converted into the TFLite format using TensorFlow Lite. So in this section we will learn to use some of the popular pre-trained TFLite models inside Flutter applications for some common use cases like image classification, object detection, pose estimation and image segmentation. Firstly, we will build our image classification application to identify different things present in images, using the ImageNet model and the tflite plugin. We're going to build two different applications: in the first application we will use images taken from the camera or chosen from the gallery, and in the second application we will use the live camera footage for image classification. After that, we will learn about object detection in Flutter using two famous models, MobileNet and YOLO. Again, we will build two different applications in which we will use images from the gallery or camera, or the frames of the live camera footage, for object detection.
After that, we will explore the PoseNet model for human pose estimation and activity detection in Flutter. We will build Flutter applications using images taken from the gallery or camera, or the frames of the live camera footage, for human pose estimation. Then we will learn about image segmentation in Flutter using the DeepLab model, and again we're going to build two different applications to explore the features of that model. All of these models can be used for a variety of different applications, and we're going to learn to use them in Flutter. Hopefully you are quite excited to learn something unique and valuable. So let's begin.

31. Importing Starter code for Flutter Image classification application: Hello and welcome to this lecture. In this lecture we will start building our image classification application using the ImageNet model in Flutter. So let's begin. The first step is importing the starter code from GitHub. You can manually type this URL or get it from the project URLs file. Then you need to click on this Code section and copy the repository link, so just copy it, and after that open your Android Studio. Here you need to click on Get from Version Control, paste the link that you copied, and make sure the version control is set to Git here. Now click on this Clone button. It will ask whether to create an Android Studio project, so just click Yes here; then you need to choose Gradle, so simply select Gradle, click Finish, and your project will be imported in a moment. Now our starter project is cloned successfully, but we can see several errors. Just expand the project section, go inside this Flutter image classification project and open the lib folder. Here you have a main.dart file, so just open it. Now you can see that we have several errors, so just click on this Get dependencies button and all the libraries will be downloaded and the errors will be gone. And now you can see that all the errors are gone. So that is our starter project. Now let's firstly run our starter application and see what it contains; I'm going to launch it inside an emulator. Now our application is successfully installed inside the emulator, and that is the basic UI of our application. Firstly we have this background image, which is displaying this wall, and on that wall we have a frame hanging, and in the center of the frame we have a camera icon. That's the basic UI. When the user clicks on this frame, you will see that the gallery opens so that he can choose an image from the gallery. Now you can see that the gallery is open, so the user can choose an image; for example, if I choose this first image, you will see that it is displayed inside the frame, and here you can see that we have that image. Similarly, when the user long-presses in the center of the frame, the camera opens so that he can capture an image using the camera. Now you can see that the emulator camera is open, and that is the default view for the emulator camera. When you click the capture button, the image will be captured and displayed inside our frame, and now you can see that we have the image we just captured. So that is our starter application. But now in this application we're going to write the code so that whenever the user chooses an image from the gallery or captures it using the camera,
that image will be passed to our ImageNet model, and we will get the detected labels returned by the model, and those labels will be displayed here, printed on that wall. So now you have seen the working of the starter application. In the next lecture we're going to go through the code for this starter application, and then we will add the code related to the ImageNet model and TensorFlow Lite.

32. Starter code explanation for Flutter Image classification: Welcome to this lecture. In the previous lecture we imported the starter code and you have seen the working of the starter application. Now in this lecture we will quickly go through the starter code. As you have seen, in our application we have a layout with a frame and a background image. When the user clicks in the center of the frame, the gallery opens so he can choose an image from the gallery. Similarly, when he long-presses in the center of the frame, the camera opens so that he can capture an image using the camera. So now let's look at the code where this happens. I'm going to move to our build method, where we have the GUI. Inside our build method we firstly have this MaterialApp, inside that we have a Scaffold, and inside this Scaffold we have a Container. Firstly, we are setting the background image for that container, which is the background image of our application. There you can see that we have this images folder, and when I open this image file, you can see that it is the same image that we have as our background. Now when you scroll down, you will see that we have a Column widget, and we have that Column widget because in our application we firstly want to show the frame, which will contain the image that the user chooses, and below that we have a text widget in which the predicted labels will be shown; we have a Column for that purpose. The first element of the column is a container, and that container contains a Stack. We are using the Stack because we want to show the frame, and above that frame we want to show the image that the user chooses from the gallery or captures using the camera. So firstly we place the frame image in the stack, and then above that frame we show the image that the user chooses. In order to show it, we have another Image widget; there you can see that widget here for this image file. You can see that we wrapped it inside a FlatButton, and that is because we want to open the gallery when the user clicks on that image, or in the center of the frame. Similarly, when the user long-presses, imageFromCamera will be executed so he can capture an image using the camera; we're going to look at both of these methods in detail later. So the second element of the stack is this FlatButton, and inside that we have our Image widget, which will show the image that the user chooses from the gallery or captures using the camera. But there you can see that we have a condition: if this _image is not null, then we show the image that the user has chosen; otherwise we show this container, which contains the default camera icon that was visible initially when we launched our application. So what is this _image variable? When you move up, you will see that this image variable is declared here, and it is actually a File.
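For reference, the picking flow that the next few paragraphs walk through boils down to something like this sketch, using the image_picker API as described in this lecture; the method names and the classification call are taken from the description, so treat them as assumptions.

```dart
import 'dart:io';

import 'package:image_picker/image_picker.dart';

File _image;                                   // shown inside the frame when not null
final ImagePicker imagePicker = ImagePicker(); // initialized in initState in the app

Future<void> imageFromCamera() async {
  final PickedFile pickedFile =
      await imagePicker.getImage(source: ImageSource.camera);
  if (pickedFile == null) return;              // user cancelled
  _image = File(pickedFile.path);
  // In the app this assignment happens inside setState so the frame rebuilds,
  // and then doImageClassification() is called on the picked file.
}

Future<void> imageFromGallery() async {
  final PickedFile pickedFile =
      await imagePicker.getImage(source: ImageSource.gallery);
  if (pickedFile == null) return;
  _image = File(pickedFile.path);
}
```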
So this _image variable will store the image file that the user chooses from the gallery or captures using the camera. Now let's move back to the UI. After that stack we have the Text widget at the bottom, and in this text widget we will show the resulting label predicted by the model. That's the basic UI of our application. Now let's look at our imageFromCamera and imageFromGallery methods. As we are choosing an image from the gallery or capturing it using the camera, we are using a package named image_picker. When you open your pubspec.yaml file, you will see that inside the dependencies section we have this image_picker dependency; you need to add it if you are building your application from scratch, and I have added it here. Now let's move back to the main.dart file. There you can see that we declared this ImagePicker variable with the name imagePicker, we initialize it inside our initState method, and below, inside our imageFromCamera method, we use this variable to capture the image using the camera. There you can see imagePicker.getImage, and we specify the source as camera; it returns an object of type PickedFile, and using pickedFile.path we create a File and store that image file inside our _image variable. Once the image is chosen, we notify wherever this _image variable is used; once it is notified, this condition becomes true because _image is no longer null, and our image becomes visible inside the frame. Hopefully you get the idea of how the image is displayed. After that, you can see that we are calling this doImageClassification method. Once the user chooses an image from the gallery or captures it using the camera, we want to perform image classification, so we wrote this doImageClassification method here, and we will add the code inside it later. This method will be called, and inside it we will pass the image to our model, which will return the predicted labels. Hopefully you get the idea of the flow of our application. Similarly, inside our imageFromGallery method everything is the same, but the source is ImageSource.gallery: the gallery opens, we choose an image, it is passed to our model and we get the result. That is the basic UI of our application. One more thing remains, and that is this loadModelFiles method. Inside this method we will write the code to load our model, because in order to use the model we first need to load it; we will load it inside this method, and we are calling it inside our initState method. Hopefully you got the idea of the initial code, and if you want a more detailed description of this code, you can watch the image labeling lecture of the Firebase ML Kit section of this course. In the next lecture we're going to add the plugin named tflite inside our Flutter project and write the code. So see you in the next lecture.

33. Testing flutter image classification application: So now let's test our application. I'm going to launch it inside an emulator. First, I'm going to launch the emulator, and it is using Android version R.
And now you will see that the emulator opens. Now the emulator is visible here, so I'm going to press this green button to run my application, and you will see that after some time the application will be installed inside the emulator and we can test it. The application is now installed inside the emulator, and that's its basic UI. When you click in the center of the frame, you know that the gallery opens, and as we have only one image, I'm going to choose that image and see what our model thinks it is. You can see the label that our model predicted, which is the correct answer, and the confidence score is 100%. So our model's performance is quite accurate; as the model is classifying among 1000 classes, this result is quite impressive. Similarly, let's try the camera: when we long-press in the center of the screen, you can see that the camera opens. That is the default view for our emulator camera. We're going to capture this image and see what our model detects in it. Now you can see that our model thinks it's a studio couch, which is quite relevant in this case; because this image is not very clear, these results are quite acceptable. Also note that we only set the number of results to five and we set the threshold fairly high; that's why we're only getting one label. If you set the number of results to, let's say, 15 and set the threshold to 0.1, then you will get more labels. But if you test this application on a real device with more relevant things, you will get more accurate results.

34. Importing Flutter live feed Image classification application starter code: Welcome to this lecture. In the previous lectures you have seen that we successfully created our image classification example using TensorFlow Lite and Flutter. Basically, we took the image from the gallery or captured it using the camera, then passed it to our image classification model using TensorFlow Lite. But now we're going to use the live footage from the camera for image classification. So let's begin. The first step is importing the starter code. You can manually type this URL or take it from the project URLs file. Now we need to click on this Code section, copy this link, and use it to clone the repository. I'm going to copy the link, then open Android Studio and go to Get from Version Control, and there you need to paste the link, so I'm going to paste it here. Then you need to choose your version control, so I'm going to choose Git here, and click this Clone button. After that it will ask whether you want to create a new project, so we press Yes; then you need to choose Gradle, so simply choose Gradle and click Finish, and your project will be cloned in a moment. Now the project is open inside Android Studio, and we have a couple of errors. To remove them, just go inside the project section, expand this Flutter image classification live feed project, and inside the lib folder open our main.dart file. Then you need to click on Get dependencies so that all the libraries are downloaded, and these errors will be gone in a moment. And now you can see that all the errors are gone.
So now let's firstly run our starter project and see what the starter application contains. I'm going to click on this Run button to run my application inside an emulator. Now the application is successfully installed inside the emulator device. That's the basic UI of our application, and it is almost the same as our previous application; the only change is that now we don't have a frame here, we have an LED screen, and in the center we have this video icon. When you click in the center of the screen, you will see that the live camera footage is displayed there. You need to give the permission, and then you will see that the default camera preview for this emulator becomes visible. Now you can see that the default camera preview is displayed here. When you run this application on a real device, you will have the live footage from the camera displayed here. Now in this application we're going to use this live footage: basically, we're going to take this footage, pass it frame by frame to our model and get the detected results, and the results will be shown here, similar to the previous application. In the previous application we chose images from the gallery or captured them using the camera and passed those images to our image classification model, but now we're going to take the live footage from the camera, pass it to our model frame by frame, get the results, and show them to the user in real time. So now let's start working on this application.

35. Starter code explanation of Flutter Live feed Image classification application: Welcome to this lecture. In the previous lecture we imported the starter code and you have seen the working of the application. Now in this lecture we will quickly go through the starter code. In the previous application we chose the image from the gallery or captured it using the camera with the help of a package named image_picker. But now we are getting the live camera feed, and we are achieving that with the help of a package named camera. When you open your pubspec.yaml file, you will see that inside the dependencies section we don't have the image_picker dependency, but we have this camera dependency, and using this package we get the live footage from the camera. Now let's move back to our main.dart file. The UI of our application is almost exactly the same as our previous application; the only change is the live camera preview. There you can see that we are firstly setting the background image. After that we have our stack, and in that stack we are firstly setting this LED image, whereas previously we set the frame image. Now, above that LED, we are not setting an image but the live camera footage, with the help of a widget named AspectRatio. There you can see that we have this AspectRatio widget, we set its child to our CameraPreview, and we pass this controller variable here; we're going to look at it in a moment. Similarly, you can see that we enclosed it inside a FlatButton so that we can set an onPressed listener; inside this onPressed we are calling this initCamera method. Similarly, at the bottom we have a text widget, similar to our previous application, where the predicted label will be shown.
So now let's go to the top, where we have the code for initializing the camera preview, and we will look at the controller there. At the top we firstly have our initState method, and we are calling our loadModel method here, just like our previous application; for now this method is empty. After that we have our initCamera method, which we call when the user clicks in the center of this LED. In this method we are firstly initializing this camera controller; we declared this variable here, a CameraController with the name controller, and we are initializing it here. We firstly pass the camera that we want to open, and we are getting this cameras list at the top. At the top of the application we have a list of CameraDescription, and we initialize it inside our main method: basically, we call this availableCameras method, and it returns a list of the device cameras. If the device has two cameras, then this cameras list will contain two cameras, with the back camera at index 0 and the front camera at index 1. Now we have this list and we are using it inside our initCamera method; we are passing the back camera because we want to open it. Similarly, we are specifying the resolution as medium. We then call this controller.initialize method, and this is what actually shows the live camera footage here. So once the user clicks in the center of the LED, this method is called, we initialize the controller, and then controller.initialize shows the live camera preview. After that you can see that we have this then method, and inside it we are firstly checking whether the widget is mounted successfully or not: if it is mounted, then it is okay; otherwise we return from here. We do that because we want to take the footage frame by frame, and if the camera is not successfully initialized, we cannot take the footage. If it is successfully initialized, we take the footage frame by frame using this method, controller.startImageStream, and there you can see that we have this image variable, which is actually the frame; this controller.startImageStream returns the camera footage frame by frame. Then you can see that we have a line here; firstly I'm going to format this line so that it is easier for us to understand. Now you can see that inside startImageStream we are getting the camera footage frame by frame, so this image will contain the frame, and we firstly check that the system is not busy, using this variable. That is because we want the footage from the camera to be processed frame by frame: if we take the first frame, then only once the processing of that frame is completed should the next frame be passed to our model. In order to achieve that, we are using this isBusy variable, and we initialize it to false; at the beginning the system is obviously not busy. If it is not busy, then we first set it to true, meaning the system is now busy, then we store this frame inside this img variable, whose type is CameraImage, and after that we call the image labeling method.
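Putting the camera side of this lecture together, a minimal sketch of the setup looks roughly like the following. It is a self-contained illustration using the camera package; the widget structure and method names follow the description above but are assumptions, and the frame-handling method is left as a stub for the next lecture.

```dart
import 'package:camera/camera.dart';
import 'package:flutter/material.dart';

List<CameraDescription> cameras;

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  cameras = await availableCameras(); // back camera at index 0, front at index 1
  runApp(MaterialApp(home: LiveFeedPage()));
}

class LiveFeedPage extends StatefulWidget {
  @override
  _LiveFeedPageState createState() => _LiveFeedPageState();
}

class _LiveFeedPageState extends State<LiveFeedPage> {
  CameraController controller;
  CameraImage img;     // the latest frame handed to the model
  bool isBusy = false; // process one frame at a time
  String result = '';

  void initCamera() {
    controller = CameraController(cameras[0], ResolutionPreset.medium);
    controller.initialize().then((_) {
      if (!mounted) return;
      setState(() {}); // rebuild so the preview becomes visible
      controller.startImageStream((CameraImage image) {
        if (!isBusy) {
          isBusy = true;
          img = image;
          runModelOnFrame(); // filled in in the next lecture
        }
      });
    });
  }

  Future<void> runModelOnFrame() async {
    // Placeholder: pass `img` to the TFLite model here, update `result`,
    // and set isBusy back to false so the next frame can be processed.
    isBusy = false;
  }

  @override
  void dispose() {
    controller?.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Column(
        children: [
          FlatButton(
            onPressed: initCamera,
            child: (controller == null || !controller.value.isInitialized)
                ? Icon(Icons.videocam, size: 50)
                : AspectRatio(
                    aspectRatio: controller.value.aspectRatio,
                    child: CameraPreview(controller),
                  ),
          ),
          Text(result),
        ],
      ),
    );
  }
}
```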
And inside this method we're going to write the code to pass the frame to our model. Then we will get the result for that particular frame and show it to the user using the text widget that we have at the bottom. So the process is almost the same as in our image classification application; the only difference is that now we are not passing images chosen from the gallery or captured using the camera, but the live camera footage, frame by frame, and we are getting the results. So once the camera is initialized, we get the footage frame by frame; then we call this method, which will pass the frame to our model, and we will get the result. We're going to do that in the next lecture. So see you in the next lecture.

36. Writing Flutter Image classification code: Welcome to this lecture. Now we have imported the starter project and you have seen the working of the starter application, so let's move towards step number two, which is adding the model inside your Flutter application. There you can see that we have an assets folder here, and inside it we have our mobilenet_v1.tflite file and our mobilenet_v1.txt file. The .tflite file is our model file and the .txt file is the label file. For this project I have already placed them there, but if you have your own model and label file, you need to create your assets folder and place them there. To create this folder, right-click on the parent folder, go to New, create a new directory, and name it assets. Similarly, inside your pubspec.yaml file, inside the flutter section, we have a subsection named assets. There you can see this flutter section and this assets subsection, and there you need to declare this folder as well, so I'm going to write assets here. By doing that, all the files present in that folder will be accessible to our application. After doing that, step number two is complete. Now the third step is adding a package named tflite inside our Flutter project. I'm going to repeat the same process that we followed for our previous image classification application, so if you've got the idea, you can try it by yourself, but I'm going to do it here again. Now let's add this package. Go to your browser, go to the site pub.dev and search for tflite. I'm going to search for it and press Enter, and you will get the search results. You can see that the first result is this tflite package, and that's its documentation page. Now let's go inside this Installing section so that we can install this package inside our Flutter application. Just copy this line; you need to paste it inside your pubspec.yaml file in the dependencies section. There you can see that just below this camera dependency I'm going to paste this line. Then you need to click on Pub get so that this package is downloaded inside your Flutter application. Once that's done, we need to follow the next instructions to use this package inside our application. So now let's move toward this Readme section, where you will find the instructions. And now let's scroll down.
There you can see that firstly we have some instructions for Android and iOS. For Android, you need to place these lines inside your app-level build.gradle file, so just copy them and move to your project; there you need to expand the Android section, and inside the app folder you have this build.gradle file, so just open it. This file contains a section named android, and just before the closing of this android section we're going to place these lines. There you can see that I already have them for this project, but you can paste them here; I'm going to remove them and then paste them again, so let's paste them here. After that, we need to follow the next instructions, which are for iOS. If you are following this course on a Mac, you can follow these instructions, but I don't have a Mac right now, so I cannot execute them for now; the lecture related to that will be uploaded soon. In case you get any error on iOS, you can get the solution from here, because this section contains solutions to some of the common build errors on iOS. Now let's scroll further down to see the next instructions. Now we have step-wise instructions here. The first step is creating the assets folder and placing our model and label files in it, which we have already done. The next step is importing the package: you need to paste this import line inside your main.dart file. So I'm going to move to the project, and here, at the top of our main.dart file, I'm going to paste this line. Now let's follow the next instruction, which is loading our model. There you can see that using the Tflite.loadModel function we load our model and label files. You can specify other options as well, and if you need further details you can watch the previous application's lectures. Now let's move to our project; inside our loadModel function we're going to paste this code. There you can see that we have this function, so I'm going to paste it here. But notice that the label file name in the snippet is labels.txt, while inside our assets folder our label file is mobilenet_v1.txt, so you can either rename the file or change the name here. I'm going to copy the name from above and paste it there, because I don't want to rename my label file. Now you can see that the label file is mobilenet_v1.txt. Once that's done, our model will be loaded successfully, because when our application starts, initState is called, and there we are calling this loadModel function. Now the next step is passing the camera frame to our image classification model. Let's move back to the documentation page, where you will find a section related to that. When you scroll further down, you will see this section, Run on image stream, and there you can see that using the Tflite.runModelOnFrame method we pass the frame to our model and get the results. Just copy this section; then we will look at it in detail. I'm going to copy it and paste it inside our image labeling method. As you can see, we have this method here; in our previous application it was named doImageClassification, so if you want, you can rename this method.
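Before walking through that method, here is the loading step from this lecture as a small sketch, using the tflite plugin's Tflite.loadModel call; the asset file names follow the ones used in this lecture, so adjust them to your own files.

```dart
import 'package:tflite/tflite.dart';

// Called from initState(), as described above, so the model is ready
// before any camera frame is classified.
Future<void> loadModel() async {
  final String res = await Tflite.loadModel(
    model: 'assets/mobilenet_v1.tflite',
    labels: 'assets/mobilenet_v1.txt',
    // numThreads, isAsset and useGpuDelegate can also be set here;
    // the defaults are fine for this example.
  );
  print('Model loaded: $res');
}
```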
So there you can see that now we are calling this method, Tflite.runModelOnFrame, and the first parameter is bytesList, which will contain the frame as a list of bytes. We get it using this img variable that we declared above; its type is CameraImage, we initialize it in the stream callback and store the camera frame inside it, and now we use it to get the bytes for that particular frame: img.planes, and there we use the map function, which returns the planes as a list, and we pass that here. So our frame will be passed to our model in byte format. Then you specify the image height and width: using img.height you specify the height, and using img.width the width. After that, you can see that we specified our mean and standard deviation, just like the previous application; you can change these values if you want and compare the results of the model. Similarly, you can specify the rotation here; by default the rotation is 90, but you can change it to any angle you like. Then we have the number of results; it is set to 2, but I'm going to change it to 5 because we want up to five results, and we specify the threshold as 0.1, which means that if the confidence score for a particular label is greater than 10%, that predicted label will be included in our results; otherwise it will be dropped. Similarly, we want to execute this process asynchronously, so we pass asynch: true here. Once this frame is processed, we get the result in a recognitions object, just like our previous application. Now we need to iterate over these recognitions: on this recognitions object we call this forEach loop, and we iterate over each element of the recognitions list. Firstly, add a semicolon there. Now we iterate over this recognitions object, and from each element we can get the label name and the confidence score. Let's take those and store them inside our result variable, because inside our Text widget we are using that result variable. There you can see that we have a result variable being used here, and we declared it at the top of the application; in that variable we're going to store the result. So we take result and append to it: using the element, we get the label name with the key 'label', after that we add two spaces, and then we get the confidence score with the key 'confidence'. As this confidence is a double value, we cast it to double and wrap it in parentheses, because we only want a fixed number of fraction digits for this double value, so I'm going to write toStringAsFixed, where you can specify the number of fraction digits; I'm going to specify 2. It means that if the confidence score is something like 78.98 followed by more digits, then using this method you will only get 78.98; it is quite a useful method. Then we add a newline so that the next label is shown on the next line.
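Putting the pieces of this method together, the frame-classification call described here looks roughly like the following sketch. Parameter values are the ones discussed in this lecture (the mean and standard deviation are the plugin's documented defaults), and variable names such as img, result and isBusy follow the description above, so treat them as assumptions.

```dart
import 'package:camera/camera.dart';
import 'package:tflite/tflite.dart';

CameraImage img;   // latest frame from startImageStream
String result = '';
bool isBusy = false;

Future<void> runModelOnStreamFrame() async {
  // Pass the raw camera planes of the current frame to the model.
  final List recognitions = await Tflite.runModelOnFrame(
    bytesList: img.planes.map((plane) => plane.bytes).toList(),
    imageHeight: img.height,
    imageWidth: img.width,
    imageMean: 127.5, // plugin default
    imageStd: 127.5,  // plugin default
    rotation: 90,
    numResults: 5,
    threshold: 0.1,
    asynch: true,
  );

  // Build one line per predicted label, e.g. "laptop  0.87".
  result = '';
  recognitions.forEach((element) {
    result += element['label'] +
        '  ' +
        (element['confidence'] as double).toStringAsFixed(2) +
        '\n';
  });

  // In the app this assignment happens inside setState so the Text widget
  // refreshes, and isBusy is reset so the next frame can be processed.
  isBusy = false;
}
```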
Once we update this result variable to show all the predicted labels, we need to update its usage, so we call setState and simply assign the result there. But one more thing remains: for each frame we only want to show the new labels, so we need to reset this result variable above; we set it to an empty string so that for each frame the updated result is shown to the user. And once the processing of this frame is completed, we need to set isBusy back to false, so that the next frame is passed to the model and we get the result for that frame; I'm going to set it to false. Now the coding of our application is complete, so let's go through the flow of what is actually happening. Once the user clicks on the frame, this onPressed method is called and it initializes the camera. Inside this initCamera method we initialize the camera controller, and once the camera is initialized we get the camera footage frame by frame. Then we pass each frame to our image labeling method, and inside this method we pass the frame to our model: there you can see Tflite.runModelOnFrame, and we pass the frame to our model in byte format. We then get the results, iterate over them and show them to the user. Hopefully you get the idea of how you can use your image classification model inside a Flutter application, both for images and for live frames. So now let's run our application and test it.

37. Flutter Testing Image classification live feed application: So now our image classification live feed application is successfully installed on a real device, so let's test it. Upon clicking in the center of the screen, you will see that the live camera footage becomes visible; there you can see that we have the live camera footage. When I put a microphone in front of the camera, you will see that we get this predicted label, microphone, with a confidence score above 80%. The confidence score varies, but the results are quite good, as you can see. Similarly, when I point it at my laptop, you will see that we have this label, laptop, with a confidence score between 0.80 and 0.90. So our live feed application is working quite well for these objects. You can test this application by yourself and check the results.

38. Flutter Object detection section introduction: Welcome to the image classification section of this course. A common use of machine learning is to identify what an image represents. For example, we might want to know what type of animal appears in the following photograph. The task of predicting what an image represents is called image classification, and an image classification model is trained to recognize various classes of images. For example, you can train a model to recognize three different types of animals, like cats, dogs and rabbits. We have used the image labeling feature of Firebase ML Kit: using the free version of that model we can classify among 340 classes, and if you use the cloud version of that model, you can classify up to 1000 classes. But here we're going to use a MobileNet model, which can also classify among 1000 different classes.
MobileNets are a series of low-power models used for image classification on mobile devices. As the computation power on mobile devices is lower than on computers, they provide good, accurate results on such low-power devices. The model we're going to use is the MobileNet V1 model, and it can classify among 1000 different classes; you can read more about this model using this link. So what are we actually going to build in this section? Firstly, we will build a Flutter application in which the user chooses an image from the gallery or captures it using the camera, then that image will be passed to our MobileNet model using a plugin named tflite. After that, the results returned by the model will be shown to the user. After completing this application, we're going to build another application in which we use the live feed from the camera for image classification: the live footage from the camera will be passed to our model, we will get the results in real time, and we will show these results to the user. To build these applications, we're going to follow a six-step mechanism. Our first step will be importing the starter code from GitHub. After that, we're going to add the models inside our Flutter application. Then we will add a plugin named tflite inside our Flutter project. After that, we're going to load our model. Then we will pass the image taken from the gallery or camera, or the frames taken from the live footage, to the model using the tflite plugin. Finally, we're going to show the results returned by the model to the user. As this section involves building two very exciting applications, let's begin.

39. Importing Application code object detection flutter application: Welcome to this lecture. In this lecture we're going to look at our object detection application, in which we take images from the gallery or capture them using the camera, and then detect the objects present in those images using two models, MobileNet and YOLO. So let's begin. The first step is importing the code from GitHub. You can manually type this URL or take it from the project URLs file. Go inside this Code section and copy the repository link; I'm going to copy it, then open your Android Studio and click on Get from Version Control. Here you need to paste this link to clone the repository, and you should set the version control to Git. Now let's click on this Clone button. After some time it will ask whether you want to create a new project or not, and here you need to select Yes. After that you need to choose the Gradle, so we simply select Gradle, click Finish, and your project will be cloned in a moment. There you can see that the project is currently loading. Now, this code is not actually a starter code but a complete application: as we have covered the image classification section in Flutter, we are now quite familiar with the process of using the tflite library. That's why we're going to look at the complete code in this lecture and try to understand it step by step, because the majority of the things are exactly the same as what we covered in our previous sections. So now let's begin. There you can see that the project has now been added. Now go inside your project section and expand this Flutter TFLite object detection project.
And there you need to click on this Get dependencies so that all the libraries will be downloaded and you will see that in a moment all these adders will be gone. So now you can see that approximately all the arrows are gone, but there is one error remaining. So now let's move to that portion. And there you can see that we have this error that I can image rounded IS NOT patterns. So they are, you can simply remove this underscore rounded from here, but for now you can see that this header is gone. So now let's move to the top of the file. So now we have imported our application. And so now let's firstly run our application and see it working so that you can understand this code easily. Some gonna run it inside an emulator device. I'm going to click on this Run button, and I have already opened the emulator device here. And you will see that in a moment of our application will be opened here. So now our application is successfully installed inside the emulators. That is the basic view AF over object detection applications. So there you can see that we have a black background and then we have our Imager icon there and at the bottom we have a bar. So firstly we have a button and upon clicking on this button, camera will be open so that you can kept it anemia using camera. And that image will be passed over object detection model after their similarly on the right-hand side you can see that we have our image icon and upon clicking on it, Gary will be open so that you can choose enemy from gravity. And then that immediately we passed over object detection model, just like our previous image classification example where we took the ME from Gary or calculated using cameras allow less tested. So there you can see that I'm going to click on this gallery icon and you will see that the Gary will be open. And I'm going to choose an image from Gary. So let's choose this first image for now. And you will see that this image will be displayed here. And you can see that the image is being displayed here. And we have the object detected in those images. There are quite a lot of labels here. So when you look closely, you will see that there we have a laptop object with a confidence score 75%, and the rectangle is surrounding this laptop. Similarly, for this laptop screen, it is detecting it is a TV with a confidence score 60%, which is quite relevant. And similarly for laptop keyboard, we have a confidence score 52%, and this rectangle is actually around the keyboard. Then we have a cup, and then finally we have a potted plant. So you can see that our model successfully detected all the objects present in 30 minutes and we have successfully drawn at tangled around those objects. But now we are achieving with the help of our while net model. So now less use over YOLO murders. And there you can see that we have a switch. So when you will enable the sweet, and now let's do the same image again, and let's see whatever YOLO model detect in the image. So I'm gonna use this image again. For now we have the result again. But now you can see that we have this TV monitor with 40% probability, cough, with 50% and this lambda with 87%. But we did not get the correct point forever lepto bounding well, similarly, you can see that we have the t mode with 44%. And similarly you can see that we have this potted plot, but it's rectangle is not drawn at the right position. So now you have seen the working of both model, SSD model and the ulama. 40. 
40. Flutter Object detection code: So now you have imported the code for this object detection application and you have seen the application running, so let's look at the code. I'm going to start explaining from adding the model inside our Flutter project. Previously we created an assets folder and placed our model and label files in it; similar to that, we have our assets folder here, and you can see that we have the model and the label file for both of the models. Firstly, we have our ssd_mobilenet.tflite file, and then we have this .txt file, which contains the labels for this model. Similarly, we have our yolov2_tiny.tflite file, which is the model file, and we have a .txt file which contains the labels for this YOLO model. After adding them, you need to go to your pubspec.yaml file and declare your assets folder; when I go there, you will see that I have a section named assets, so all the files present in this assets folder are usable by our application. After that, we added a package named tflite inside our Flutter project. There you can see that we have this package, and we used it in our previous section; if you are not familiar with the process, you should watch the lectures of the image classification section. You can see that we also have our image_picker dependency, and after adding them you need to press Pub get so that these libraries are downloaded. Once you add this package, you need to set things up for your Android and iOS applications: go inside your Android application, inside the app folder, and open this build.gradle file; inside this file you need to place some lines inside the android section. There is the android section, and just before the closing of the section you need to paste these lines. These lines specify that our .tflite file, which is our model file, should not be compressed in the case of our Android application. After pasting these lines, the steps for adding our tflite package inside this Flutter application are complete. Now, in order to follow the next instructions, we're going to look at the documentation page for this tflite package. Inside the browser I have already opened the documentation page for the tflite package, and we explored this package in our image classification section; that is the documentation page for it. When you scroll down, you will see that we have a step-by-step guide for adding the tflite package, with these steps. The first step is creating our assets folder and placing our model and label files in it, which we have already done. The next step is importing the library: you need to paste this import line inside your main.dart file, and here you can see that inside our Flutter application we have this import. After pasting this import, let's look at the next instruction, which is loading the model and the label file. Here inside our Flutter project you can see that we have a method named loadModel. When you scroll down, you will see this loadModel method; there you can see that we are loading our YOLO model and our MobileNet model, and we are loading them based on the user's selection. So we have this _model variable, which stores that selection.
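Based on that description, the selection-based loading boils down to something like this sketch. The _model, ssd and yolo values and the asset file names are assumptions taken from what is shown in this lecture, not the exact course code.

```dart
import 'package:tflite/tflite.dart';

// Model-choice strings as described in this lecture (names assumed).
final String ssd = 'SSD MobileNet';
final String yolo = 'Tiny YOLOv2';
String _model = ssd; // default choice, flipped by the Switch in the bottom bar

Future<void> loadModel() async {
  await Tflite.close(); // release any previously loaded model before switching
  if (_model == yolo) {
    await Tflite.loadModel(
      model: 'assets/yolov2_tiny.tflite',
      labels: 'assets/yolov2_tiny.txt',
    );
  } else {
    await Tflite.loadModel(
      model: 'assets/ssd_mobilenet.tflite',
      labels: 'assets/ssd_mobilenet.txt',
    );
  }
}
```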
So when you scroll up, you will see that this variable is declared here, and inside this variable we're going to store the model selection; by default it is storing SSD. We declared both model names here at the top: there you can see that the string ssd contains the MobileNet model name and, similarly, yolo contains the Tiny YOLOv2 name. When you scroll down, you will see that by default we are choosing this SSD model. Once the user changes the selection, this _model variable will contain YOLO; if this _model variable contains YOLO then the YOLO model will be loaded, and if it contains SSD then our SSD MobileNet model will be loaded. Now, we are calling this method inside our initState method, so when the application is installed and opened this initState will be called and our model will be initialized at that time. Similarly, we will call this method again when the user changes the model selection using the switch that we put at the bottom of our screen. After loading our model, the next step is passing our image to this model, but before that let's look at the GUI of our application. In order to look at the code for the GUI, let's open the emulator again so you can understand it better. That is our GUI, so now let's move to our build method and see what code we have there. When you scroll down, you will see that we have our build method here. Firstly, we are creating a variable named size in which we are storing the screen size of the device on which the application is installed. After that, we have a list of widgets named stackChildren, and currently it is empty. When you scroll further down, you will see that here we are returning a Scaffold, inside the Scaffold we have a container, and inside this container we have a stack. The children of this stack are actually that list of widgets, so all the GUI that we are showing is contained inside the stackChildren list. Now let's look at how we are adding these UI elements to this list. In this application the user will choose an image from gallery or capture it using the camera, and that image is displayed here, but after that we also need to show the rectangles around detected objects. In order to achieve this we are using a stack: as you know, using a stack you can place one widget on top of another widget, and we're using the same concept here. In the stack we are firstly placing the image that the user chose from gallery or captured using the camera, and after that we are placing rectangles on that image based upon the result returned by the model for that particular image. So there you can see that we have our image, and above that we have a rectangle and the text. Now let's see how we are achieving it. At the top of our build method, you can see that inside the stackChildren we are firstly adding this Positioned widget. That widget can only be used inside a stack, and it is used to position an element inside the stack. There you can see that we are specifying the top and left points as 0, which means that if our stack begins from here, then the element we place inside this Positioned widget will also begin from there. And here you will see that we are setting the width of this Positioned widget to the screen width, as the size variable contains the width and height of the screen. After that, we are checking this _image variable.
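Here is a minimal sketch of the layout just described: a Stack whose first child is the picked image (or a placeholder icon) inside a Positioned widget, with the detection boxes layered on top. The class, field, and method names are assumptions based on the walkthrough, and renderBoxes is only stubbed here (it is sketched further below).

```dart
import 'dart:io';
import 'package:flutter/material.dart';

class DetectionPage extends StatefulWidget {
  @override
  _DetectionPageState createState() => _DetectionPageState();
}

class _DetectionPageState extends State<DetectionPage> {
  File? _image; // set by the image-picker callbacks

  // Stub: the real version returns the bounding-box widgets (sketched later).
  List<Widget> renderBoxes(Size screen) => [];

  @override
  Widget build(BuildContext context) {
    final Size size = MediaQuery.of(context).size;
    final List<Widget> stackChildren = [];

    // The picked image (or a placeholder icon) sits at the bottom of the stack.
    stackChildren.add(Positioned(
      top: 0,
      left: 0,
      width: size.width,
      child: _image == null
          ? Container(
              color: Colors.black,
              child: const Icon(Icons.image, color: Colors.white, size: 100),
            )
          : Image.file(_image!),
    ));

    // The detection rectangles are layered on top of the image.
    stackChildren.addAll(renderBoxes(size));

    return Scaffold(
      body: Container(child: Stack(children: stackChildren)),
    );
  }
}
```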
Remember that this _image variable is actually a file which will store the image that the user chooses from gallery or captures using the camera. If no image has been chosen, we're going to show a container with the default image icon that we saw when we ran our application for the first time. But once the user chooses an image or captures it using the camera, this _image variable will not be null, and then we show the image that the user has chosen. In our case right now we are showing that image because _image is not null. Below that you can see that we are adding a method named renderBoxes, and this method actually contains all the code which is used to draw rectangles around detected objects; we will look at it later. Now let's look at the bottom bar code. As you can see, in our layout we have this bar, and this bar contains two buttons, a switch, and text widgets. For that bar we used a Row layout, so there you can see that we have our Row. Firstly we placed this RaisedButton; this button is actually for capturing an image from the camera, and there you can see that we are calling our imageFromCamera method here. After that we have a text widget in which we are displaying the SSD text. Then we have our switch widget, and for this switch widget we are setting an onChanged listener: if the switch is not selected, our model will be SSD, and once the user turns the switch on, the value will be true and our model will be YOLO. Here you can see that we are again calling our loadModel method, and that is because for our YOLO and MobileNet models we need to initialize them each time the user makes a selection. After that we have this text widget displaying the text YOLO. At the last we have a RaisedButton again, using which you will choose an image from gallery. There you can see that upon clicking on this button, imageFromGallery will be executed, and similarly, upon clicking on the first button, imageFromCamera will be executed. So now let's look at both of these methods and see what we are actually doing in them. There you can see that we have both of the methods. Just like the previous application, inside our imageFromCamera we are using this image_picker library to capture the image using the camera, and once the image is captured we are calling this predictImage method. Similarly, inside our imageFromGallery we are choosing an image from gallery, and once the image is chosen we are calling this predictImage method again. Now let's look at this predictImage method and see what we are actually doing in it. Inside this method we are firstly checking the model choice: if the model choice is YOLO, then we are calling this yolov2Tiny method, otherwise we are calling this ssdMobileNet method, and we are passing the image that the user selected from gallery or captured using the camera. After calling these methods, we are getting the image height and the image width using this FileImage class. We are going to use this information later: for the image that the user chose from gallery or captured using the camera, we are getting its height and width and storing them inside our image height and image width variables.
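A sketch of that picking-and-prediction flow is shown below. It would sit inside the same State class as the layout sketch above; it assumes the newer image_picker API (pickImage returning an XFile) and the field and method names used in the walkthrough, while yolov2Tiny and ssdMobileNet are sketched in the next block.

```dart
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';

// Fields assumed to live on the State class:
//   File? _image;  double? _imageHeight, _imageWidth;  String _model;
final ImagePicker _picker = ImagePicker();

Future<void> imageFromCamera() async {
  final XFile? picked = await _picker.pickImage(source: ImageSource.camera);
  if (picked == null) return;
  predictImage(File(picked.path));
}

Future<void> imageFromGallery() async {
  final XFile? picked = await _picker.pickImage(source: ImageSource.gallery);
  if (picked == null) return;
  predictImage(File(picked.path));
}

Future<void> predictImage(File image) async {
  // Dispatch to the model that matches the switch selection.
  if (_model == yolo) {
    await yolov2Tiny(image);
  } else {
    await ssdMobileNet(image);
  }

  // Read the decoded dimensions so the boxes can be scaled correctly later.
  FileImage(image)
      .resolve(const ImageConfiguration())
      .addListener(ImageStreamListener((ImageInfo info, bool _) {
    _imageHeight = info.image.height.toDouble();
    _imageWidth = info.image.width.toDouble();
  }));

  setState(() => _image = image);
}
```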
And after that, we are updating this _image variable so that the image will be visible in our layout. Now let's look at this yolov2Tiny method and this ssdMobileNet method. When you scroll further down, you will see these methods. Once this yolov2Tiny method is called, we are firstly noting the time in milliseconds and storing it inside our startTime variable; we are noting this time because we want to get the time taken by the model to detect objects in the image. Then, using the Tflite.detectObjectOnImage method, we are passing the image that the user chose from gallery or captured using the camera to our model. There you can see that we are also specifying the model name here, which is YOLO. So once this method is executed, the image will be passed to our YOLO model and we will get the results, which will be stored inside this recognitions variable. After that block, you can see that we are calling setState, and inside the setState we are assigning our results to this _recognitions variable, whose type is actually a list. When you scroll up, you will see that we declared this variable above, and its data type is a list, so it will store the list of objects that our model detected in the image. When you scroll down, you will see that we are assigning the value to this variable. At the end we are noting the time again and storing it inside our endTime variable, then we are taking the difference, endTime minus startTime, which gives the time taken by the model to detect objects in the image, and we are printing this time. Similarly, inside our ssdMobileNet method we are doing the same thing: we are noting the time and we are passing the image to the detectObjectOnImage method, but here we are not specifying the model name, whereas in the case of YOLO we specified it. Using this detectObjectOnImage method, we are passing our image file and we are setting the threshold to 0.4, and you can change this value if you like. Once we get the results, we are again storing them inside this _recognitions list, after that we are getting the end time, and then we are printing the time taken by this MobileNet model to do the inference. By doing that, you can compare the time taken by both of the models for detecting objects in images. So now let's quickly go through the flow that we have followed till now. Inside our GUI we have the buttons to choose the image from gallery or capture it using the camera. Once the user clicks on either of the buttons, these methods will be executed: imageFromGallery and imageFromCamera. Once the image is chosen, we are calling this predictImage method, and this method will call the respective model method based upon the choice of the user: if the user has enabled the switch, then this yolov2Tiny method will be called, otherwise the ssdMobileNet method will be called. Once that method is called, we are passing the image to our model and getting the results, and we are storing these results inside our _recognitions list. We are assigning this value inside a setState call because we want the changes to be applied wherever this _recognitions variable is used. So now we have the results from our model.
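Here is a minimal sketch of those two inference helpers. The 0.4 threshold for the SSD path comes from the walkthrough above; the YOLO-specific mean, std, and threshold values, the method names, and the _recognitions field are illustrative assumptions.

```dart
import 'dart:io';
import 'package:tflite/tflite.dart';

// _recognitions is assumed to be a List field on the State class.
Future<void> yolov2Tiny(File image) async {
  final int startTime = DateTime.now().millisecondsSinceEpoch;
  final recognitions = await Tflite.detectObjectOnImage(
    path: image.path,
    model: "YOLO",        // tells the plugin to apply YOLO post-processing
    imageMean: 0.0,       // illustrative preprocessing values
    imageStd: 255.0,
    threshold: 0.3,
    numResultsPerClass: 1,
  );
  setState(() => _recognitions = recognitions);
  final int endTime = DateTime.now().millisecondsSinceEpoch;
  print("YOLO inference took ${endTime - startTime} ms");
}

Future<void> ssdMobileNet(File image) async {
  final int startTime = DateTime.now().millisecondsSinceEpoch;
  final recognitions = await Tflite.detectObjectOnImage(
    path: image.path,     // the plugin's default model is "SSDMobileNet"
    threshold: 0.4,       // as mentioned in the walkthrough
    numResultsPerClass: 1,
  );
  setState(() => _recognitions = recognitions);
  final int endTime = DateTime.now().millisecondsSinceEpoch;
  print("SSD inference took ${endTime - startTime} ms");
}
```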
Now the next step is using those results to draw rectangles around the detected objects. As I have already told you, we are using the renderBoxes method to do that. For now, let's look at our GUI again, and there you will see that inside our GUI we are calling this method: inside our stackChildren we are adding the widgets returned by renderBoxes. So now let's look at this method and see what code it contains; there is the method. 41. Flutter Drawing Rectangles around detected objects: So now let's look at our renderBoxes method. As you know, this method is responsible for drawing rectangles around detected objects on the image, so let's look at the code in this method and see how it is drawing those rectangles. Here you can see that firstly we are checking that this _recognitions variable is not null, which means that we have the results. After that we are checking that the height and width variables are not null, which means that the user has already chosen an image. After that, we are creating two variables, factorX and factorY, and these variables are used to scale up the points that are returned by the model. As you know, in the case of object detection the model will detect the label name, the confidence score, and also the position of the object. When you open the documentation page, you will see the format of the output returned by these models. Below, inside the object detection section, we have this output format: it will detect the detected class name, its confidence score, and this rect object, and this rect object contains four things: the x and y coordinates and the width and height. So x and y will be the starting point of the rectangle, w is the width of the rectangle, and h is the height of the rectangle. If we think of this box as a rectangle, then this x, y is actually the top-left point; when we add the width to this x point we reach the right edge, and similarly, when we add the height to this y point we reach the bottom edge. That's how we are drawing the rectangle around a detected object. As you can see, these values are between 0 and 1, and we need to scale them: you can scale x and the width by the width of the image, and y and the height by the height of the image. So based upon the width and height of the image, you need to scale those values so that we can get the correct position of the object in the image, and here you can see that we are using these two variables for that purpose. After that, we are using this _recognitions list to get the detected objects one by one, and based upon this information we are going to draw the rectangles. Here you can see that we are using this _recognitions.map function, and it returns us the results one by one inside this variable, and inside it we are returning a Positioned widget. As you can see, the return type of this renderBoxes method is a list of widgets, so it is going to return a list of rectangles, and those rectangles will be visible inside our application UI. Here you can see that for this Positioned widget we are setting the left and top points equal to the x and y points of the detected object, but before that we are multiplying this x point with factorX to scale it up and, similarly, this y point with factorY to scale it. After that we are setting the width and height of this Positioned widget equal to the width and height of the detected object, and we are scaling them as well; a sketch of this follows below.
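Putting that together, a renderBoxes helper along these lines would do the job. The output keys (detectedClass, confidenceInClass, rect) are the tflite plugin's documented format; the scale-factor computation shown is one common choice and the field names are assumptions.

```dart
import 'package:flutter/material.dart';

// Assumed State fields: List? _recognitions; double? _imageHeight, _imageWidth;
List<Widget> renderBoxes(Size screen) {
  if (_recognitions == null) return [];
  if (_imageHeight == null || _imageWidth == null) return [];

  // Scale factors from the model's 0..1 coordinates to on-screen pixels.
  final double factorX = screen.width;
  final double factorY = _imageHeight! / _imageWidth! * screen.width;

  return _recognitions!.map<Widget>((re) {
    return Positioned(
      left: re["rect"]["x"] * factorX,
      top: re["rect"]["y"] * factorY,
      width: re["rect"]["w"] * factorX,
      height: re["rect"]["h"] * factorY,
      child: Container(
        decoration: BoxDecoration(
          // The border is what actually looks like the rectangle.
          border: Border.all(color: Colors.yellow, width: 2),
        ),
        child: Text(
          "${re["detectedClass"]} "
          "${(re["confidenceInClass"] * 100).toStringAsFixed(0)}%",
          style: const TextStyle(color: Colors.yellow, fontSize: 12),
        ),
      ),
    );
  }).toList();
}
```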
After setting the position of this Positioned widget, we have a container inside it, and for that container we have a border. There you can see that inside this container we have this BoxDecoration, and inside this BoxDecoration we have this border property, where we are setting a yellow colored border with a width of 2 on this container; that border is actually our rectangle. So basically we are using this Positioned widget to draw this container, the container contains a border which gives us the impression of a rectangle, and the position of this container is specified with the help of the information returned by the model for that particular object. Inside this container we have a text widget, and there we are displaying the detected class name and the confidence score; as you can see, we are showing this information here. So for that particular object we are returning this Positioned widget, and this Positioned widget will show the container at the detected object's location; this container has a border, so we get the impression of a rectangle. For each object we have such a Positioned widget returned, and at the end we are returning all of those Positioned widgets, which is actually a list of widgets, and these widgets will be displayed inside our GUI. Here you can see that where we call our renderBoxes method, we are actually adding them inside our stackChildren, so once these Positioned widgets with the containers are added inside this stackChildren, they will be displayed in our GUI and we will have the impression of rectangles being drawn around the detected objects. So the whole story is that this method is using the Positioned widget to draw a container at the specific location returned by the model, and this container contains a border, which gives us the impression of a rectangle. That's how we are drawing rectangles around detected objects. 42. Importing the code for live feed object detection flutter app: Welcome to this lecture. In this lecture, we will start looking at our object detection application in which we are going to use the live footage from the camera for detecting objects. We're going to detect those objects and draw rectangles around them in real time. Firstly, we need to import the code, so you can manually type this URL or you can take it from the project URLs file. There you need to click on this button and copy this URL so that we can clone this repository. Now open your Android Studio and click on Get from Version Control. Just click there, paste the link, and make sure the version control is set to Git. Now let's click on this Clone button so that the repository will be cloned. In a moment it is going to ask you whether you want to create a new project or not, and here you need to click Yes. After that you need to choose the Gradle; I'm simply going to choose the default Gradle. Now let's click Finish and your project will be cloned in a moment. So now our project is cloned successfully, but we have several errors. Just open this project section, go inside this Flutter tflite object detection live feed project, open the lib folder, and open this main.dart file. Here you can see that we have all the errors, so just click on this Get dependencies so that all the libraries will be downloaded, and you will see that these errors will be gone in a moment. Now you can see that they are all gone.
Now, this project is actually a completely working application, just like our object detection application, and we're going to look at the code for this application. But before that, let's run the application and test it so that you can understand the code better. 43. Flutter Testing Object detection live feed application: So now our object detection live feed application is successfully installed on a real device, and when I put the camera in front of a laptop, you can see that our model is successfully detecting that a laptop is present there and it is also predicting its location, and you can see that we have a rectangle drawn around the laptop. Similarly, there is also a keyboard present, so our model is also detecting it and drawing a rectangle around the keyboard of the laptop. So there you can see that our application is working correctly. Similarly, when I put the camera in front of a mobile phone, you can see that we have the cell phone detected here and the rectangle is also drawn around it. And as my hand is also visible, there is also a label related to that with the name person. So there you can see that our application is working quite well, and you can test this application with the different objects supported by our model. 44. Flutter Live feed object detection application code: Welcome to this lecture. In the previous lectures you imported the code for this application and you have seen it working; now in this lecture we will look at the code for this object detection using live feed application. Before that, I recommend that you watch the lectures related to the image classification example using live feed and our object detection application that we covered in the previous lectures. After creating your flutter application, the next step is creating the assets folder and placing your model and label files in it. Here you can see that inside the assets folder we placed our models: we have the model files and the label files for both of these models, this MobileNet SSD model and, similarly, our YOLO model. After that, inside our pubspec.yaml file we declared this assets folder so that these files will be usable by our application. Then we added the dependency for our tflite package and this camera package: using this camera package we're going to get the live camera footage, and using this tflite package we're going to use our model. After that, inside the android section in the app-level build.gradle file, we specified some options so that our tflite files will not be compressed for the Android application; there you can see that we placed them inside the android section of this app-level build.gradle file. After doing that, the setup for our tflite package is complete. Now let's look at the code inside our main.dart file. Firstly, let's move to the GUI code. When you scroll, you will see that inside our build method we have code which is quite similar to our object detection code where we used images for detecting objects. In this application, the only difference in the GUI is that now we are not showing an image but the live camera preview; apart from that, everything else is the same. So the strategy will be that inside the stack we're going to firstly place our live camera footage, then we're going to pass this live footage frame by frame to our model.
And the model will detect the objects and return the information related to the position of those objects. After getting that information, we're going to place rectangles in the stack at the respective positions where the objects are present. Here you can see that, just like our object detection application, we have this list named stackChildren, and inside it you can see that we are firstly adding this AspectRatio widget, and using it we are showing the live camera preview. It has two parameters: the first one is the aspect ratio, and the second one is the camera preview, to which we are passing a variable named controller, which is an object of type CameraController. If you want a detailed description of that package, you can watch the lectures related to image classification using live feed that we covered in our previous section. After placing this AspectRatio widget with the camera preview, you can see that we are adding this renderBoxes method; in our previous application we have seen that using this method we are drawing the rectangles around detected objects, and we will look at it later. When you scroll further down, you will see the bottom bar code, where we only display a text widget containing the model name, which is SSD in our case, because we are building this application using only the SSD model; if you want to build it for YOLO, there are only a few changes that you need to make. Similarly, when you scroll further down, you can see that we are returning a SafeArea widget and inside it we have our Scaffold, inside the Scaffold we have a container, that container contains the stack, and inside the stack we are placing this stackChildren list. So whatever UI elements we placed inside this stackChildren list will be visible inside this stack, and this stack is present inside our application UI. Hopefully you got the idea about the flow of the application. Now let's look at the other code that we have at the top. There you can see that, firstly, inside our initState method we are calling this loadModel function, and inside this function we are loading our SSD model: using this Tflite.loadModel function, we are specifying our model file name and our label file name. After that, you can see that we are calling this initCamera method inside our initState method, and in this method we are initializing our camera preview. There you can see that we are initializing this camera controller variable here, so controller is equal to CameraController, and we are opening the back camera of the device: we are passing the element present at the 0th index of this cameras list, and we got this list at the top. There you can see that we declared this list with the name cameras, then we are getting the available device cameras using the availableCameras method here; it will return all the device cameras, and using the index you can access the back camera or the front camera. The back camera will be at index 0 and the front camera will be at index 1. Here you can see that we are initializing this controller, and here we are calling this controller.initialize method. Once our camera is mounted, we are starting the image stream, as we want to get the live camera footage frame by frame; then we're going to pass each frame to our model and our model will return the result.
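A minimal sketch of that camera setup is shown below. It would live inside the State class (mounted and setState come from there), and the busy flag and variable names are assumptions that mirror the walkthrough.

```dart
import 'package:camera/camera.dart';

late List<CameraDescription> cameras;
CameraController? controller;
CameraImage? img;      // the frame currently being processed
bool isBusy = false;   // true while a frame is being run through the model

Future<void> initCamera() async {
  cameras = await availableCameras();  // all cameras on the device
  // Index 0 is the back camera, index 1 the front camera.
  controller = CameraController(cameras[0], ResolutionPreset.medium);
  await controller!.initialize();
  if (!mounted) return;

  // Receive the live footage frame by frame.
  controller!.startImageStream((CameraImage image) {
    if (!isBusy) {
      isBusy = true;   // skip new frames until this one is processed
      img = image;
      runModelOnFrame();
    }
  });
  setState(() {});
}
```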
And using these results, we're going to draw rectangles around the detected objects. There you can see that once our camera view is initialized, we are going to start the image stream, so controller.startImageStream, and it will return the camera footage frame by frame. We're going to take this frame, which is stored inside this variable named image. Here you can see that we are firstly checking if the busy variable is false, which means that there is no previous frame being processed; then we're going to set this variable to true and store this frame inside this img variable, and the data type of this img variable is CameraImage. You can see that we declared this img variable here and its data type is CameraImage; similarly, our controller is of type CameraController. Here we are declaring these variables, and at the beginning the system will not be busy, so we are storing false inside the flag. Once the frame processing starts, we're going to set it to true so that firstly the processing of the previous frame will be completed, and then we're going to process the next frame. When you scroll down, you can see that after storing the frame inside this img variable, we are calling this runModelOnFrame method, and inside this method we are passing the frame to our model and getting the result. There you can see that inside this method we are firstly getting the image height and the image width: we are storing the image width inside this _imageWidth variable and the height inside our _imageHeight variable. After that, using this Tflite.detectObjectOnFrame method, we are passing our frame to our model in bytes format: so img.planes, and then using a map function we're going to get the bytes present in all the planes of that image and pass them to our object detection model. Then you can see that we are specifying our model name, which is SSDMobileNet, then you can also specify the image height and the width, so we're specifying them here. After that we are specifying the image mean and the standard deviation; you can change these values and check your model's results. Similarly, you can see that we are specifying the threshold, which is 0.4, and then we are setting the number of results per class. Once this method is executed, the results will be stored inside this _recognitions variable, and the data type of this variable is a list, so it will contain the list of detected objects. After that, we are setting this busy flag to false, which means that the processing of this frame is completed and, back at the top, the next frame can be passed to our model. When you scroll down, you can see that we are updating the UI with this img variable using the setState method, and we are using this variable inside our build method. So once the state of this variable is changed, we are drawing rectangles around the detected objects using this renderBoxes method. Now let's look at this method, as we have previously seen it: inside this method we are returning a list of widgets, which is actually a list of these Positioned widgets, and these Positioned widgets are drawn at the points returned by the model. For each object we are getting the detected class name, its confidence score, and the rect object, and we are using the rect information here to draw this Positioned widget.
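Here is a sketch of such a runModelOnFrame method. The 0.4 threshold and the SSDMobileNet model name follow the walkthrough; the mean/std values and the field names are assumptions.

```dart
import 'package:tflite/tflite.dart';

Future<void> runModelOnFrame() async {
  _imageHeight = img!.height.toDouble();
  _imageWidth = img!.width.toDouble();

  _recognitions = await Tflite.detectObjectOnFrame(
    // Hand the frame over as one byte buffer per image plane.
    bytesList: img!.planes.map((plane) => plane.bytes).toList(),
    model: "SSDMobileNet",
    imageHeight: img!.height,
    imageWidth: img!.width,
    imageMean: 127.5,
    imageStd: 127.5,
    threshold: 0.4,
    numResultsPerClass: 1,
  );

  isBusy = false;     // the next frame may now be processed
  setState(() {});    // rebuild so renderBoxes draws the new results
}
```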
And inside this Positioned widget we have a container with a blue colored border, and the width of that border is 2, so that container is actually giving us the impression of a rectangle being drawn around the detected object. Similarly, inside that container we have a text widget, which is showing us the detected class name and the confidence score for that class. So hopefully you got the idea about this application's code. We are using two packages: the camera package for getting the live footage from the camera, and the tflite package, using which we pass the frames of that live footage to our model. Our model will process those frames and we will get the detected objects, and then, using this renderBoxes method, we're going to draw rectangles around those detected objects. The whole story behind this application is that inside our GUI we have this AspectRatio widget, which is responsible for showing the live camera preview, and below that we have this renderBoxes method, which is responsible for drawing rectangles around detected objects. At the top, you can see that inside our initState method we are loading the model and initializing the camera. Once the camera is initialized, we're going to get the live footage from the camera frame by frame, and we are passing these frames to our runModelOnFrame method. Inside this method, we're going to pass each frame to our model and get the results, and after getting these results our renderBoxes method will use the information returned by the model to draw rectangles around the detected objects. So that is the whole story: we are getting a continuous stream of frames, processing them, and showing the results to the user in real time. 45. Flutter Pose estimation section introduction: Welcome to the pose estimation section of this course. Pose estimation is a computer vision task that infers the position of a person or object in an image or video. This is typically done by identifying, locating, and tracking a number of key points on a given object or person. For objects, these could be corners or other significant features, and for humans, these key points represent major joints like an elbow or knee. We can clearly see the power of pose estimation by considering its applications in automatically tracking human movement: from virtual sports coaches to AI-powered personal trainers, pose estimation has the potential to create new automated tools designed to measure the precision of human movement. In addition to tracking human movement and activity, pose estimation has applications in a range of areas such as augmented reality, animation, gaming, and robotics. The model we're going to use in this section is named PoseNet. PoseNet is a vision model that can be used to estimate the pose of a person in an image or video by estimating where the key body joints are. These key points represent major joints like the elbow, knee, wrist, and so on. This is referred to as human pose estimation, so the PoseNet model we are using is for human pose estimation, and you can read further about this model using this link. So what are we actually going to build in this section? Firstly, we're going to build a flutter application in which the user will choose an image from gallery or capture it using the camera, then that image will be passed to our PoseNet model using a plugin named tflite, and the detected points will be drawn on the image at their respective positions. After that, we're going to build a flutter application in which we will take the live footage from the camera.
Then we will pass the footage frame by frame to our PoseNet model, and the detected points will be drawn on the screen in real time. To build these applications, we are going to follow a six-step mechanism. Our first step will be importing the project code from GitHub. After that, we're going to add the model inside our flutter project. Then we will add a plugin named tflite inside our flutter application. After that, we will load the model using this tflite plugin. Then we will pass the image taken from gallery or camera, or the frames taken from the live footage, to our model using the tflite plugin. Then we will show the results returned by the model to the user. As this section involves building two very exciting applications, let's begin. 46. Importing Flutter Pose estimation Application code: Hello and welcome to this lecture. In this lecture, we will start looking at our image segmentation application in flutter using the DeepLab model in TensorFlow Lite. So let's begin. Firstly, we need to import the application code from GitHub, so you can manually type this URL or you can get it from the project URLs file. Then you need to click on this button, and here you need to copy this link. After that, open your Android Studio and click on this Get from Version Control, paste the link here, and make sure that the version control is set to Git. Now let's click on this Clone button; it is going to ask you whether you want to create a new project, so click Yes. After that you need to choose the Gradle, so simply choose it, click Finish, and your project will be cloned in a moment. Now our project is cloned successfully, but we have certain errors here. Click on this project section, expand this Flutter tflite image segmentation project, open the lib folder, and then open this main.dart file. Here we have the errors, so just click on this Get dependencies so that the libraries will be downloaded, and you will see that these errors will be gone in a moment. Now you can see that almost all the errors are gone, but there is one error remaining, and that is due to this icon: you simply need to remove this _rounded suffix, and now all the errors are gone. So that is our application code. In this application, we are choosing an image from gallery or capturing it from the camera, then we're passing it to our DeepLab model, and our DeepLab model is performing image segmentation on that image. So now let's firstly run our application and see it working, and after that we will look at the code for this application, because after seeing the live demo of the application, understanding the code will be much easier. So now let's run our application. Now our application is successfully installed inside the emulator. That is the basic UI of our application, and you can see that it is almost the same as what we had for our PoseNet application: there is the image icon in the center of the screen, and below we have a bar with two buttons. The first button is for capturing images using the camera, and the second button is for choosing an image from gallery. Now let's test our application by choosing an image from gallery. I'm going to click on this button and you will see that the gallery will be opened; here we have several images, and I'm going to choose this first image for now.
And now you can see that we have a segmented image here: this image contains a person, and our model successfully separated it from its background. Similarly, you can test this application with different pictures and customize it for your particular use cases. So now we have seen the working of this application; now let's look at its code. 47. Flutter Pose estimation code: Welcome to this lecture. In the previous lecture you imported the code for this pose estimation application and you have seen it working; now in this lecture we're going to quickly cover the code for this application. The code for this application is almost the same as our object detection application; the only difference is that now we are using the PoseNet model and we are drawing dots at the key points detected by the model. Previously, inside our object detection application, we were drawing rectangles around detected objects, but now we are drawing dots at the detected key points. There we had our renderBoxes method, but now we have a renderKeypoints method: here you can see that we have this method, and where we previously called renderBoxes, we are now calling this renderKeypoints method, and here you can see that we are adding these key points inside the stackChildren. If you watched the lectures for our object detection application, where we detected objects in images, you will not have any difficulty understanding this code, but I will also explain it quickly. So let's start. Our first step was adding the model inside our flutter application. There you can see that we created this assets folder and we placed our model here: that is our posenet .tflite model file. Then inside our pubspec.yaml file we added this assets folder so that the files inside it will be accessible; there you can see that inside this flutter section we declared this folder. Then we added the package named tflite inside our flutter application and also our image_picker library. We are using this library to get images from gallery or capture them using the camera, and we're using this tflite package to pass those images to our model and get the results. Now, inside our main.dart file, let's firstly look at the UI of our application. When you scroll down, you can see that inside our build method we have the code for the UI. When you scroll further down, you can see that we are returning a Scaffold, inside the Scaffold we have our stack, and inside the stack we are showing this stackChildren variable, which is actually a list of widgets, so all the widgets present inside this stackChildren list will be visible inside our application. We are using the stack because we firstly want to draw the image and then, above the image, we want to draw the key points, as you have seen in the application. When you scroll further up, you will see that here we declared the stackChildren list. Firstly, we are adding a container, and inside this container we are showing the image that the user chooses from gallery or captures using the camera. When the application is launched there is no image chosen, so in that case we are showing a container with this image icon. Otherwise, if this _image variable is not null, which means that the user has chosen an image from gallery or captured it using the camera,
then we're going to show the image using this Image.file widget and it will be displayed. After that, you can see that inside the stackChildren we are adding a list of widgets using this renderKeypoints method, and this method is actually returning us a list of widgets containing the key points, which will be drawn on the image; we will look at it in a moment. Below that, you can see that we have the bottom bar for our application: there we have two buttons, one to choose the image from gallery and another one for capturing an image using the camera. There you can see that we have a row, and inside this row we have these RaisedButtons; once the user clicks on these buttons, imageFromCamera and imageFromGallery will be executed. Now let's look at these methods. When you scroll up, you will see that we have these methods here. Inside this imageFromCamera, we are using this image_picker library to get an image from the camera, and once the user gets the image from the camera we are calling this predictImage method. Similarly, when the user chooses an image from gallery, we are again calling this predictImage method, and inside this method we have the code to pass this image to our PoseNet model. But to use that model, we firstly need to load it, so inside our initState method we are calling a method named loadModel to load our PoseNet model, and inside this method we have the code to load the model. When you scroll down, you can see that this is the loadModel method, and here we are using this Tflite.loadModel function to load the model. Now let's look at our predictImage method, which is called inside our imageFromCamera and imageFromGallery methods. There you can see that inside this method we are firstly checking that the image is not null, and then we are calling this poseNet method; in that method we are going to pass our image to our model and get the result. After that we are getting the image height and the image width using this FileImage class, and finally we are storing the image file inside this _image variable, which is displaying the image inside our GUI, and we are calling setState so that the changes will apply and the image will be visible in our layout. Now let's look at this poseNet method. Inside this method you can see that, using this Tflite.runPoseNetOnImage, we are passing our image path and we are setting the number of results to 2. Once this method is executed, we're going to get the list of results, which is stored inside this recognitions variable, and after that you can see that we are storing the values present inside this recognitions variable into this _recognitions variable. The data type of this _recognitions variable is actually a list, and it will contain a list of key points identified by our PoseNet model. We are doing that inside a setState call so that wherever this _recognitions variable is used, the changes will apply. Now let's see where this _recognitions variable is being used. Inside our renderKeypoints method we are using this _recognitions variable, and this method is actually responsible for drawing points on the image. So once the user chooses an image from gallery, our predictImage method will be called; inside this predictImage method we are calling this poseNet method, and in that method we are passing the image to our PoseNet model and getting the results.
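A minimal sketch of that load-and-run pair is shown below. The asset name and the method and field names are assumptions; numResults: 2 follows the walkthrough.

```dart
import 'dart:io';
import 'package:tflite/tflite.dart';

// Load the PoseNet model once, e.g. from initState().
Future<void> loadModel() async {
  await Tflite.loadModel(
    model: "assets/posenet.tflite", // assumed asset name
  );
}

// Run pose estimation on the picked image and publish the results.
Future<void> poseNet(File image) async {
  final recognitions = await Tflite.runPoseNetOnImage(
    path: image.path,
    numResults: 2, // detect at most two poses, as mentioned above
  );
  setState(() => _recognitions = recognitions);
}
```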
Then we are storing these results inside our _recognitions variable, and finally our renderKeypoints method will draw the points on the image. So now let's look at this method. Firstly, we are checking that this _recognitions variable is not null, which means that we have the results. After that we are checking that the image height and the image width are not null, which verifies that the user has chosen an image from gallery or captured it using the camera. Then we are declaring two variables, factorX and factorY, and we are using these variables to scale up the points returned by our model. When you go to the documentation page, at the bottom you will see the output format for our PoseNet model: the output contains x and y, and these values are between 0 and 1. You can scale x by the width and y by the height of the image, so these values are fractions between 0 and 1, and in order to get the correct positions for the points we need to scale them to the height and width of the image. Here, factorX and factorY are being used to scale those points. After that, you can see that we declared a list of widgets, and using this _recognitions list we are calling this forEach method, so we get each result one by one inside this variable. Then you can see that we are randomly creating a color, so that each time you pass an image to the model, the points that are drawn will be of a random color. After that, using this result variable, we are getting access to the key points. As you have seen, the output format contains the x and y values for each point, so using the result's keypoints values with a map function we get a variable k, and using this variable k you can get the x and y coordinates for each point: k's x value returns the x point and k's y value returns the y point. We are multiplying them with factorX and factorY, respectively, so that the values are scaled, and we are subtracting a small offset here so that the points are drawn at the correct position; you can change this value or simply remove it and check the position of the points, but you need to choose it carefully so that the points are drawn at the correct position. We are assigning these points to the left and top properties of this Positioned widget; as you have seen the use of this Positioned widget inside the object detection section, this Positioned widget will start from this point. After that, we are specifying a width and a height for this Positioned widget, and you can change them as well if you want. Inside this Positioned widget we have a text widget, and inside that text widget we show the name of the body part, or the name of the key point: there you can see that we are reading the part value, and it returns the name of that body part or key point. So for each key point returned by the model, this loop is executed and we get a Positioned widget for each key point. Similarly, when you scroll down, you can see that we are calling this toList method, which returns these widgets in a list format, and we are storing them inside this list variable. Then at the bottom we are adding this list into our overall list variable, which we declared above.
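Putting that together, a renderKeypoints helper along these lines would reproduce the behaviour just described. The keypoints, x, y, and part keys are the tflite plugin's documented PoseNet output, while the field names, the fixed width and height, and the small offset are assumptions.

```dart
import 'dart:math';
import 'package:flutter/material.dart';

// Assumed State fields: List? _recognitions; double? _imageHeight, _imageWidth;
List<Widget> renderKeypoints(Size screen) {
  if (_recognitions == null) return [];
  if (_imageHeight == null || _imageWidth == null) return [];

  // Scale the model's 0..1 coordinates up to on-screen pixels.
  final double factorX = screen.width;
  final double factorY = _imageHeight! / _imageWidth! * screen.width;

  final List<Widget> lists = [];
  for (final re in _recognitions!) {
    // One random colour per detected pose, as in the walkthrough.
    final Color color =
        Color((Random().nextDouble() * 0xFFFFFF).toInt()).withOpacity(1.0);

    final keypointWidgets = re["keypoints"].values.map<Widget>((k) {
      return Positioned(
        left: k["x"] * factorX - 6, // small offset so the dot sits on the joint
        top: k["y"] * factorY - 6,
        width: 100,
        height: 15,
        child: Text(
          "● ${k["part"]}", // dot plus the body-part name
          style: TextStyle(color: color, fontSize: 12),
        ),
      );
    }).toList();

    lists.addAll(keypointWidgets);
  }
  return lists;
}
```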
So there you can see that we have this variable declared here, and it is going to contain the Positioned widgets for each of the detected key points. At the bottom we are returning this list; there you can see that we are returning it, and the data type of this variable is a list of widgets. So once this method is called, we're going to get a Positioned widget for each detected point, we are storing these Positioned widgets inside our list, and we are returning that list. Once this list is returned, it will be added inside our stackChildren, and stackChildren will display it inside our application UI, so we will have the key points drawn on the image once this method is executed. Hopefully you got the idea of how we are drawing the detected points. Let's quickly go through the flow of our application: the user will choose an image from gallery or capture it using the camera. Once the user presses either of the buttons, these methods will be executed, and in these methods we are going to choose the image. After the image is chosen, we are calling this predictImage method, and inside this method we are calling this poseNet method; we are also getting the image height and the width. Inside this poseNet method, we are passing the image to our model and getting the results. Once we get the results, we store them inside our _recognitions variable, and once this variable is updated, this renderKeypoints method will be called and it will draw those points on the image. 48. Importing pose estimation live feed flutter application code: Hello and welcome to this lecture. In this lecture, we will start looking at our image segmentation application in flutter using the DeepLab model in TensorFlow Lite. So let's begin. Firstly, we need to import the application code from GitHub, so you can manually type this URL or you can get it from the project URLs file. Then you need to click on this button, and here you need to copy this link. After that, open your Android Studio and click on this Get from Version Control, paste the link here, and make sure that the version control is set to Git. Now let's click on this Clone button; it is going to ask you whether you want to create a new project, so click Yes. After that you need to choose the Gradle, so simply choose it, click Finish, and your project will be cloned in a moment. Now our project is cloned successfully, but we have certain errors here. Click on this project section, expand this Flutter tflite image segmentation project, open the lib folder, and then open this main.dart file. Here we have the errors, so just click on this Get dependencies so that the libraries will be downloaded, and you will see that these errors will be gone in a moment. Now you can see that almost all the errors are gone, but there is one error remaining, and that is due to this icon: you simply need to remove this _rounded suffix, and now all the errors are gone. So that is our application code. In this application, we are choosing an image from gallery or capturing it from the camera, then we're passing it to our DeepLab model, and our DeepLab model is performing image segmentation on that image. So now let's firstly run our application and see it working, and after that we will look at the code for this application.
Because after seeing the live demo of the application, understanding the code will be much easier. So now let's run our application. Now our application is successfully installed inside the emulator, and that is the basic UI of our application. You can see that it is almost the same as what we had for our PoseNet application: there is the image icon in the center of the screen, and below we have a bar with two buttons. The first button is for capturing images using the camera, and the second button is for choosing an image from gallery. Now let's test our application by choosing an image from gallery. I'm going to click on this button and you will see that the gallery will be opened; here we have several images, and I'm going to choose this first image for now. Now you can see that we have a segmented image here: this image contains a person, and our model successfully separated it from its background. Similarly, you can test this application with different pictures and customize it for your particular use cases. So now we have seen the working of this application; now let's look at its code. 50. Using PoseNet model for Flutter Live feed pose estimation application: Welcome to this lecture. In the previous lecture, you imported the code for this application and you have also seen the working of this pose estimation live feed application. Now in this lecture we will look at the code for this application. As the code for this application is almost the same as our object detection live feed application, it is better that you first try to build this application by yourself, by looking at the code for our object detection live feed application and the PoseNet application that we built previously in which we used images. But for those of you who get stuck somewhere, I'm going to explain this code here. Our first step is adding the model inside our flutter application. There you can see that we created our assets folder and we placed our posenet .tflite file here. Then inside our pubspec.yaml file we declared this assets folder; there you can see that we have this folder, and above that, inside the dependencies section, we have two dependencies: one is the camera package, using which we will get the live camera preview, and the other one is the tflite package, using which we will pass the frames from the live footage to our model and get the results. So now let's look at our main.dart file. Here our GUI is present inside our build function. When you scroll down, you can see that inside our build method we have the GUI, and the GUI is almost the same as our object detection application. Here we have a SafeArea widget, and inside that we have our Scaffold. Inside the Scaffold we have a container, and in that container we have our stack; in the stack we will place the GUI elements. As you have seen in the demo application, we have the live camera footage being displayed in our application, and on that live camera footage we have the points drawn. Here you can see that the children of the stack is the stackChildren variable, and we declared this list here: there you can see that we created a list of widgets named stackChildren, and inside this stackChildren we firstly added this Positioned widget. This widget is used to position layout elements inside our stack: there you can see that we specified the top and left points, then we specified the width, then we added a container.
And inside this container, we are firstly checking whether the controller is initialized or not; this controller is actually a CameraController and it is used to show the live camera preview. Then we are setting the width, and there we have our container. Inside this container we have this AspectRatio widget, and inside this AspectRatio widget we have the camera preview, so this AspectRatio widget is responsible for showing the live camera footage. Inside this widget we have two things: the first one is the aspect ratio, which controller.value.aspectRatio returns, and the second one is the child, which is a CameraPreview, and we are passing this controller here. This controller is an object of type CameraController and we initialize it above. When you scroll down, you can see that below that we are calling this renderKeypoints method, just like our previous application, and this method is used to draw points on the live camera feed. Now let's look at the code above. When you scroll to the top, you can see that here we have this CameraController variable declared along with other variables. We have this List _recognitions, in which we're going to store the results; then we have variables to store the image height and width; then we have our CameraImage variable, which is used to store the frame; then we have our boolean busy flag, which is used to make sure that only one frame is processed at a time; then we have a string result variable, which we actually don't need, so you can simply remove it, and after that we have a selection variable below which you can also remove, as we don't need these variables. After that, you can see that we have our initState method, and here we are firstly loading our model using this loadModel function. When you scroll down, you can see that inside this loadModel we are loading our PoseNet model using the Tflite.loadModel function. After that, we are calling this initCamera method, and this method is initializing the camera. When you scroll down, there you can see that we have this initCamera method: we are initializing this CameraController and calling the initialize method. Once the camera is initialized and we have the live camera footage, then using controller.startImageStream we will get the live camera footage frame by frame, and we will pass each frame to our model and get the results. There you can see that inside this block we have some code, but let me firstly format this block. Here you can see that we are getting the live camera footage frame by frame, and the frame will be stored inside this image variable. Here we are checking if the system is not busy, which means that our model is not processing any frame; then we are setting the busy flag to true, which means that the processing of a frame has started. After that we are storing the frame inside this img variable, and we are calling this runModelOnFrame method. Below that you can see that we have this method, where we are firstly getting the image width and height, and after that we are passing this frame to our model using this Tflite.runPoseNetOnFrame method. There we are passing our frame to the model in a bytes format, using the img.planes list with a map function.
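A sketch of such a runModelOnFrame method for the PoseNet case is shown below; numResults: 2 follows the walkthrough, while the field names are assumptions.

```dart
import 'package:tflite/tflite.dart';

Future<void> runModelOnFrame() async {
  _imageHeight = img!.height.toDouble();
  _imageWidth = img!.width.toDouble();

  _recognitions = await Tflite.runPoseNetOnFrame(
    // One byte buffer per image plane of the camera frame.
    bytesList: img!.planes.map((plane) => plane.bytes).toList(),
    imageHeight: img!.height,
    imageWidth: img!.width,
    numResults: 2, // at most two poses, as mentioned above
  );

  isBusy = false;   // allow the next frame to be processed
  setState(() {});  // rebuild so renderKeypoints redraws the dots
}
```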
Here we are returning the bytes present in each plane of the frame, and we are returning them in a list format using this toList method. After that we are specifying the image height and the image width, and finally we are specifying the number of results, which is two. Once this method is executed, we are storing our results inside this _recognitions variable; using this _recognitions variable we can get the results one by one, and these results will be the points with their names and locations, which will be drawn on the image. Below that you can see that we are setting the busy flag to false, which means that the processing of this frame is completed and the next frame can be passed to our model for processing. Below that we are calling the setState method, where we are simply updating the UI with this img variable. Once this statement is executed, wherever this img variable is being used, the changes will take place. When you scroll further down, you can see that inside our build method we are using this img variable and checking that it is not null. So once we update this variable, this instruction will be executed and the renderKeypoints method will be called, and this method is responsible for drawing key points on the live camera footage. Now let's look at this method. The code inside it is exactly the same as what we looked at in our previous application. There you can see that we are firstly checking that _recognitions is not null, along with the image height and the image width. After that we are getting the scaling factors, and then we are iterating over each result returned by the model using this forEach loop. For each result we are getting the x and y coordinates for each key point and also the name of that key point, and then we are showing this key point name along with the dot with the help of this Positioned widget drawn at that point. So for each key point we get a Positioned widget drawn at that respective key point, and for all the points we get a list of these Positioned widgets, so this method returns this list of widgets. Once this method is executed, we get a Positioned widget for each of the key points, and we return this list of widgets here. There you can see that we are returning this list, and this list is being added inside our stackChildren; that is the reason that we have points drawn on the live camera footage. So we are using a stack: in that stack we are firstly adding the live camera footage, and then we are placing these key points on top of the live camera footage inside the stack. That is the code for this application, and if you want to get a better understanding of it, make sure that you watched the live feed application lectures of the image classification section and the object detection section. 51. Image segmentation section: Welcome to the image segmentation section of this course. Image segmentation is a commonly used technique to partition an image into multiple parts or regions based upon the properties of the pixels present in the image. It could involve separating the foreground of an image from its background. Image segmentation has a number of applications, and some of them are in medical imaging for detecting tumors, autonomous driving for detecting objects, and in video surveillance. The model we're going to use in this section is the DeepLab model.
DeepLab is a state-of-the-art deep learning model for semantic image segmentation, where the goal is to assign a label to every pixel in the image. You can read more about this model using the link provided. So what are we actually going to build in this section? Firstly, we will build a Flutter application in which the user will choose an image from the gallery or capture it using the camera. That image will then be passed to our DeepLab model using a plugin named tflite. After that, we will show the segmented portion of the image along with the original image itself. Then we will build a Flutter application in which we take the live footage from the camera and pass it to our model frame by frame, and the segmented portion present in each frame is shown to the user in real time. To build these applications, we are going to follow a six-step mechanism. Our first step will be importing the project code from GitHub. After that, we will add the model inside our Flutter application. Then we will add the tflite plugin inside our Flutter project. After that, we will load the model using this plugin, and we will pass the image taken from the gallery or camera, or the frames taken from the live footage, to the model using the tflite plugin. At the end, we will show the result returned by the model to the user. This section involves building two very exciting applications, so let's begin.

52. Importing Flutter Image Segmentation Application code: Hello and welcome to this lecture. In this lecture, we will start looking at our image segmentation application in Flutter, using the DeepLab model with TensorFlow Lite. So let's begin. Firstly, we need to import the application code from GitHub, so you can manually type this URL or get it from the project URLs file. Then you need to click on this button and copy this link. After that, open Android Studio and click on "Get from Version Control". Paste the link here and make sure that the version control is Git. Now let's click on the Clone button. It will ask whether you want to create an Android Studio project for the cloned sources, so click Yes. After that you need to choose Gradle, so simply select Gradle and click Finish, and your project will be cloned in a moment. Now our project is cloned successfully, but we have certain errors here. Click on the Project panel and expand this Flutter tflite image segmentation project. There you need to open the lib folder and then the main.dart file, and here we have the errors. Just click on "Get dependencies" so that the libraries are downloaded, and you will see these errors disappear in a moment. Now you can see that almost all the errors are resolved, but one error remains, and that is due to this icon. Here you simply need to remove the "_rounded" suffix, and now all the errors are gone. So that is our application code. In this application, we choose an image from the gallery or capture it with the camera, then pass it to our DeepLab model, and the model performs image segmentation on that image. Now let's firstly run our application and see it working, and after that we will look at the code, because after seeing a live demo of the application, understanding the code will be much easier. So now let's run our application. Now our application is successfully installed inside the emulator.
So that is the basic UI of our application, and you can see that it is almost the same as the one we had for our PoseNet application. There is an image icon in the center of the screen, and below it we have a bar with two buttons. The first button is for capturing an image using the camera, and the second button is for choosing an image from the gallery. For now, let's test our application by choosing an image from the gallery. I am going to click on this button, and you will see that the gallery opens. Here we have several images; I am going to choose this first image for now. And now you can see that we have a segmented image here. This image contains a person, and our model has successfully separated the person from the background. Similarly, you can test this application with different pictures and customize it for your particular use cases. So now that we have seen the application working, let's look at its code.

53. Flutter using DeepLab model for image segmentation: Welcome to this lecture. In the previous lecture, we imported the code and you saw this image segmentation application working. Now in this lecture we will try to understand the code for this application. As you have already built a number of applications using TensorFlow Lite models, understanding this code will not be a big deal for you. So let's start by adding our model and label file inside this Flutter application. There you can see that we have our assets folder here, and inside it we have our DeepLab .tflite model file and our deeplab.txt label file. Inside this txt file we have a number of labels, and these are the objects on which our model can perform image segmentation: person, motorbike, car, potted plant, sheep, sofa, and so on. After creating this assets folder and adding our model and label file in it, we need to declare it inside our pubspec.yaml file. So open your pubspec.yaml file, and there, inside the flutter section, you need to declare this folder. You can see that we declared this folder here, and after that we added the dependency for the package. There you can see that we have this tflite package inside our dependencies section. Similarly, we have the image_picker package as well, and using it we are capturing an image with the camera and choosing one from the gallery. After adding these dependencies, inside our Android application you can see that inside the app folder we have a build.gradle file, so just open it. There we added a few lines so that our tflite model files will not be compressed; you can see that we added this portion just before the closing of the android section. After completing these steps, the setup for the tflite package inside our Flutter application is complete. So now let's look at the code inside our main.dart file. Here we have all the code, so firstly let's look at the UI of our application. When you scroll down, you will see that we have a build method here. Inside this build method we are returning a Scaffold, and that Scaffold contains a container. Inside this container we have a Stack, just like in our previous PoseNet application, and inside the Stack we show the widgets that are present inside the stack children list, which we declared at the start of our build method.
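As a quick reference, here is a minimal sketch of that build skeleton; the walkthrough that follows fills in the widgets that go into the stack children list. The variable names here mirror the description above rather than the exact starter code.

```dart
@override
Widget build(BuildContext context) {
  Size size = MediaQuery.of(context).size;

  // The widgets to draw on top of each other (chosen image, segmented output, buttons).
  List<Widget> stackChildren = [];
  // ...Positioned widgets are added to stackChildren here, as described next...

  return Scaffold(
    body: Container(
      width: size.width,
      height: size.height,
      child: Stack(children: stackChildren),
    ),
  );
}
```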
So there you can see that we created this stackChildren variable, which is simply a list of widgets. We are going to add to it all the widgets that we want to display inside our application UI, and then we are going to set this variable as the children of our Stack. Firstly, inside this list we are adding a Positioned widget, and this Positioned widget is responsible for showing the image that the user will choose from the gallery or capture using the camera. At the launch of our application, we don't have an image to show, so in that case we are showing a container, and inside this container we are showing an image icon. Once the user chooses an image from the gallery, this _image variable, whose data type is File, will no longer be null and will contain the image file. In that case, we show the image that the user has chosen. After that we have a section with which we display the segmented image; we will look at it later, so for now let's skip it. Below that you can see that we have the code for our bottom bar. We have a Row layout here, and inside this Row layout we have two buttons. The first button is for capturing an image with the camera, and the second button is for choosing an image from the gallery. There you can see that for both of these buttons we are setting the press listeners: pressing the first one executes imageFromCamera, and pressing the second one executes imageFromGallery. So now let's look at these methods. At the top of our page, inside our MyHomePage state class, we declared these two methods: imageFromCamera and imageFromGallery. Inside these methods we are using the image_picker library to get the image from the camera or the gallery. Once the user has chosen the image, we call the segmentMobileNet method and pass it the image that the user captured. Similarly, in the gallery case, we choose the image and then call the segmentMobileNet method. Now let's look at this method. It is declared here, and inside it we pass the image that the user has chosen to our model. But before using the model, we need to load it, and we do that inside our loadModel method. That method is declared here, and you can see that inside it we are using the Tflite.loadModel function to load the DeepLab .tflite file and the deeplab.txt label file. Once this method is executed, the model is loaded. We call this method inside our initState method, so once our application is launched, initState is called, our model is loaded, and after that we can use it. Now let's move back to the segmentMobileNet method. Here you can see that we firstly note the time, because we want to know how long the model takes to perform image segmentation. So we note the start time, and after that, using Tflite.runSegmentationOnImage, we pass the image path and specify the image mean and the standard deviation. This method returns the result inside this _recognitions variable.
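Here is a minimal sketch of that loading and inference flow, assuming the tflite plugin's DeepLab API; the asset file names, the imageMean/imageStd values, and the method names are illustrative rather than taken from the starter code.

```dart
import 'dart:io';
import 'package:tflite/tflite.dart';

dynamic _recognitions; // segmentation output returned by the model (a byte array)

// Load the DeepLab model once, e.g. from initState.
// The asset names here are illustrative; use the ones in your assets folder.
Future<void> loadModel() async {
  await Tflite.loadModel(
    model: "assets/deeplabv3.tflite",
    labels: "assets/deeplab.txt",
  );
}

// Pass the chosen image file to the model and note how long inference takes.
Future<void> segmentMobileNet(File image) async {
  final start = DateTime.now();
  _recognitions = await Tflite.runSegmentationOnImage(
    path: image.path,
    imageMean: 0.0,  // illustrative preprocessing values
    imageStd: 255.0,
  );
  print("Inference took ${DateTime.now().difference(start).inMilliseconds} ms");
  // In the widget this is followed by setState so the UI redraws with the result.
}
```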
The output returned by our DeepLab model will actually be a PNG image or an array of pixel values. When we open the documentation page of the plugin, inside its image segmentation section we find the output format. The output of this DeepLab model is either a byte array of a PNG image or a byte array of the RGBA values of the pixels. If, when performing the inference, you specify the output type as PNG, you get a byte array of a PNG image; if you do not specify the output type, you get a byte array of RGBA values. As we did not specify the output type, we get a byte array of pixels. Now let's move back to our Flutter application. There you can see that we have our result, a byte array of pixels, here. We store this result inside our _recognitions variable and call setState, so wherever this _recognitions variable is used, the change takes effect. When you scroll down inside our build method, we have a section where we draw the segmented image, and there you can see that we are using this _recognitions variable. Once the user chooses an image from the gallery or captures it using the camera, that image is passed to our DeepLab model, the model returns the result, and then this section is executed and the variable is updated. Inside that section you can see that we are adding another Positioned widget to our stack children. There we firstly check whether _image is null, which means the user has not yet selected an image; in that case we show a Text widget with the text "No image selected". Otherwise we show a container, and inside this container we show the image that the user chose from the gallery or captured using the camera. But now we are enclosing this Image widget inside an Opacity widget, and this Opacity widget is used to make its child transparent. You can specify an opacity value between 0 and 1. If you specify an opacity of 0, the child is not drawn at all, and if you specify 1, the child is drawn fully. We are showing this _image inside this widget, so if we set the opacity to 0, you will see that our image is not drawn at all. So when I run the application again, now...
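For reference, here is a rough sketch of the overlay section just described: the original image, made semi-transparent with an Opacity widget, with the segmentation result drawn on top. It assumes the result in _recognitions holds PNG bytes (i.e. outputType "png"), so it can be rendered with Image.memory; the variable names follow the earlier sketches and are illustrative, not the exact starter code.

```dart
stackChildren.add(Positioned(
  top: 0,
  left: 0,
  width: size.width,
  child: _image == null
      ? const Text("No image selected")
      : Container(
          child: Stack(
            children: [
              Opacity(
                opacity: 0.3, // 0 hides the original image, 1 shows it fully
                child: Image.file(_image!),
              ),
              if (_recognitions != null)
                Image.memory(
                  _recognitions, // segmented output returned by the model
                  fit: BoxFit.fill,
                ),
            ],
          ),
        ),
));
```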