Smart Android Development with Firebase ML Kit and Auto ML Vision Edge

Hamza Asif, Android Developer | Instructor

4 Lessons (29m)
  • 1. Class Introduction (1:31)
  • 2. Preparing Dataset and Training Model (7:53)
  • 3. Evaluating and Testing the Model (3:29)
  • 4. Building Android Application (16:32)

7 Students · 1 Project

About This Class

In this class, we will explore one of the features of Firebase ML Kit for Android: we will use the AutoML Vision Edge feature to train a machine learning model and develop an Android application around it. You don't need any background knowledge of machine learning to complete this class; a very basic knowledge of Android development is enough.

During the class, we will train the model on a stones dataset and build an Android application that recognizes different types of stones. For your project, you will train the model on a fruits dataset and build an Android application for it.

Meet Your Teacher

Hamza Asif

Android Developer | Instructor

Hello, I'm Hamza.

I have a degree in computer science and have a passion for Android Development.

Powering Android applications with ML really fascinates me, so I learned Android development and then machine learning. I have developed Android applications for several multinational organizations. Now I want to spread the knowledge I have. I'm always thinking about how to make difficult concepts easy to understand, what kind of projects would make a fun tutorial, and how I can help you succeed through my courses.

Transcripts

1. Class Introduction: Hello and welcome to this Android machine learning class. Machine learning on mobile devices is growing, but people with the skills to develop powerful ML-based Android applications are really scarce, so it's the perfect time to learn both. This class will guide you through the practical implementation of machine learning and computer vision in Android. So if you have basic knowledge of Android and want to make your applications smart, this class is for you. In this class we're going to use a library named Firebase ML Kit and the features it provides. So what is Firebase ML Kit? It is a library provided by Google which enables you to use machine learning for some common use cases like face detection, barcode scanning, object detection, and text recognition. Similarly, it provides some text-based models for language identification and text translation. Apart from using these pre-trained models, if you want to train a machine learning model on your own dataset and develop an Android application with it, Firebase provides the AutoML Vision Edge feature. Using this feature, we're going to train a model on our stones dataset, and then develop an Android application which will be able to recognize different types of stones. Then, in the project section, you're going to train a model on a fruits dataset and develop an Android application which will be able to recognize different types of fruits. So the class contains a lot of exciting stuff, and if you are interested in the practical implementation of machine learning and computer vision in Android, join me on this journey. See you in the first lecture.

2. Preparing Dataset and Training Model: Welcome to this class. In this lesson we will see how you can train a machine learning model and develop an Android application for it. The exciting part is that you're going to train the model on your own dataset without any background knowledge of machine learning. So let's start.

For this class we're going to use the AutoML Vision Edge feature provided by Firebase ML Kit. Open your browser, type "Firebase AutoML Vision Edge", and press Enter. There you need to find the link firebase.google.com/docs/ml/automl-image-labeling and click on it. It will take us to the documentation page, where we need to follow the instructions, so just press the Get Started button and we'll go through them step by step. Here you can see that before you begin, you should have a Firebase project. To create a Firebase project, you first need a Firebase account; if you already have one, you just need to log in to the Firebase console and create a new project. So open another tab, go to Google, type "Firebase console", and press Enter. As I already have a Firebase account, I'm going to move directly to the console; if you don't have an account, you first need to sign up. After signing up, go to the console, where you can see the different Firebase projects that I created. To create a new project, press the Add Project button and specify the project name. As I have already reached the project quota limit,
I can't create a new project for now, but it's actually quite a simple process: first you specify your project name, then it asks whether you want to enable Google Analytics and, if so, which Google Analytics account to associate. I already created some Firebase projects, so I'm going to use one of them. For this example, let's use this demo project; I'm going to click on it, and we're going to use it for this class.

Now let's move back to the documentation page and follow the next instruction. After creating a Firebase project, you need to assemble your training data, because in order to train your model you first need a dataset. In this example I'm going to do stones recognition, so I have a stones dataset ready and I'm going to share it with you, but you can also create your own. Let's look at our dataset first. In the Stones folder I have my stones data, and you can see several folders there: for example a diamond folder, a ruby folder, a sandstone folder, and so on. When I open the diamond folder, you can see that it contains different images of diamonds; similarly, when I move back and open the ruby folder, it contains different images of rubies. You need to arrange your dataset like that. If you are training the model on your own dataset, say a fruit recognition example with three fruits, mango, banana, and apple, then you create a mango folder with all the mango images in it, a banana folder with all the banana images, and an apple folder with all the apple images. Then you put all three folders, apple, banana, and mango, into a single folder like "fruits", and create a zip file from that folder. In our case, I created the separate folders for the different types of stones, put them in a stones folder, and then created the zip file. Hopefully you get the idea of how to arrange your dataset for model training; a sketch of the layout follows below.

Now we follow the next instruction. Here you can see that after assembling your data, your dataset structure should be like that: a main folder, and inside it several subfolders with the images in them. The next step is uploading our data to the Firebase console and training the model. So let's move back to our Firebase console, where you can see the Machine Learning feature; click on it, and it will take you to the machine learning page of Firebase. There you can see the AutoML tab, so click on it, and here you can add your datasets. I'm going to press on it so I can upload my dataset. Here you need to choose whether you are doing single-label or multi-label classification. What does that mean? Single-label classification means that each image in your dataset belongs to only one category; for example, a diamond image belongs only to the diamond category. Each image belongs to exactly one category and can't have more than one label associated with it. If you are doing multi-label classification, in which each image can carry multiple labels, then you should use the multi-label option. I'm going to use single-label classification, and I'm going to set the name of our dataset to "stones". Then press Create. Now we need to upload our dataset here.
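For reference, here is a sketch of how the zip we are about to upload is arranged. The folder and file names are illustrative; what matters is that each category folder's name becomes a label:

```
stones.zip
└── stones/
    ├── diamond/
    │   ├── diamond01.jpg
    │   └── ...
    ├── ruby/
    │   ├── ruby01.jpg
    │   └── ...
    └── sandstone/
        ├── sandstone01.jpg
        └── ...
```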
Just click here to browse for a file, or you can drag the file here. I'm going to drag my zip file and drop it here to upload the dataset. It's a three-step process: first our dataset will be uploaded, then it will be validated, then it will be imported. So we need to wait for a bit while our dataset is uploading. Once the dataset is uploaded and imported successfully, you can see the different images present inside it, along with all the folders which we uploaded. For example, I'm going to open the diamond folder again, and you can see that all the images present inside that folder are visible. You can also see a warning message saying there should be a minimum of 100 images in each category. But it is not compulsory: we only have 123 images across all these seven categories, and after training our model you will see that it still performs quite accurately.

Now we need to train the model, so just press the Train Model button. There you need to select an option: the first option is low latency, the second one is general purpose, and the third one is higher accuracy. What does that mean? Low latency means the model size is quite small and the model's speed is quite high: you can see that it will return the prediction in only 22 milliseconds and the model size is 2 MB, but its accuracy will not be that high. The second option is general purpose, in which our model will return the prediction in 65 milliseconds and the model size will be about 4 MB; in that case the accuracy will be quite high, so with general purpose we get good latency and good accuracy. The third option is higher accuracy, where the model size is quite large and it will return the result in 105 milliseconds, slower than the 22 and 65 milliseconds of the other options, but the accuracy will be the highest. So if you want more accurate results, choose the higher accuracy option; if you want to keep the model size low and can compromise on accuracy, choose low latency; and in between there is the general purpose option. I'm going to choose that one, and after choosing it, press Schedule Training. When I press it, you can see that I'm getting a "server too busy" error because the servers are quite busy at the moment, so I need to try again some other time. But when you schedule the training, your job will be queued, it will be done in about two to three hours, and you will get an email telling you that your model training has been completed.

3. Evaluating and Testing the Model: So now our model training is complete, and the next step is testing our model. Let's move back to the documentation page: here you can see that after training the model, the next step is evaluating it, so now we're going to do that. Move back to the console and press on this Training Complete entry; it will take you to the page where you can test your model or use the model. For now we are going to test it, so I'm going to press the Test Model button, and there you need to drop images to test the model's performance. I already downloaded some images, so let me open them. Here you can see that I have two images: the first one is a zircon, which I downloaded from Google, and the second one is a cat's eye.
First I'm going to drag the zircon image and drop it here to see what our model thinks it is. Now you can see that our model thinks it is a zircon with a confidence score of 92.5%, so our model's prediction is right and the confidence score is quite high. Similarly, let's drag the other image, which is the cat's eye; I'm going to drag it and drop it here, and you will see that our model thinks it's a cat's eye with a confidence score of 85.9%. So our model's performance is quite accurate.

Now the next step is using our model. Move back, press on this Training Complete entry again, and click on the Use Model button. There you have two options: the first one is Download Model and the second one is Publish Model. If you want to do on-device machine learning, meaning the Android application using this model won't need an internet connection because the model is present inside the application itself, then you download the model and put it inside your Android Studio project. If instead you want the model to be downloaded after your application is installed, then you publish your model; that has a couple of other advantages, such as not needing to update the application when you only want to update the model. For this example, we're going to download our model and put it inside our Android Studio project, because we are doing on-device machine learning. So I'm going to press this Download button; the download is being prepared, and in a moment you press Download again and your zip file will be ready. I'm going to download it, and after that we're going to explore the zip file.

So that is our model zip file downloaded from the Firebase console; now let's extract it. You can see that it contains three files. The first one is dictionary.txt, which contains all the labels, which are actually the stone names on which we trained our model; here you can see all seven stone names listed. Then we have a model.tflite file, which is our trained model. And the third file is manifest.json, which contains information about the other two files, the dictionary.txt and the model.tflite. I'm going to open it, and here you can see it records that our model file is model.tflite, our label file is dictionary.txt, and the model type is image labeling. So now we have downloaded our model; the next step is building the Android application for it, and we're going to do that in the next lecture.
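For reference, a minimal sketch of what a manifest.json in this download typically contains; the key names follow the AutoML Vision Edge download format, and the exact file names vary with your download (ours uses dictionary.txt):

```json
{
  "modelFile": "model.tflite",
  "labelsFile": "dictionary.txt",
  "modelType": "IMAGE_LABELING"
}
```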
4. Building Android Application: In the previous lectures, you have seen that we prepared our dataset, uploaded it to the Firebase console, trained our model on the dataset and evaluated it, and after that downloaded our model. Now in this lecture, we're going to use this trained model to develop an Android application using Android Studio. First, move back to the documentation page. Here you can see that we have followed all these steps; now we need to follow the steps for building our Android application. Just click on the Android tab, and it will take you to the documentation page for developing an Android application for a model trained using AutoML. Now we need to follow all the instructions.

The first step is creating your Android Studio project. I'm going to create a new Android Studio project, choose the Empty Activity template, set the name to something like MyStoneRecognizer, and press Finish. The Gradle build will run and the project will be ready in a moment. The Gradle sync finished successfully and my Android Studio project is ready.

Now the first step is adding the model inside this Android Studio project, and we're going to add it inside an assets folder. To create an assets folder, just right-click on the app folder, go to New, and choose Folder and then Assets Folder; press Finish and your assets folder will be created. There you can see that we have our assets folder. Inside it, I'm going to create another directory: go to New, choose Directory, and name it model_files. In that directory I'm going to place our files: the model, the dictionary, and the manifest. Let's move back to our model folder, copy all three files which we downloaded, paste them inside the model_files directory, and press OK; here you can see that all three files are added.

After adding these files, we need to add the dependency, so let's move back to the documentation page. Here you can see that we need to add this dependency inside our build.gradle file because we are doing on-device machine learning; if you are doing remote or cloud machine learning, you need to add these other dependencies instead, because they are for dynamically downloading a model from Firebase. I'm going to copy that dependency, open the app-level build.gradle file, and paste it inside the dependencies section, right at the top. Now we need to add one more thing inside the build.gradle file, and after that we're going to sync our project: the aapt options. I'm going to copy these four lines and explain them after pasting them inside the android tag of our build.gradle file. There you can see the android tag inside our build.gradle file; find its closing brace, and just before it, paste the lines we copied. Here we are specifying that our project should not compress .tflite files, because that is our model file, and compressing it would break the functioning of our application. Now press Sync Now, and the libraries will be downloaded.
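As a reference, here is roughly what those build.gradle additions look like. The version numbers are illustrative, so check the documentation for the current ones:

```groovy
// app-level build.gradle
android {
    // ...
    aaptOptions {
        // Don't compress the bundled .tflite model, or it won't load correctly.
        noCompress "tflite"
    }
}

dependencies {
    // On-device AutoML image labeling (the bundled-model path used in this class).
    implementation 'com.google.mlkit:image-labeling-automl:16.2.1'
    // Only needed if you publish the model and download it from Firebase instead:
    // implementation 'com.google.mlkit:linkfirebase:16.0.1'
}
```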
While the libraries are being downloaded, let's build the UI of our application, so go inside the layout file. Here you can see that I only have a ConstraintLayout with a TextView. The basic flow of our application is this: when the user presses the button, the gallery will be opened with all the images so the user can select one; the selected image will be shown in the ImageView, it will also be passed to the model, and the result will be shown in the TextView. Now let's build that UI so you get a better idea of the flow.

I'm going to drag an ImageView and place it here, selecting a placeholder image for now. Next we set the constraints for the ImageView: I'm going to set all its constraints and change the width and height, with the width set to match_constraint and the height to 300dp; I guess that's enough. After that, we need a TextView in which the predicted labels will be shown, so drag it and place it just below the ImageView. Set its constraints, including the left and right ones, and set the text size; let's make it 24. Also remove the default text inside this TextView. The third thing we need is a button, so I'm going to place a button here, constrain it to the bottom, and change its text to "Choose". So now, when the user presses this Choose button, the gallery will open so the user can choose an image; the chosen image will be displayed here, it will also be passed to our model, and the label predicted by the model will be shown in this TextView. Hopefully you now have a better idea of the flow of our application.

Now let's initialize the UI elements we placed in the activity_main.xml file. First we declare the views: an ImageView named imageView, a TextView for the results named resultTv, and a Button named chooseBtn; then we add the imports for all three. Now let's initialize them: our imageView equals findViewById(R.id.imageView), and similarly we initialize the TextView and the button. With the UI elements initialized, let's set a listener on our button for when the user presses it: chooseBtn.setOnClickListener(new OnClickListener()), and here we implement the onClick method. Inside onClick, we want to open the gallery with all the images, so we create an intent: Intent i = new Intent(). After that, we set the type of the intent to "image/*", because we want to see all the images present in the gallery, and set the action to ACTION_GET_CONTENT, because we are requesting some content. Now we need to launch this intent for a result: startActivityForResult, passing Intent.createChooser with our intent and the text shown in the chooser when the gallery opens, like "Select image", and then a constant value which will be our request code; let's say 121 for now, though it's better to create a public final variable for that purpose. So when the user presses this button, an intent will be created, and this intent will open the gallery with all the images so the user can select one; the "image/*" type specifies that we only want to see images. When the user chooses an image, the onActivityResult method will be called, so now we need to create that method: just below the onCreate method, press Alt+Insert, select Override Methods, and search for onActivityResult. There it is; press Enter, and now our onActivityResult method is added.
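Putting that together, here is a minimal sketch of the activity so far. The view IDs (imageView, resultTv, chooseBtn) and the class name are assumptions based on the names used in this lesson:

```java
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import androidx.appcompat.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity {

    // Request code used to match the gallery result in onActivityResult.
    public static final int PICK_IMAGE_REQUEST = 121;

    private ImageView imageView;
    private TextView resultTv;
    private Button chooseBtn;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        imageView = findViewById(R.id.imageView);
        resultTv = findViewById(R.id.resultTv);
        chooseBtn = findViewById(R.id.chooseBtn);

        chooseBtn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // Open the gallery so the user can pick an image.
                Intent i = new Intent();
                i.setType("image/*");
                i.setAction(Intent.ACTION_GET_CONTENT);
                startActivityForResult(
                        Intent.createChooser(i, "Select image"), PICK_IMAGE_REQUEST);
            }
        });
    }
}
```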
Here we first need to check that the request code is 121, the code we passed, so that this onActivityResult logic runs only for this particular request. Then we write the code for it. We first need to show the image inside our ImageView, so imageView.setImageURI, using this data object: data.getData() returns the chosen image's URI. The data object actually contains the URI of the image, and its getData() method returns it. After that, we need to pass this image to our model and get the predictions, so now we're going to do that.

Let's move back to the documentation page. Here you can see that we added those Gradle options, created the assets folder, and placed all three files. Now we follow the next instruction, which is creating a local model object, because we are doing on-device machine learning. I'm going to copy this portion of code, paste it inside my onCreate method just below the click listener, and add the imports. Here you can see that we're creating an AutoMLImageLabelerLocalModel, and we specify the path of the manifest.json file; since our manifest.json is inside the model_files folder, I specify it as model_files/manifest.json. After creating our local model, which is what loads the model, we need to create an image labeler object, so we follow the next instruction: "Create an image labeler from your model". Just click on it and it will take you to that portion of code. Here you can see that we're creating an ImageLabeler using ImageLabeling.getClient(), to which we pass an object of type AutoMLImageLabelerOptions; if you want to customize the labeler, you do it through these options. We create that object just above the getClient call: AutoMLImageLabelerOptions, then the variable name, then new AutoMLImageLabelerOptions.Builder(), where we specify the local model which we loaded and then the confidence threshold, meaning a label is only included if its confidence score is above the threshold. Here you can specify 0.5, which means that if the confidence score for a particular label is less than 50%, we are not going to include it. Copy this portion of code, paste it inside our onCreate method just below the local model object, and add the imports; the important ones are ImageLabeler and ImageLabeling. Now, we need to pass the image to this labeler from onActivityResult, but we don't have access to it there, so we need to declare it globally: cut the labeler declaration and move it above onCreate so it becomes a field; now it is accessible inside our onActivityResult. Next we need to create an instance of type InputImage, which will be passed to this labeler. Let's move back, and here you can see the "Prepare the input image" section, so just click on it.
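Before moving on, here is a sketch of the labeler setup we just walked through, following the documentation snippets; it assumes the three files sit in assets/model_files/ as in this lesson, and setUpLabeler is a hypothetical helper you would call from onCreate:

```java
import com.google.mlkit.vision.label.ImageLabeler;
import com.google.mlkit.vision.label.ImageLabeling;
import com.google.mlkit.vision.label.automl.AutoMLImageLabelerLocalModel;
import com.google.mlkit.vision.label.automl.AutoMLImageLabelerOptions;

// Field on the activity, so onActivityResult can reach the labeler.
private ImageLabeler labeler;

// Hypothetical helper; call it from onCreate() after the click listener is set.
private void setUpLabeler() {
    // Point the local model at the manifest bundled in assets/model_files/.
    AutoMLImageLabelerLocalModel localModel =
            new AutoMLImageLabelerLocalModel.Builder()
                    .setAssetFilePath("model_files/manifest.json")
                    .build();

    AutoMLImageLabelerOptions options =
            new AutoMLImageLabelerOptions.Builder(localModel)
                    .setConfidenceThreshold(0.5f) // skip labels scored below 50%
                    .build();

    labeler = ImageLabeling.getClient(options);
}
```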
Here you can see that you can create an InputImage in different ways: for example from a Bitmap, from a ByteBuffer, or using a file URI. Since we have the URI of the image, we're going to use the "Using a file URI" method. Copy this portion of code, which creates the InputImage from the URI of the image, paste it inside our onActivityResult just below the setImageURI call, and add the imports. Here you need to specify the context, so getApplicationContext(), and then the image URI, which is data.getData(), the URI of the image you selected. Now that we have our InputImage, we need to pass it to our labeler; the code for that is also written there, in the "Run the image labeler" section. Click on it, copy that code, and I'll explain it after pasting it below the InputImage. Now add the imports; there are quite a few: one for the ImageLabel list, then one each for the OnSuccessListener and OnFailureListener, and finally one for NonNull. Here you can see that we are passing this input image to our labeler, labeler.process(), and adding two callbacks: addOnSuccessListener and addOnFailureListener. If our model successfully predicts the labels for the image, the success listener is called, the onSuccess method is executed, and we get the list of predicted labels. But in case of any error, if our model is not able to process the input image, the failure listener is called, the onFailure method is executed, and we get the exception there.

So when the user selects an image from the gallery, it will be passed to this labeler and we will get the list of predicted labels, and we need to show them inside our resultTv. Now we're going to do that: just move back to the documentation, scroll down, and you will find a for loop; copy it, as it does exactly what we need, and paste it inside our onSuccess method. Here you can see that we are iterating over the list of labels and getting each label's name, confidence score, and index. To show them inside our resultTv: resultTv.setText(), where you first specify the name, which is the text variable, then the confidence score, which is the confidence variable, and then add a newline character, because there may be multiple predicted labels. Here you need to change setText to append, since we want all the predicted labels to be shown with their confidence. And in onActivityResult you should reset this resultTv so that for the next image the labels of the previous image are removed: set resultTv's text to the empty string. Now our application's code is almost complete, so we can run the application and test it; here you just need to add a missing semicolon first.
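And here is a sketch of the finished onActivityResult combining those steps; lambdas are used for brevity where the documentation uses anonymous listener classes:

```java
import android.net.Uri;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.label.ImageLabel;
import java.io.IOException;

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == PICK_IMAGE_REQUEST && resultCode == RESULT_OK && data != null) {
        Uri imageUri = data.getData();
        imageView.setImageURI(imageUri); // show the chosen image
        resultTv.setText("");            // clear the previous image's labels

        InputImage image;
        try {
            image = InputImage.fromFilePath(getApplicationContext(), imageUri);
        } catch (IOException e) {
            e.printStackTrace();
            return;
        }

        labeler.process(image)
                .addOnSuccessListener(labels -> {
                    for (ImageLabel label : labels) {
                        // Append each predicted label with its confidence score.
                        resultTv.append(label.getText() + " " + label.getConfidence() + "\n");
                    }
                })
                .addOnFailureListener(Throwable::printStackTrace);
    }
}
```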
Let me recap the whole process. First we prepared our dataset, then we uploaded it to the Firebase console and trained our model. Then we evaluated our model and downloaded the model files. After that we created a new Android Studio project, added the model files inside the assets folder, added the dependencies inside the build.gradle file, and specified the aapt options. Then we created the UI of our application, which is quite simple, and wrote the code for the UI elements: first we initialized our UI elements, then we set the listener for our button so that when the user presses it, the gallery opens and the user can choose an image; after the user chooses an image from the gallery, onActivityResult is called, and we show the selected image inside the ImageView. Then we wrote the code for our model's predictions: first we created the local model object, then an object of type ImageLabeler, which is used for prediction, then an InputImage object from the URI of the selected image. We passed this InputImage to our labeler, got the predictions, and showed them inside our resultTv.