No-Code Machine Learning with IBM Watson AutoAI

Nigma Kuzenbayev, Consultant, Educator, Scientist


15 Lessons (50m)
  • 1. Introduction (1:38)
  • 2. Introduction to IBM Cloud (4:59)
  • 3. Introduction to Artificial Intelligence (4:05)
  • 4. Introduction to Machine Learning (5:40)
  • 5. What is AutoAI/AutoML? (2:10)
  • 6. IBM AutoAI (3:19)
  • 7. Create an IBM Cloud Account (2:19)
  • 8. Classification Problems (2:11)
  • 9. Classification Metrics (4:50)
  • 10. Classification Problem & Dataset (1:10)
  • 11. Classification using AutoAI [Hands-on] (8:16)
  • 12. Regression Analysis (2:15)
  • 13. Regression Metrics (1:37)
  • 14. Regression Problem & Dataset (0:33)
  • 15. Regression using AutoAI [Hands-on] (4:50)


About This Class

Learn to Build and Deploy Machine Learning Models Without Any Coding (Zero-Code Approach to AI & ML)

Artificial Intelligence (AI) and Machine Learning (ML) are two very hot topics nowadays, and experts claim they are going to revolutionize the world. This course is designed for those who want to take a shortcut to these technologies. AutoAI and AutoML are new tools that provide methods and processes to make artificial intelligence and machine learning available to non-experts; they aim to eliminate the need for skilled data scientists to build models.

Meet Your Teacher

Nigma Kuzenbayev

Consultant, Educator, Scientist

Hello, I'm Nigma.


Transcripts

1. Introduction: Although AI and machine learning are much more accessible nowadays, it is still hard to become skilled enough to build decent AI and machine learning models. AutoAI and AutoML are new tools that provide methods and procedures to make artificial intelligence and machine learning available for non-experts. These tools allow you to provide the training data as input and receive an optimized model as output. It's as simple as that. In this series of video lectures, I'm going to introduce you to AutoAI by IBM Watson, which is one of the best existing solutions for building machine learning models without any coding. My name is Nigma, and I'll be your instructor for this course. This course is designed for those who want to take a shortcut to artificial intelligence and machine learning with no coding at all. Let's take a look at the content of the course. First, we'll get acquainted with cloud computing, IBM Cloud services, artificial intelligence, and machine learning. Then we'll figure out what AutoAI is and how it works. Finally, you'll build and deploy your first machine learning models using real-world datasets. So let's begin. I hope you'll enjoy this course.

2. Introduction to IBM Cloud: In this section, I'll give you a brief introduction to IBM Cloud. First, let's start with cloud computing. Cloud computing is the on-demand delivery of IT resources over the internet. With pay-as-you-go pricing, you typically pay only for the cloud services you use, which lowers your operating costs. Cloud computing is a big shift from the traditional way businesses think about IT resources. Businesses are turning to cloud computing services because cloud computing eliminates the capital expense of buying hardware and software and setting up on-site datacenters. Secondly, cloud computing services run on a network of datacenters that is systematically upgraded to the latest generation of fast and efficient computing hardware. In addition, many cloud providers offer a set of policies, technologies, and controls that protect data from potential threats. Nowadays, there are many companies offering cloud services, and most of them are very reliable. Among the top cloud service providers are Amazon Web Services, IBM Cloud, Microsoft Azure, Google Cloud Platform, and Adobe Creative Cloud. In addition to storage, networking, and processing services, these providers offer other solutions, such as databases, artificial intelligence, data analytics, and serverless infrastructures. Cloud computing is also categorized into three models of service delivery: infrastructure as a service, platform as a service, and software as a service. Infrastructure as a Service (IaaS) contains the basic building blocks for the cloud; it typically provides access to networking features, computers, and data storage space. Platform as a Service (PaaS) removes the need for you to manage the underlying infrastructure and allows you to focus on the deployment and management of your applications. Another concept that comes up together with PaaS is the idea of serverless, where the cloud service provider manages a scalable infrastructure that allocates resources according to demand. Software as a Service (SaaS) provides you with a complete product that is run and managed by the service provider. The SaaS model is the most recognizable to the general public, and a lot of users interact with SaaS applications without knowing it. Typical examples of SaaS applications are Gmail, Outlook, Google Drive, and Dropbox.
IBM Cloud offers both platform as a service and infrastructure as a service capabilities for building, running, and managing applications. The IBM Cloud platform offers a wide range of services for securing, storing, serving, and analyzing data. With IBM Cloud, developers can focus on building excellent user experiences with flexible compute options, a choice of DevOps tooling, and a powerful set of IBM and third-party APIs and services. The IBM Cloud platform is composed of multiple components that work together: a robust console that serves as a front end for creating, viewing, and managing your cloud resources; an identity and access management component that securely authenticates users and controls access to resources across IBM Cloud; a catalog that consists of hundreds of IBM Cloud offerings; a search and tagging mechanism for filtering and identifying your resources; and finally, an account and billing management system that provides exact usage for pricing plans and secure credit card fraud protection. Watson is IBM's suite of enterprise-ready AI services, applications, and tooling, and AutoAI is one of the tools in Watson Studio.

3. Introduction to Artificial Intelligence: Artificial intelligence and machine learning are two very hot topics nowadays. Experts claim that these technologies are going to revolutionize the world. Artificial intelligence studies ways to build intelligent programs and smart machines capable of performing tasks that typically require human intelligence. Artificial intelligence has a broad range of applications, from chatbots to predictive analytics, from recognition systems to autonomous vehicles. AI is generally divided into two categories: weak AI, which is focused on one particular problem, and strong AI, which focuses on building intelligence that can handle any task or problem in any domain. Strong AI is focused on creating intelligent machines that can successfully perform any intellectual task that a human being can, and this comes down to three aspects: first, the ability to generalize knowledge from one domain to another, taking knowledge from one area and applying it somewhere else; secondly, the ability to make plans for the future based on knowledge and experience; and lastly, the ability to adapt to the environment as changes occur. On the other hand, weak AI is good at performing a particular task, but it will not pass for a human in any field outside of its defined capacities. Examples of weak AI are technologies such as image and speech recognition or AI chatbots. One of the most famous examples of weak AI is Deep Blue, a chess-playing computer developed by IBM. In 1997, Deep Blue made history as the first computer to beat the world champion, Garry Kasparov, in a six-game match under standard time controls. Garry Kasparov is a Russian chess grandmaster whom many consider to be the greatest chess player of all time; when Deep Blue took the match by winning the final game, Kasparov refused to believe it. Deep Blue used a brute-force, or exhaustive search, approach: it was capable of examining 200 million moves per second, or 50 billion positions in the three minutes allocated for a single move in a chess game. Another famous example of weak AI is AlphaGo, a computer program that plays the board game Go. It was developed by DeepMind Technologies, which was later acquired by Google. Go is known as the most challenging classical game for artificial intelligence because of its complexity.
It is practically impossible to calculate all the possible moves in this game. AlphaGo is the first computer program to defeat a professional human Go player. AlphaGo's algorithm used a combination of machine learning and tree search techniques, combined with extensive training from both human and computer play. Artificial intelligence has become a crucial part of daily human life. Every time you do a Google search, book a trip online, receive a product recommendation from Amazon, or open your Facebook news feed, AI is lurking in the background. AI is at the core of the Fourth Industrial Revolution, which will fundamentally alter the way we live and work. It will affect all of us and deeply influence the future of work and your career, no matter the industry you are in.

4. Introduction to Machine Learning: Machine learning is a subset of AI. In other words, all machine learning is AI, but not all AI is machine learning. Machine learning gives systems the ability to automatically learn and improve from experience without being explicitly programmed. Deep learning is a subset of machine learning in which artificial neural networks adapt and learn from vast amounts of data; deep learning will be discussed later in this video as well. Machine learning uses a variety of algorithms that iteratively learn from data to describe the data and predict outcomes. A machine learning model is the output generated when you train your machine learning algorithm with input data; after training, when you provide the model with an input, you'll be given a prediction. Machine learning is now essential for creating analytics models. Many of the widely used machine learning algorithms are rooted in classical statistical analysis. Data scientists combine domain knowledge with expertise in math, statistics, and computer science, using all these disciplines in collaboration. Regardless of the combination of capabilities and technology used to predict outcomes, having an understanding of the business problem and the business goals is essential. You cannot expect to get good results by focusing on the statistics alone without considering the business side. Basically, you can consider machine learning as a tool to solve your business problem. Depending on the nature of the business problem being addressed, there are different categories of machine learning, such as supervised learning, unsupervised learning, reinforcement learning, and neural networks and deep learning. Most practical machine learning uses supervised learning. It is called supervised learning because the process of an algorithm learning from the training dataset can be thought of as a teacher supervising the learning process: we know the correct answers, while the algorithm iteratively makes predictions on the training data and is corrected by the teacher. The learning process stops when the algorithm achieves an acceptable level of performance. In general, supervised learning occurs when a system is given a labeled dataset with input and output variables, and the model needs to learn how they are related. The goal is to produce an accurate model that can predict the output when new input is given. In this course, we will mainly focus on supervised learning because it is the most used form of machine learning in practice and has proven to be an excellent tool in many fields. Supervised learning problems can be grouped into regression and classification problems: when the label is continuous, it is regression; when the label comes from a finite set of values, it is classification.
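Although this course is strictly no-code, a few lines of Python can make the classification/regression split concrete. The sketch below uses scikit-learn, a popular machine learning library, with entirely made-up data; it is an illustration of the concept, not part of the course workflow:

```python
# Illustrative sketch only: the same labeled inputs, treated as a
# classification problem vs. a regression problem. Data is made up.
from sklearn.linear_model import LogisticRegression, LinearRegression

# Labeled training data: inputs X and known answers y
X = [[1.0, 20.0], [2.0, 35.0], [3.0, 50.0], [4.0, 65.0]]

# Classification: labels come from a finite set (here 0 or 1)
y_class = [0, 0, 1, 1]
clf = LogisticRegression().fit(X, y_class)
print(clf.predict([[2.5, 40.0]]))   # -> a class label, e.g. [0] or [1]

# Regression: labels are continuous quantities
y_reg = [10.5, 21.0, 31.5, 42.0]
reg = LinearRegression().fit(X, y_reg)
print(reg.predict([[2.5, 40.0]]))   # -> a continuous value
```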
Unsupervised learning is best suited when the problem requires a massive amount of data that is unlabeled. Understanding the meaning behind such data requires algorithms that can classify the data based on the patterns or clusters they find. Unsupervised learning therefore conducts an iterative process of analyzing data without human intervention. Reinforcement learning is a behavioral learning model: the agent learns to achieve a goal in an uncertain, potentially complex environment. In reinforcement learning, an artificial intelligence faces a game-like situation. The computer employs trial and error to come up with a solution to the problem. The agent gets either rewards or penalties for the actions it performs, and its goal is to maximize the total reward. The most common applications of reinforcement learning are robotics and AI bots in video games. Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the human brain, called artificial neural networks. Deep learning is especially useful when you are trying to learn patterns from unstructured data. Complex neural networks are designed to emulate how the human brain works, so computers can be trained to deal with abstractions and problems that are poorly defined. A neural network consists of three or more layers: an input layer, one or many hidden layers, and an output layer. Data is ingested through the input layer, then modified in the hidden layers and the output layer based on the weights applied to the nodes. The term deep learning is used when there are multiple hidden layers within a neural network. Using an iterative approach, a neural network continuously adjusts its weights and makes inferences until a specific stopping point is reached. Neural networks and deep learning are often used in image recognition, speech recognition, and computer vision applications.

5. What is AutoAI/AutoML?: There are tons of courses and well-written textbooks available online that give a good introduction to artificial intelligence and machine learning at a fraction of the cost. There are countless frameworks that make it easier to build and deploy models, and it is now possible to use an existing model with a basic understanding of the algorithm and a few lines of code. However, artificial intelligence and machine learning are still difficult when it comes to making existing models work well for your new application. The good news is that artificial intelligence and machine learning have become even more accessible. AutoAI and AutoML aim to eliminate the need for skilled data scientists to build models. Good data scientists combine domain knowledge with expertise in math, statistics, and computer science. As of now, it is practically impossible to replace domain knowledge with automated systems, but it is now possible to automate many of the complicated and time-consuming tasks in machine learning, like hyperparameter optimization, model selection, feature selection, et cetera. Basically, everything except the domain knowledge can be automated. AutoAI and AutoML are new tools that allow you to provide the training data as input and receive an optimized model as output. Existing AutoAI and AutoML solutions like IBM Watson AutoAI allow everyone to build and deploy optimized models without the need to be a pro at data science.
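To give a feel for one of the tedious tasks these tools automate, here is a hedged sketch of manual hyperparameter optimization in scikit-learn. AutoAI performs a far more sophisticated search internally; the grid of candidate values below is arbitrary and this is a generic illustration, not IBM's implementation:

```python
# Illustrative only: brute-force hyperparameter search, the kind of
# repetitive work that AutoML tools take off your hands.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)  # toy data

# Candidate hyperparameter values to try (chosen arbitrarily here)
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="roc_auc")
search.fit(X, y)
print(search.best_params_, search.best_score_)  # best settings found
```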
6. IBM AutoAI: AutoAI is a graphical tool in IBM Watson Studio that automatically analyzes your data and generates machine learning models for your problem. Currently, AutoAI addresses problems related to classification and regression; these types of problems are at the core of many data science initiatives. Using AutoAI, you can build and deploy a machine learning model with no coding. First, you provide raw data in a CSV file. Then AutoAI helps you prepare your data for machine learning, automatically selects the best algorithms, and performs hyperparameter optimization and feature engineering. As an output, you get an optimized model that you can save and deploy. Let's go through the steps in more detail. Most datasets contain different data formats and missing values, but standard machine learning algorithms work with numbers and no missing values. AutoAI helps you analyze, clean, and prepare your raw data. It automatically detects and categorizes features based on data type, such as categorical or numerical. Depending on the categorization, AutoAI conducts missing value imputation, feature encoding, and feature scaling for your data (a rough code sketch of these steps follows after the next lecture). Then AutoAI automatically selects the best algorithms for your problem. Here you can see the list of default algorithms used for automatic model selection for classification and regression problems. Feature engineering attempts to transform the input data into the combination of features that best represents the problem, in order to achieve the most accurate prediction. A feature is an individual measurable property or characteristic of a phenomenon being observed. Part of the art of choosing features is to pick a minimum set of independent variables that explain the problem. AutoAI does automatic feature engineering by using reinforcement learning to maximize model accuracy. All machine learning models have parameters, meaning the weights for each variable in the model; in addition, algorithms have hyperparameters, settings that are chosen before training rather than learned from the data. The hyperparameter optimization step refines the best-performing model pipelines. These model pipelines are displayed on a leaderboard, ranked according to your problem's optimization objective. You can start using AutoAI for free with the Lite plan of IBM Watson Studio on the cloud, which includes 50 capacity unit hours per month. You can check all pricing plans for IBM Watson Studio by following this link.

7. Create an IBM Cloud Account: In this video, I'm going to show you how to create an IBM Cloud account and activate Watson services. First, you need to follow this link: www.ibm.com/cloud. Then we press Sign up or Log in. Here we type in our email address and then choose a password. After verifying your email and providing personal information, you'll be able to log in. Since I already have an account, I'll just log in using my email address: Continue, Log in. After logging in, you'll see your IBM Cloud dashboard. In order to activate Watson Studio, we'll go to the catalog and search for Watson Studio. Here it is. Now we can select a region and a pricing plan. Let's choose London and the Lite plan, which is offered for free. You can change the service name, but I'll leave it as it is. Create. And that's it.
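Before moving on, here is the rough code analogue promised above. Lecture 6 described the preparation steps AutoAI automates (missing value imputation, feature encoding, feature scaling, then a model); a hand-built scikit-learn pipeline covering the same ground might look like the sketch below. The column names are hypothetical and this is for intuition only, not AutoAI's actual internals:

```python
# Generic sketch of the prep steps AutoAI automates; not IBM's code.
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression

numeric_cols = ["glucose", "bmi"]   # hypothetical column names
categorical_cols = ["smoker"]

prep = ColumnTransformer([
    # Numerical features: fill missing values, then scale
    ("num", Pipeline([("impute", SimpleImputer(strategy="mean")),
                      ("scale", StandardScaler())]), numeric_cols),
    # Categorical features: fill missing values, then one-hot encode
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]),
     categorical_cols),
])

model = Pipeline([("prep", prep), ("clf", LogisticRegression())])
# model.fit(X_train, y_train) would then train the whole chain end to end.
```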
8. Classification Problems: Classification and regression are two types of supervised machine learning. Basically, classification is about predicting a class; classes are sometimes called targets, labels, or categories. Regression is about predicting a quantity. In machine learning and statistics, classification is the problem of identifying to which category, or class, a new observation belongs. There are two types of classification based on the number of predicted classes: binary classification refers to those classification tasks that have two class labels, and multiclass classification refers to those that have more than two class labels. Typically, binary classification involves one class that is the normal state and another class that is an abnormal state. The class with the normal state is assigned the class label 0, and the class with the abnormal state is assigned the class label 1. Examples of binary classification problems include spam filtering (is this email spam or not?), fraud detection (is this transaction legitimate or not?), and medical testing to determine whether a patient has a certain disease or not. Unlike binary classification, multiclass classification does not have the notion of normal and abnormal states; instead, examples are classified as belonging to one of multiple classes. In this section of the course, I'll show you how to build a binary classification model using AutoAI. But before we build one, we need to understand how to evaluate the performance of the model.

9. Classification Metrics: Evaluating machine learning models is essential for any project. Evaluation metrics are used to measure the performance of a machine learning model, and using them is critical to ensure that your model is operating correctly and optimally. There are many different types of metrics available to test a classification model. It is very important to use multiple metrics to evaluate your model, because a model may perform well on one evaluation metric but poorly on another. In AutoAI, each model pipeline is scored on a variety of metrics. The default ranking metric for binary classification models is the area under the ROC curve, and for multiclass classification models it is accuracy. Let's look at the popular metrics used in classification problems. The confusion matrix is one of the most intuitive and easiest tools for assessing the correctness and accuracy of a model. The confusion matrix itself is not a performance metric as such, but almost all performance metrics are based on the confusion matrix and the numbers inside it. In the case of binary classification, the confusion matrix is a table with two rows and two columns that reports the number of false positives, false negatives, true positives, and true negatives. True positives are the cases when the actual class of the data point was 1 and the predicted class is also 1. False positives are the cases when the actual class of the data point was 0 and the predicted class is 1. False negatives are the cases when the actual class of the data point was 1 and the predicted class is 0. True negatives are the cases when the actual class of the data point was 0 and the predicted class is also 0. Accuracy in classification problems is the number of correct predictions made by the model over all predictions made. Accuracy seems like it could be the best method; however, accuracy only works well for classification problems that are well balanced. Other commonly used metrics are precision and recall, two performance metrics that are often used in conjunction. Precision is the number of items correctly identified as positive out of the total items identified as positive. Recall is the number of items correctly identified as positive out of the total true positives. So basically, if we want to focus more on minimizing false negatives, we would want our recall to be as close as possible to 100%; and if we want to focus on minimizing false positives, then we would want our precision to be close to 100%. The F1 score is a weighted average of precision and recall; therefore, this score takes both false positives and false negatives into account. Simply stated, the F1 score maintains a balance between precision and recall.
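These definitions are easy to check numerically. The following sketch, with made-up actual and predicted labels for ten data points, computes the confusion matrix and the metrics discussed above using scikit-learn; it is purely illustrative:

```python
# Made-up example: actual vs. predicted labels for 10 data points
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# Rows = actual class, columns = predicted class
print(confusion_matrix(y_true, y_pred))
# [[4 1]   -> 4 true negatives, 1 false positive
#  [1 4]]  -> 1 false negative, 4 true positives

print(accuracy_score(y_true, y_pred))   # (4 + 4) / 10 = 0.8
print(precision_score(y_true, y_pred))  # 4 / (4 + 1) = 0.8
print(recall_score(y_true, y_pred))     # 4 / (4 + 1) = 0.8
print(f1_score(y_true, y_pred))         # harmonic mean of the two = 0.8
```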
Area under the ROC curve indicates how well the probabilities for the positive class are separated from those for the negative class. AUC is the area under the ROC curve, and the ROC curve is a graph showing the performance of a classification model at all classification thresholds. You should understand that our forecast is never exactly 1 or 0; instead, we predict a probability. Typically, the classification threshold is 0.5: if the predicted probability is higher than 0.5, our forecast is 1, and it is 0 if the probability is less than 0.5. But a different classification threshold could perform better. Area under the ROC curve provides an aggregate measure of performance across all possible classification thresholds; it measures the quality of the model's predictions irrespective of which classification threshold is chosen, unlike the F1 score or accuracy, which depend on the choice of threshold.
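To make the threshold idea concrete, here is a short sketch with invented probabilities: the same predicted probabilities give different accuracies at different thresholds, while the AUC does not change, because it scores the ranking of the probabilities rather than any one cut-off:

```python
# Illustrative only: thresholded accuracy varies, AUC does not
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = [0, 0, 1, 1, 0, 1]
probs = [0.2, 0.4, 0.6, 0.9, 0.55, 0.3]  # predicted P(class = 1)

for threshold in (0.3, 0.5, 0.7):
    y_pred = [1 if p > threshold else 0 for p in probs]
    print(threshold, accuracy_score(y_true, y_pred))  # changes per cut-off

# AUC is threshold-free: computed from the probabilities directly
print(roc_auc_score(y_true, probs))
```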
10. Classification Problem & Dataset: In the next video, we'll use a dataset from the National Institute of Diabetes and Digestive and Kidney Diseases. It contains information on 768 females, of which 268 were diagnosed with diabetes. All patients were females at least 21 years old of Pima Indian heritage. The objective is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. Our model should be able to predict whether or not a patient has diabetes; hence, our dependent variable is diabetes, where 1 represents the presence of diabetes and 0 represents the absence. The information available includes eight variables, such as number of pregnancies, glucose, insulin, age, et cetera. A more detailed description of the variables is listed in the table.

11. Classification using AutoAI [Hands-on]: In this tutorial, we are going to build and deploy a binary classification model using Watson Studio. Last time we created an IBM Cloud Lite account and activated Watson Studio, so now let's get started. Click "Create a project," then "Create an empty project." Before we can create the project, we need to add an object storage instance and then return to this page: click Add, choose the Lite plan, Create, Confirm. Now we need to refresh this page. Here we type in our project name, "AutoAI binary classification," and click Create. Now, to start building our model, click "Add to project" at the top and select "AutoAI experiment." Give it a name: "Binary classification." We also need to associate a machine learning service instance: let's click here, again choose the Lite plan, confirm, and click Reload. Now we can create it. Next, you need to upload the CSV file from your computer: click Browse and choose the file. What do we want to predict? We need to predict the diabetes column. AutoAI automatically identified the prediction type (binary classification) and the positive class (1), and it offers to use the accuracy metric. We can go to the experiment settings. Here we can choose the split ratio; let's choose 90%. Under "Select columns to include," we select all columns. Then click Prediction: binary classification, positive class 1, and for the optimized metric, let's choose area under the ROC curve. We can also choose how many algorithms to include; we select all the algorithms, and for the number of algorithms to use, we choose three. Save the settings and run the experiment. The experiment begins, and while it is generating the models, there are two different views through which you can visualize the progress of the pipelines being created: the progress map and the relationship map. So this is the progress map, and that's the relationship map. Finally, the AutoAI execution is completed, and on the progress map you can see the whole process: read dataset, split holdout data, read training data, preprocessing, and model selection. Here we can see that AutoAI has chosen logistic regression, gradient boosting classifier, and XGB classifier as the top-performing algorithms for this use case. You can also see all twelve pipelines generated, and you can compare them here based on different metrics. Here you can see the pipeline leaderboard; pipeline number three is our top-performing pipeline. Here you can see the model evaluation: the ROC curve, along with measures such as accuracy, area under the ROC curve, precision, recall, F1 measure, average precision, and log loss. We can also see the confusion matrix. Now let's save this model pipeline: Save as model, Save. The model is saved successfully. Now let's go back and deploy our model. Here we can see our model; let's deploy it: Deployments, Add deployment, type a deployment name, Save. Our deployment is ready; let's open it and test the model. We can test our model from this interface: we can either provide input in JSON format or enter input details into the fields given here. For example, let's say number of pregnancies 2, glucose level 80, pressure 60, triceps 30, insulin 20, BMI 25, pedigree 0.2, and age, let's say, 30. Predict. We get a prediction of 0, so our model suggests that this person doesn't have diabetes. You can also see the probabilities.

12. Regression Analysis: Regression analysis is a form of predictive modelling technique which investigates the relationship between a dependent variable and independent variables. This technique is used for forecasting, time series modelling, and finding the causal effect relationship between variables. Machine learning models for regression problems predict a numeric value, for example: what will the temperature be in London tomorrow, or what price will this house sell for? One of the easiest and most commonly used regression techniques is simple linear regression, where we predict the outcome of a dependent variable based on one independent variable, and we assume that the relationship between the variables is linear. The objective is to find the line that most closely fits the data. This task can be accomplished using the least squares method, the most common method for fitting a regression line: it calculates the best-fitting line for the observed data by minimizing the sum of the squares of the vertical deviations from each data point to the line. Training a regression model is the process of iteratively improving your prediction equation by looping through the dataset multiple times. Training is complete when we reach an acceptable error threshold or when subsequent training iterations fail to increase accuracy.
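As a hedged illustration of least squares fitting (using numpy rather than anything AutoAI-specific), here is a line y = slope · x + intercept fitted to a few made-up points:

```python
# Illustrative only: fit y = slope * x + intercept by least squares
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # independent variable
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])   # dependent variable

# np.polyfit minimizes the sum of squared vertical deviations
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)                    # roughly 1.95 and 0.15

# Predict for a new x value
print(slope * 6.0 + intercept)
```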
Usually, practitioners know only the several types of regression that are commonly used in the real world. However, there are numerous types of regression models you can use, each with its own advantages and disadvantages; therefore, choosing the optimal regression model can be difficult. AutoAI automatically selects the best models for your problem, using various metrics to evaluate the results. In the next video, we will discuss the most widely used regression metrics.

13. Regression Metrics: Let's look at the metrics used in regression problems, such as root mean squared error (RMSE) and the coefficient of determination (R-squared). Root mean squared error is the most widely used metric for regression tasks. It is the square root of the average squared difference between the target value and the value predicted by the model; the smaller it is, the better your model is performing. It is preferred in some cases because the errors are squared before averaging, which poses a high penalty on large errors. This implies that RMSE is useful when large errors are undesired. The coefficient of determination, or R-squared, is another metric used for evaluating the performance of a regression model. R-squared tells you how well your model fits your data. The equation for R-squared can be described as explained variance divided by total variance, where the explained variance is the variance explained by the model. R-squared values can range from 0 to 1: an R-squared value of 0 means that the model does not explain any of the variance in y, and an R-squared value of 1 means that the model perfectly explains all the variance in y. So, in general, the closer R-squared is to 1, the better the model describes the input data.
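A short sketch of both metrics on made-up numbers helps pin down the definitions; again, this is illustrative only:

```python
# Illustrative only: RMSE and R-squared on made-up values
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.5, 5.5, 6.5, 9.5])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(rmse)                      # sqrt(mean of squared errors) = 0.5

print(r2_score(y_true, y_pred))  # 1 - residual variance / total variance
```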
14. Regression Problem & Dataset: In the next lecture, we're going to build a regression model using AutoAI that predicts SAT score based on average high school GPA. The SAT is a test widely used for college admission in the United States. The dataset includes 100 observations. Our dependent variable is SAT score, and the only independent variable is high school GPA.

15. Regression using AutoAI [Hands-on]: In this tutorial, we are going to build and deploy a regression model using AutoAI. Last time we created a project for binary classification, and today we'll create a new project for regression. All the steps are basically the same. Let's go to my projects and create a new project; create an empty project and give it a name: "AutoAI regression." Now, add an AutoAI experiment to the project and name it "Regression." We also need to associate a machine learning service instance by clicking here; we choose an existing service instance, select it, and reload. Now we need to upload the CSV file, the GPA dataset. What do we want to predict? We need to predict SAT score. This time we will use the default settings: the prediction type is regression, and the optimized metric is RMSE. Run the experiment. The experiment is completed; we have eight pipelines generated, and we can see the relationship map and the progress map. This time AutoAI selected two top-performing algorithms: linear regression and gradient boosting regressor. We have eight pipelines, so let's compare them and look at the leaderboard. We can see that the best pipeline is pipeline number three; let's choose it. Here we have the model evaluation measures: RMSE, R-squared, explained variance, and so on. Let's save this model: Save. Now we can deploy it. In the AutoAI regression project, we choose the model: Deployments, Add deployment, type a deployment name, Save. The deployment is initializing, and now it's ready. Test. Here we can type in a high school GPA, for example 3.5, and predict; that's our prediction. Now let's try another GPA, for example 3.7. Predict. That's our prediction, and with a GPA of 4.0 the predicted SAT score increases again. We can see that the higher the GPA, the higher the predicted SAT score.
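If you later want to call a deployed model from code instead of the test interface, Watson Machine Learning deployments expose a REST scoring endpoint. The exact URL, authentication flow, and payload shape depend on your IBM Cloud setup and service version, so treat the following as a hedged sketch: the endpoint path, the "GPA" field name, and the placeholders are assumptions to replace with the values shown on your own deployment's implementation page:

```python
# Hedged sketch: scoring a deployed AutoAI model over REST.
# The URL, token exchange, and payload below are assumptions; copy
# the real values from your deployment's implementation snippets.
import requests

API_KEY = "YOUR_IBM_CLOUD_API_KEY"  # placeholder
SCORING_URL = ("https://<region>.ml.cloud.ibm.com/ml/v4/deployments/"
               "<deployment_id>/predictions?version=2020-09-01")  # placeholder

# Exchange the API key for a bearer token via IBM Cloud IAM
token = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={"apikey": API_KEY,
          "grant_type": "urn:ibm:params:oauth:grant-type:apikey"},
).json()["access_token"]

payload = {"input_data": [{
    "fields": ["GPA"],          # assumed column name from the dataset
    "values": [[3.5], [3.7]],   # two rows to score
}]}

response = requests.post(SCORING_URL, json=payload,
                         headers={"Authorization": f"Bearer {token}"})
print(response.json())          # predictions for each input row
```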