Transcripts
1. Introduction to Course: Hello everyone. Welcome to my course, Artificial Intelligence and Machine Learning with Python. I am Soap Han, and I'm proud to teach you this course. I've worked in this field since 2011, applying AI approaches throughout my work. This course is divided into four major parts. In part one, we discuss neural networks and their applications, especially for prediction. In part two, classification methods will be discussed, and in parts three and four, regression analysis and optimization methods will be presented, respectively. In part one, you will learn different types of neural networks, like the multilayer perceptron (MLP), recurrent neural networks (RNNs), and long short-term memory (LSTM) neural networks. You learn how to model problems and predict the output of different datasets, like stock prices, the NASDAQ Index, temperature, and wind speed. In the first section, you can forecast the output of different datasets using LSTM neural networks and the Keras library. You use the power of Keras to forecast Google's stock price with high accuracy. You can also see the effect of training epochs on the total error of an LSTM neural network. In the second section, you learn how to use Python, scikit-learn, and the MLP classifier to forecast the outputs of different datasets. In this part, overall accuracy will be enhanced using hidden layers. Next, you can make a neural network that detects buses and cars. In this part, you will use the length and weight of unknown vehicles to train your neural network. In the third section, you can forecast the output of different datasets, like temperature and a random dataset, using the Keras library. You can use the Keras library to forecast international airline passengers. In this part, you learn how to use past data to forecast future data with high accuracy. In part two of the course, you will learn basic classification methods like naive Bayes, k-nearest neighbors, support vector machines, and logistic regression.
You learn how to model problems and classify objects into different categories with proper accuracy. In the fourth section, you learn how to use Python to build a k-nearest neighbors classification model to classify datasets. In this section, you can classify the diabetes dataset and the iris flower dataset as well. You can also create your own Python program that classifies data using simple k-nearest neighbors functions. In the fifth section, you learn how to use Python to classify the output of your system using the naive Bayes classifier method. In this section, you can classify the diabetes and iris flower datasets. In this part, you create your own Python program that can classify data using simple naive Bayes functions to find the probability of being male or female. It's really great. In the sixth section, you'll learn how to use Python to build a support vector machine classification model to classify the iris dataset and a simple dataset. You also classify handwritten digits using different kernels and SVM methods. In the seventh section, you will learn how to use Python to classify the output of your system using logistic regression. In this section, you can classify the output of different datasets, and you learn how to use this method to classify handwritten digits. In part three, you will learn basic regression methods like linear, multilinear, and polynomial regression. You learn how to model problems, predict variables, and find the possible future output. In the eighth section, you'll learn how to use Python to estimate the output of your system using linear regression. In this section, you can estimate the output of random numbers and the diabetes dataset. House price estimation in Boston will be presented in this part. In the ninth section, you learn how to use Python to estimate the output of your system with multivariable inputs.
In this section, you can estimate the output of a global temperature dataset and an advertising campaign dataset, and learn how to make a successful campaign. In the tenth section, you learn how to use Python to build a polynomial regression model to estimate the output of your system. In this section, you can estimate the output of a non-linear sine function and the relationship between temperature and CO2. In the last part of this course, you learn bio-inspired optimization methods like genetic algorithms and particle swarm optimization. You will learn how to optimize problems and find the minimum and maximum points of complicated functions. In the eleventh section, you will learn how to use genetic algorithms for function optimization. First, you learn how to create Python code to optimize a simple function. This function only adds some binary numbers, and we want to find the maximum point of this function. Then you go further. You will learn how to optimize more complicated functions by using GA and the DEAP library. At the end, we are going to solve a real-world problem. In this part, we optimize the travelling salesman problem, or TSP, for 17 cities using a genetic algorithm. In the twelfth section, you will learn how to use the particle swarm optimization method for function optimization. First, you learn how to create Python code to optimize a simple function. This function only multiplies x by the sine of x, and we want to find the minimum point of this function. Then you go further. You will learn how to optimize more complicated functions by using PSO and the DEAP library. After it, we are going to solve a standard problem. In this part, we optimize the Rastrigin function using PSO. This function is a standard benchmark in the optimization field. In this course, you have 14 hours of content, and I do my best to update the course and answer your questions. So start your journey into the artificial intelligence world. I can't wait to see you. Good luck.
2. Recurrent neural networks and LSTMs theory: Hello everyone, welcome to Artificial Intelligence Number Six: LSTM Neural Networks with Keras. In this course you will learn recurrent neural networks (RNNs) and LSTMs using the Keras library in Python. You learn how to forecast weather datasets with LSTMs to predict temperature and wind speed. Next you go further: you will learn how to forecast time-series models using LSTM neural networks in the Keras environment. In this course, you can forecast the output of different datasets using LSTM networks and the Keras library. You can use the power of Keras to forecast the Google stock price with high accuracy. You can also see the effect of training epochs on the total error of the LSTM neural network. Then you will learn how to use Keras and LSTM networks to forecast the NASDAQ Index properly. You also learn how to use delays to forecast it better. Do you like to forecast temperature? If you say yes, you can use Keras for New York temperature forecasting. Next, I want to show you how LSTMs can predict noisy data like wind speed. In this part, we will use wind speed delays to forecast it more accurately. In every lecture you will get the complete Python source code, with details and without any restrictions; you can also download the datasets in CSV format. I do my best to update the courses regularly, and consider that you have a 30-day money-back guarantee. You can watch the free previews if you want to explore further. Finally, start your journey into the artificial intelligence world. I can't wait to see you.
3. Predict Google stock price using LSTMs - Part1: Hello everyone. In this lecture we want to forecast the Google stock price, and we will be using an LSTM recurrent neural network for this purpose. I have downloaded the dataset of the Google stock price. You can download it from the course materials. We want to use it as our dataset and forecast the price of Google stock. So create a new blank Python file and import the following libraries. First of all, import numpy as np, then import the pandas library as pd. We need the matplotlib library to visualize our outputs, so import matplotlib.pyplot as plt. Then we need to import some Keras libraries. So from keras.models import Sequential; it must be with a capital S. Then from keras.layers import the Dense layer, and finally from keras.layers import the LSTM function. Then we need some metrics to measure our accuracy, so from keras import metrics; we will use these metrics in our accuracy calculations. And because we need to standardize our real data, we must import the following from sklearn: from sklearn.preprocessing import MinMaxScaler, with a capital M, M, and S. All right, we imported our required libraries. Now we must input our stock price dataset into our program, so we use pandas as follows. Define a DataFrame: df equals pd.read_csv, which reads CSV files, and here pass our CSV file. As you see, it is GOOG.csv, so come here and write GOOG.csv. OK. We need to measure the length of this dataset, so use the len command on our DataFrame. You can print it to see the length of this dataset. So save everything and run your code to see the length of the dataset.
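The loading steps above can be sketched as follows. Since GOOG.csv is not bundled here, a tiny inline CSV with the same kind of column layout stands in for it (the column names are an assumption based on typical stock-price exports):

```python
import io

import pandas as pd

# Hypothetical stand-in for GOOG.csv from the course materials;
# a real export has many more rows with the same columns.
csv_text = """Date,Open,High,Low,Close
2017-01-03,778.81,789.63,775.80,786.14
2017-01-04,788.36,791.34,783.16,786.90
2017-01-05,786.08,794.48,785.02,794.02
"""

df = pd.read_csv(io.StringIO(csv_text))  # pd.read_csv("GOOG.csv") in the lecture
l = len(df)                              # number of rows in the dataset
print(l)
```

With the real file you would simply replace the `io.StringIO(...)` argument with the filename.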
4. Predict Google stock price using LSTMs - Part2: All right, as you can see, the length of our dataset is 251 lines of data. We want to use the high, low, and close columns to forecast the Google stock price. We input the high and low prices as input data, and we want to forecast the close price based on these two columns of data. So first of all, we need to define these data for our program. I define the high column as follows: np.array, and here define the proper column from df. The high is the third column; come back here and check it again, and as you can see, 1, 2, 3, and because we start from zero, we must use index 2 for this column. For the low column, I simply copy and paste this code here and change the index to 3. And finally, for the close price, paste it again here and put 4 here. OK. We want to visualize these data. We can plot these three columns of data simultaneously by using the following commands. So I define h for high, and here I use plt.plot with the high dataset; we want to use the first row of this array for high. Before it, we must define plt.figure(1) for better plot numbering. And here define l for low; here again I copy and paste the code and change it to low. And finally, for close I define c and paste it again and use close. You can define a legend for your plot as follows: plt.legend, and here we must use h, l, and c for our legend, and here we define the proper text. So you can define "high" for h and "low"; sorry, we must use double quotation marks. And here you can define "close". Finally, call plt.show() to see the output of these results. Now run your code and wait to see the output.
All right, as you can see, our output plot has been created properly. We have three types of data: high, low, and close, and as you can see, the close data is between high and low. You can see high with the blue color, close with green, and low with the orange color. All right, next we want to create our required data to input into our LSTM neural network.
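The column extraction described above can be sketched in plain pandas/numpy. A toy frame in the assumed GOOG.csv column order stands in for the real file, and the matplotlib calls are shown as comments since they need a display:

```python
import numpy as np
import pandas as pd

# Toy frame in the same column order assumed for the lecture's GOOG.csv
# (Date, Open, High, Low, Close), so High is index 2, Low 3, Close 4.
df = pd.DataFrame({
    "Date": ["d1", "d2", "d3"],
    "Open": [778.8, 788.4, 786.1],
    "High": [789.6, 791.3, 794.5],
    "Low":  [775.8, 783.2, 785.0],
    "Close": [786.1, 786.9, 794.0],
})

high = np.array(df.values[:, 2], dtype=float)   # third column, index 2
low = np.array(df.values[:, 3], dtype=float)    # fourth column, index 3
close = np.array(df.values[:, 4], dtype=float)  # fifth column, index 4

# In the lecture these are drawn with matplotlib:
# plt.figure(1); plt.plot(high); plt.plot(low); plt.plot(close)
# plt.legend(["high", "low", "close"]); plt.show()
print(high, low, close)
```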
5. Predict Google stock price using LSTMs - Part3: OK, next we need to combine the high and low columns into one input array, x, and we need to define the y dataset, the close price dataset, as the output data. To do this, we define x as follows: x equals np.concatenate, and here we use high and low, and define axis=0 to concatenate these data along the first axis. Then we need to print the shape of this array to see its shape. So run your code, and as you can see, we have a 2-by-251 array. We need to transpose this array to match the dataset: as you see, here we have 251 rows of data, and here we have the reverse situation. So comment this out, and here transpose our data: use the transpose function on x as follows. Then we need to define y as the close price, so I define y as follows, and similar to the previous x, we need to transpose it again, so we write the following line of code and transpose y. All right. After it, to input these data into our neural network model, we need to make these data scaled, or standardized. So define a scaler as follows: scaler equals the MinMaxScaler function. First we want to fit this data to the scaler object, so call fit and pass x into this function, and finally x equals scaler.transform(x). Then do the same thing for the y data: define another scaler object, name it scaler1, assign MinMaxScaler to it, call scaler1.fit with y, and y equals scaler1.transform(y). All right, we scaled our data, and these data are between 0 and 1 now. To continue with the LSTM neural network, we need to do one more thing.
One more thing: because the LSTM neural network input must be three-dimensional, we need to reshape x into a three-dimensional array. So redefine it as follows: np.reshape, and here pass x and the shape (x.shape[0], 1, x.shape[1]). This line of code makes a three-dimensional array to input into the LSTM neural network. Finally, to see the shape of our inputs, you can use x.shape. As you can see, we have this shape of dataset, and we want to feed these data into our LSTM neural network.
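The whole data-preparation pipeline above can be sketched in plain numpy. The lecture uses sklearn's MinMaxScaler; here a small numpy equivalent keeps the sketch self-contained, and random numbers stand in for the stock columns:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 251                                      # number of rows in the lecture's data
high = rng.uniform(900, 1000, size=n)        # stand-in for the High column
low = high - rng.uniform(1, 20, size=n)      # stand-in for the Low column
close = (high + low) / 2.0                   # stand-in for the Close column

# Stack high and low into one input array, then transpose to (samples, features).
x = np.concatenate(([high], [low]), axis=0)  # shape (2, 251)
x = np.transpose(x)                          # shape (251, 2)
y = np.transpose([close])                    # shape (251, 1)

def minmax_scale(a):
    """Numpy equivalent of sklearn's MinMaxScaler: map each column to [0, 1]."""
    lo, hi = a.min(axis=0), a.max(axis=0)
    return (a - lo) / (hi - lo)

x = minmax_scale(x)
y = minmax_scale(y)

# LSTM layers expect 3-D input: (samples, timesteps, features).
x = np.reshape(x, (x.shape[0], 1, x.shape[1]))
print(x.shape)   # (251, 1, 2)
```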
6. Predict Google stock price using LSTMs - Part4: So to create our neural network model, define model equals Sequential(). Here we use an LSTM layer as the input layer, so use model.add, and here use the LSTM network. We want to use 100 fully connected units in the first layer, and we use the activation function as follows: the hyperbolic tangent (tanh). The most important thing is the input shape, and we use input_shape=(1, 2). As you can see, we have 251 rows of data with this dimension, and we must pass this dimension to input_shape. Then we need to use another activation function for the recurrent activation, so use recurrent_activation='hard_sigmoid'. All right, we created our first layer, the LSTM layer. Then we need to add the output layer for this neural network: model.add, and here we use only a Dense output layer, and we use 1 for the dimension of this layer. So we can briefly recap this neural network: we use the high and low prices of Google stock, and we want to forecast the output, the close price of the stock. Then we need to compile the model: model.compile, and here we must define a loss function, loss='mean_squared_error', and we use an optimizer for the training, optimizer='rmsprop'. Finally, to measure the accuracy of the model, we define a metric: the mean absolute error, or MAE. Then we need to fit our x and y data into our model, so call model.fit, pass x and y, and define the number of epochs here. I want to define epochs=100 first, and we can define the batch size for it, batch_size=1. Finally, we use verbose=2 to see what happens during the fit task, visualized for us in the command line. And finally, we predict with our model by using model.predict.
Here we use x, and we can use verbose=1, and we can print the prediction. So run your code and wait to see what happens during training. As training proceeds, the mean absolute error and the loss decrease. Now we need to use some plot functions to visualize our predictions and compare them with the real output data. So create a new plot object and add the predict and y data to it.
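Putting the model-building steps above together, here is a minimal sketch. It assumes TensorFlow's bundled Keras is installed, and it trains for a single epoch on random stand-in data just to exercise the pipeline (the lecture uses the scaled stock arrays, epochs=100, and batch_size=1):

```python
import numpy as np
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

# Random stand-in data shaped like the scaled stock arrays: (samples, 1, 2).
x = np.random.rand(32, 1, 2)
y = np.random.rand(32, 1)

model = Sequential()
model.add(LSTM(100, activation="tanh",
               recurrent_activation="hard_sigmoid",
               input_shape=(1, 2)))        # 100 units, tanh / hard-sigmoid
model.add(Dense(1))                        # single output: the close price
model.compile(loss="mean_squared_error",
              optimizer="rmsprop",
              metrics=["mae"])

model.fit(x, y, epochs=1, batch_size=8, verbose=0)
predict = model.predict(x, verbose=0)
print(predict.shape)   # (32, 1)
```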
7. Predict Google stock price using LSTMs - Part5: So come here and define plt.figure(2). Here define a scatter plot, plt.scatter, and pass y and predict for this purpose. To show this plot we need to use plt.show, but before it, we need to see both plots simultaneously, so I use block=False here. For better time efficiency, I use 15 for the epochs: as you can see, after 15 epochs there is no valuable improvement in the mean absolute error and loss; they are near their minimum at the 15th epoch, so I use 15 for the epochs. Then we need to define another plot object to see what happens after the prediction. So define a third plot as follows, and define a test object: plt.plot(y). And for the prediction, I use a predict object for this plot, so here plot predict. We can define a legend object for these plots, using predict and test, and define the required text for them: here I define "predicted data" for one and "real data" for the other. Finally, we use plt.show for these plot objects, and here again we use block=False to see these three plot objects simultaneously. So we are ready to run our code to see what happens after training our model for forecasting the Google stock price. Run your code again and wait to see what happens. If we have good accuracy, we must have an intense population of data points on this line, the line with a 45-degree slope, and if we have bad accuracy, our data scatter into these two triangles. As you can see, we have very, very good accuracy, and we forecast Google's stock price based on only the high and low prices. Finally, in the last figure, we can see something really important; let me zoom into this region.
As you can see, the predicted data has been plotted with the orange color and the real data with blue. We have predicted data that is very, very close to our real data, and it shows us the power of the LSTM neural network. It is really important for us: we can precisely predict any type of time series, like temperature, stock prices, or electricity load, with this type of neural network. I hope you learned good things from this lecture. In the next lecture, we want to use another dataset, and I want to show you the power of the LSTM neural network to forecast that dataset. I hope to see you in the next lecture. Thanks for watching.
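The visual check described above (points hugging the 45-degree line of the scatter plot) can also be quantified. A small sketch with made-up predictions, using the mean absolute error as the average distance from the diagonal and the correlation coefficient as a summary of how tightly the points follow it:

```python
import numpy as np

# Made-up real and predicted values, scaled to [0, 1] like the lecture's data.
y_true = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
y_pred = np.array([0.12, 0.24, 0.41, 0.53, 0.72, 0.84])

mae = np.mean(np.abs(y_true - y_pred))    # average distance from the 45-degree line
corr = np.corrcoef(y_true, y_pred)[0, 1]  # 1.0 means a perfect diagonal

print(mae, corr)
```

Here the MAE comes out around 0.015 and the correlation above 0.99, which is what an "intense population" along the diagonal looks like numerically.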
8. Forecast NASDAQ Index using LSTMs and Keras library - Part 1: Hello everyone, welcome to this new lecture. In this lecture I want to show you another forecasting problem. I use a NASDAQ dataset as the time-series dataset for forecasting, and we use an LSTM recurrent neural network for our forecasting purposes. You can access this dataset in the course materials, similar to the previous lecture. So let us start our coding. Create a blank Python file and add the following libraries: import numpy as np. We need to import pandas to read our CSV file, so import pandas as pd, and we need to import matplotlib.pyplot as plt. Then we need to import some Keras libraries as follows: from keras.models import Sequential; from keras.layers import Dense; and from keras.layers import LSTM. To measure the accuracy of our model, we need to import some metrics. Then we need to import some useful functions from sklearn: from sklearn.preprocessing import MinMaxScaler. We also need to define train and test parts for our forecasting, so from sklearn.model_selection import train_test_split. All right, we imported our required libraries into our program. Then we must input our dataset into our code, so define a DataFrame as follows: pd.read_csv, and here we must pass nasdaq.csv. Then we need to measure the length of this dataset, so I define l equals len(df), and I print this number. Save your code and run it to see any possible errors in the output. OK, as you can see, the length of our dataset is 252 lines of data, and we want to use these data to forecast the close price, the close number of the NASDAQ Index.
To do this, we need to define our output. Define the output as follows: np.array, and here we need to use some index of our DataFrame; here we use the fifth column of this dataset, so I put a 4 for this column. As you can see, this column is 1, 2, 3, 4, 5, and we use the number 4 for it. I want to show you these data in a figure, so define our first figure as follows: plt.figure(1), and then plt.plot with the close column; I want to show you this column of the dataset. And I use plt.show. Run your code to see the NASDAQ Index in the plot object. OK, as you can see, our NASDAQ Index has been plotted in figure one, and we want to forecast this dataset using an LSTM neural network.
9. Forecast NASDAQ Index using LSTMs and Keras library - Part 2: OK, to do this, first of all we need to define some delays of this index. I want to use three delays of this index as the input; for example, I want to use these three values as an input to predict the next value of our data. To do this, we define x1, x2, and x3. For x1, I take the range 0 to l minus 5. I define x2 similar to x1, but I must change these numbers: this one must be changed to 1 and this one to l minus 4. And here 2 and l minus 3 for x3. Then you can concatenate these input data into one array: x equals np.concatenate (it's a really long word), so here use x1, x2, and x3, and here use axis=0. Finally, similar to the previous project, we need to transpose x. Then I want to show you x, so run your code and see what happens. OK, as you can see, the three delays of the NASDAQ Index have been created, and we want to use these three numbers to forecast the next number. To do this, we create a y object as follows: I apply a similar slice to y, so here I copy and paste these arguments and change them to 3 and l minus 2. All right, next we need to standardize our data, so we must add the scaler objects.
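The three-delay construction above can be sketched in plain numpy. A short ramp of toy values stands in for the NASDAQ close column, which makes it easy to see that each input row holds the values at t, t+1, t+2 and the target is the value at t+3:

```python
import numpy as np

close = np.arange(10, 22, dtype=float)   # toy index values, l = 12 samples
l = len(close)

# Three delayed copies of the series, as in the lecture's slices.
x1 = close[0:l - 5]
x2 = close[1:l - 4]
x3 = close[2:l - 3]
x = np.transpose(np.concatenate(([x1], [x2], [x3]), axis=0))  # (samples, 3)

# The target is the next value after each window of three.
y = np.transpose([close[3:l - 2]])                            # (samples, 1)

print(x[0], y[0])   # [10. 11. 12.] [13.]
```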
10. Forecast NASDAQ Index using LSTMs and Keras library - Part 3: So add the scaler objects to your program. First of all, we define a scaler as follows: scaler equals the MinMaxScaler function. Then we fit our data to this object, scaler.fit with x as the input variable, and we transform x into scaled, standardized data: x equals scaler.transform, and here write x. These three lines convert the x values into the range of 0 to 1. We do the same procedure for y: I copy and paste it here, define another scaler object, scaler1, and fit the y dataset into it, so here we change these to y and this to scaler1. All right. After it, before we train and test our neural network, we need to do one more thing: because the LSTM needs three-dimensional inputs, we must reshape our x into a three-dimensional array. So redefine x with np.reshape, and here write x, and define the three-dimensional shape: put x.shape[0] here, then the number 1, and here x.shape[1]. These lines create a three-dimensional array for us. Now we are ready to create our model. As I mentioned before, we need to define a train and test process for our index forecasting: we want to test our neural network with data it was not trained on. So we split our data into train and test data: define x_train, x_test, y_train, and y_test as follows, and we use the train_test_split function. This function has three arguments: the first two arguments are the input and output data, and the third argument is the test size. I want to use 25% for the test size, and this function splits our data into 75 percent for training and 25 percent for testing.
Now we are ready to create our model. Similar to the previous lecture, create a sequential model as follows: model equals Sequential().
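The lecture uses sklearn's train_test_split for the 75/25 split described above. A simplified numpy stand-in (the function name here is made up for illustration) shows the same idea: shuffle the sample indices, then hold out a fraction for testing:

```python
import numpy as np

def simple_train_test_split(x, y, test_size=0.25, seed=0):
    """Simplified stand-in for sklearn's train_test_split:
    shuffle the samples, then hold out `test_size` of them for testing."""
    idx = np.random.default_rng(seed).permutation(len(x))
    n_test = int(round(len(x) * test_size))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return x[train_idx], x[test_idx], y[train_idx], y[test_idx]

x = np.arange(100).reshape(100, 1)
y = np.arange(100).reshape(100, 1)
x_train, x_test, y_train, y_test = simple_train_test_split(x, y, test_size=0.25)
print(len(x_train), len(x_test))   # 75 25
```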
11. Forecast NASDAQ Index using LSTMs and Keras library - Part 4: Model equals Sequential, and here we need to define the LSTM input layer for our model: model.add(LSTM), and here define 10 for the units. We need to define the activation function for our model, and I use the hyperbolic tangent for it. The most important thing for the LSTM is the input size, and here we have a 1-by-3 input size because we use three delays of the NASDAQ Index. After it, we define the recurrent activation, and here we use hard sigmoid. All right, we defined the first layer of the LSTM neural network with ten units, the hyperbolic tangent activation function, and a 1-by-3 input size. Then we need to add an output layer: model.add, and here I use Dense for the output, and we use only one output for our model. So we've created our neural network; then we must compile it. The model.compile command does this for us, and it has three arguments: the loss function equals mean squared error; then we need an optimizer for our model, so I use RMSprop; and to measure the accuracy in each iteration of training and testing, we define some metrics, metrics equals the mean absolute error. All right, after it, we need to fit our data into the model, so we use this command, and here we want to fit the train data into our model: write x_train and y_train, and use some epochs for our training. We can use verbose=2 to see what happens during the training process, and after it we can run our model. So save everything and run your model to see what happens; after it, we use some visualization plot functions to show you what happened during training, and we test our model with the x_test and y_test data. All right, as you can see, we have finished 15 epochs; the mean absolute error is 20 to 21 percent, and the loss is equal to 0.06.
To see what happened during the test process, we need to use some plot functions to visualize the data for the user and show him or her the accuracy of our model during the test process. So we use the matplotlib library.
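The two numbers Keras prints each epoch during the training above, the mean-squared-error loss and the mean-absolute-error metric, are simple averages. Computed by hand on toy numbers:

```python
import numpy as np

# Toy real and predicted values on the [0, 1] scale.
y_true = np.array([0.2, 0.4, 0.6, 0.8])
y_pred = np.array([0.3, 0.3, 0.6, 0.9])

mse = np.mean((y_true - y_pred) ** 2)   # the 'mean_squared_error' loss
mae = np.mean(np.abs(y_true - y_pred))  # the 'mean_absolute_error' metric

print(mse, mae)
```

So a reported MAE of 0.20 means the predictions miss the (scaled) targets by 20 percent of the data range on average.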
12. Forecast NASDAQ Index using LSTMs and Keras library - Part 5: So come here, and first of all we need to define the prediction of our data. We want to predict the output based on x_test, and we want to compare the prediction with y_test to see the error and the accuracy between predict and y_test. So here, input x_test into the predict function, and after it we create the plot objects for our program. I create a second plot object, plt.figure(2), and we use a scatter plot for our purpose: here use y_test and predict for this object, and we use the plt.show function to see our plot. After it, define another plot object: here define plt.figure(3), and define test and predict objects as follows. So test equals plt.plot(y_test), and for predict we use plt.plot(predict). You can use a legend, but I skip this for time efficiency, and use plt.show. To see both figures simultaneously, use block=False here, and copy and paste this line of code here: plt.show(block=False). Now we are ready to run our code again. All right, as we can see, we have created three plots. The first plot, the first figure, is the NASDAQ Index, and I close this. The second plot is this one: we compare the test data with the predicted data, and if we have good accuracy, we must have the data on this line. As you can see, we have some gaps here and here, in the peaks and valleys. So to create better accuracy, we can do something: for example, we can use another number for the epochs. For example, I use 50 epochs and run my code to see what happens to the accuracy. All right, as you can see, we have better accuracy than previously, and we correctly track the fluctuations of the NASDAQ Index in the peaks and valleys of this index during the test process.
Here you can see the power of the LSTM neural network, the recurrent neural network. We use only ten LSTM units and only 50 epochs, and we have very, very good accuracy here; we have a very good loss function, near to zero, and a good mean absolute error. As you can remember from the previous run, we had 20 percent for the mean absolute error, and now we have only 2 percent. So this is the previous figure, and this is the recent figure with 50 epochs. In this lecture you have learned how to use LSTM neural networks to forecast the NASDAQ Index, and you can tune some arguments to increase the accuracy and decrease the error, or use a different number of LSTM units; you can gain better accuracy by manipulating these arguments.
13. Predict New York annual temperature using LSTMs - Part 1: Hello again, welcome to this lecture. In this lecture we want to use the New York temperature dataset for the last year, from July of 2017 to this date. We want to use the minimum, maximum, and average temperature of New York City, and we want to forecast the average temperature based on the other two columns. So let us start. First we need to create a blank Python file and start importing the required libraries. So import numpy as np and import pandas as pd. You can skip the video to the end of this import process; I want to teach this lecture completely for a new person who is watching for the first time, so I must import everything and explain everything for a new person. So import matplotlib.pyplot as plt. We need to import the 3D objects and functions from matplotlib: from mpl_toolkits.mplot3d import Axes3D. Then import train_test_split from sklearn: from sklearn.model_selection import train_test_split. As I said in the first lecture, you need to install these libraries simply by using pip install, for example sklearn, for example matplotlib, and so on. Then we need to import some important Keras libraries: from keras.models import Sequential, and from keras.layers import Dense; we use this function for the output layer. From keras.layers we need to import LSTM; we use this for the input layer. And from keras import metrics, to measure the accuracy during the training and test process. We also need to import MinMaxScaler to scale our data: from sklearn.preprocessing import MinMaxScaler. All right, I want to import the New York City temperature dataset.
So I define a DataFrame as follows: df equals pd.read_csv, and here we pass the name of the New York temperature CSV file. This function simply grabs the CSV file and puts it into a DataFrame.
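The loading step above can be sketched like this. The real course file is a New York temperature CSV from the course materials whose exact name is not clear from the audio, so this minimal sketch replaces it with a small inline sample; the column names here are my own illustrative assumptions.

```python
import io
import pandas as pd

# Tiny inline stand-in for the course's NYC temperature CSV, so this
# sketch is self-contained. In the lecture, read_csv is given a filename.
csv_text = """date,tmax,tmin,tavg
2017-07-01,30,21,25.5
2017-07-02,31,22,26.5
2017-07-03,29,20,24.5
"""

# read_csv accepts a file path or any file-like object.
df = pd.read_csv(io.StringIO(csv_text))
print(df.shape)
```

When running the course code, replace the StringIO object with the path to the downloaded dataset.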
14. Predict New York annual temperature using LSTMs - Part 2: To read the average, minimum, and maximum temperature from the DataFrame, we need to define some NumPy arrays as follows. Define t_average as np.array of the column of df that holds the average temperature; I count the columns, 1, 2, 3, 4, 5, 6, 7, to find the right index for the average. After that we define t_max from the column holding the maximum, and finally t_min from the column holding the minimum; I paste the line again and just change the column index. You have access to this dataset in the course materials, and you can simply download it and use it in your programs. To test our program, we can simply print t_average, save everything, and build the code to see the output. All right, as you can see, we have read the average temperature and printed it on the command line. For better visualization, we can define a 3D plot to see the minimum, maximum, and average temperatures and the dependencies between them. So create the figure with plt.figure(1), and define the 3D axes with fig.add_subplot, passing projection='3d', because we want to see the three datasets in three-dimensional space. Then make a scatter plot of t_min, t_max, and t_average, and we want to use a marker for it; I select empty circles. Finally, we can set labels for our x, y, and z axes: use ax.set_xlabel, and I simply copy and paste it two more times, changing it to set_ylabel and set_zlabel, with the labels min, max, and average.
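The 3D scatter described above can be sketched as follows. This is a minimal version with made-up temperature values standing in for the dataset columns; the axis labels match the lecture's intent, not its exact wording.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs anywhere
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed on older matplotlib)

# Hypothetical stand-ins for the dataset's temperature columns.
t_min = np.array([20.0, 21.0, 19.0, 22.0])
t_max = np.array([30.0, 31.0, 29.0, 32.0])
t_avg = (t_min + t_max) / 2.0

fig = plt.figure(1)
ax = fig.add_subplot(projection="3d")        # 3D axes for the scatter plot
ax.scatter(t_min, t_max, t_avg, marker="o")  # min/max on x/y, average on z
ax.set_xlabel("T min")
ax.set_ylabel("T max")
ax.set_zlabel("T average")
# plt.show()  # uncomment when running interactively
```

The scatter makes the dependency visible: the average temperature rises with both the minimum and the maximum.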
To see this plot, you need to call plt.show and build your code to see what it looks like. All right, as you can see, we have plotted the three-dimensional dataset based on the minimum, maximum, and average temperature of New York City. Next, we need to create the input and output datasets for our recurrent neural network, the LSTM.
15. Predict New York annual temperature using LSTMs - Part 3: But before that, I want to show you the average temperature in a single two-dimensional plot. Simply use plt.plot with t_average, and finally plt.show; save and build your code to see the average temperature. We need to close the 3D plot first to see it. All right, as you can see, there is a lot of fluctuation in the temperature, and we want to use the power of a recurrent neural network, an LSTM, to track these fluctuations. So next, we need to create the x and y datasets. We must concatenate the minimum and maximum temperature into one array, so we use np.concatenate. And as you remember from our previous lecture, we must transpose this data to make it readable for our neural network, so use np.transpose with x to transpose it. For y, the output dataset, we use np.transpose on the average temperature. All right. We are almost ready to create our neural network model, but before that we must rescale the data to numbers between minus one and plus one. So we create a scaler object as follows: for example, sc = MinMaxScaler(), then sc.fit with x, and finally x = sc.transform(x). We must do the same procedure, the same lines of code, for y: simply copy and paste these lines, rename the scaler, and fit and transform y. OK. As I said before, the LSTM neural network must have a three-dimensional input.
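The concatenate/transpose/scale steps above can be sketched like this. It is a minimal version with illustrative numbers; the feature range of (-1, 1) follows the lecture's stated goal of scaling between minus one and plus one.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Illustrative min/max/average temperature columns.
t_min = np.array([20.0, 21.0, 19.0, 22.0])
t_max = np.array([30.0, 31.0, 29.0, 32.0])
t_avg = (t_min + t_max) / 2.0

# Stack min and max as two input features per sample, then transpose so
# each row is one sample: shape (4, 2).
x = np.transpose(np.concatenate(([t_min], [t_max]), axis=0))
y = t_avg.reshape(-1, 1)  # one target column

# Rescale both inputs and targets into [-1, 1] before training.
sc_x = MinMaxScaler(feature_range=(-1, 1))
x_scaled = sc_x.fit_transform(x)  # fit + transform in one call
sc_y = MinMaxScaler(feature_range=(-1, 1))
y_scaled = sc_y.fit_transform(y)

print(x_scaled.min(), x_scaled.max())  # -1.0 1.0
```

Keeping a separate scaler for y lets you invert the scaling on predictions later with `sc_y.inverse_transform`.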
So before creating our model, we need to create a three-dimensional array for our neural network model, and np.reshape does this for us: reshape x using x.shape[0], 1, and x.shape[1].
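The reshape step can be sketched as follows; the shape convention is the standard Keras LSTM one, with one timestep per sample in this lecture.

```python
import numpy as np

# Keras LSTM layers expect input shaped (samples, timesteps, features).
# Here each sample is one day: 1 timestep with two features (min, max).
x = np.array([[20.0, 30.0],
              [21.0, 31.0],
              [19.0, 29.0]])  # shape (3, 2)

x_3d = np.reshape(x, (x.shape[0], 1, x.shape[1]))
print(x_3d.shape)  # (3, 1, 2)
```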
16. Predict New York annual temperature using LSTMs - Part 4: OK, this line of code created the three-dimensional input for the LSTM neural network. As you remember from the first lines of our code, we imported the train_test_split function, because we want to test our model with test data, data the model has not been trained on. Here we split our data into train and test sets; for time efficiency, I copy and paste this from our previous code, and I want to use thirty percent for the test size. Finally, we are ready to create our neural network model: model = Sequential(). After that we must create the input layer for our model, so model.add, and here we use an LSTM layer. I want to use ten LSTM units here, with activation equal to the hyperbolic tangent, tanh. The important thing for the LSTM layer is the input shape: input_shape equals (1, 2), one by two, because we use only the minimum and maximum temperature columns, so we have a one-by-two input shape. Finally, we must define the recurrent activation, and we use hard sigmoid, similar to the previous lecture. After that, we must define the output layer: model.add with a Dense layer, and for the output layer we use only one unit. Then we must compile the model. We must define the loss, and I want to use the mean squared error for the loss function, and we must define the optimizer, and we use adam.
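The model described above can be sketched as follows, assuming Keras is installed. This is a minimal version of the lecture's architecture (ten LSTM units, tanh activation, hard-sigmoid recurrent activation, one Dense output), not a tuned implementation.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM, Input

# Input: 1 timestep with 2 features (min and max temperature).
model = Sequential()
model.add(Input(shape=(1, 2)))
model.add(LSTM(10,
               activation="tanh",
               recurrent_activation="hard_sigmoid"))
model.add(Dense(1))  # single output: the average temperature

# Mean squared error loss, adam optimizer, mean absolute error metric.
model.compile(loss="mean_squared_error",
              optimizer="adam",
              metrics=["mae"])
model.summary()
```

After compiling, training is just `model.fit(x_train, y_train, epochs=20, verbose=2)` on the scaled, reshaped arrays from the earlier steps.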
And to measure the accuracy during the training process, we need to use some metrics, and I use the mean absolute error, mae; we must put it inside square brackets as a list. After that, we must fit our data into the model with model.fit, and we use the training data, so x_train and y_train must be used for the fit process. We want to do this for 20 epochs, and to see what is happening during this process, you can use verbose=2. All right, we are ready to run our model, so save everything and build it.
17. Predict New York annual temperature using LSTMs - Part 5: All right, as you can see, we have 20 epochs, and as we move forward through them we get a very good loss; as you see, it has decreased, and the same thing happened for the mean absolute error. Anyway, to continue, to measure the accuracy of our model on the test data, we must predict outputs from x_test: I use model.predict and pass x_test into it. After that, I want to visualize the output for you, so I define a figure object, plt.figure(2). We want to scatter plot the predicted data against the real data, so I simply plot y_test against the predictions. This plot compares the test and predicted data: the predicted data is the output of our neural network model, and the test data is the real data. Simply add plt.show here. And I want to use another plot object, plt.figure(3), where we plot y_test and the predictions simultaneously: plt.plot(y_test) and plt.plot(predict), and finally plt.show. To see both of these figures at once, we pass block=False. All right, build your code and wait to see the plots. As you can see, we have a plot that compares the output of our neural network model with the real data. As I mentioned in the previous lecture, if we have very good accuracy, the data must lie on the line with a 45-degree slope. As you can see, we have scattered data here because we have only moderate accuracy; the mean absolute error is close to 11 percent. And here we compare the real data and predicted data in the same plot; as you can realize from this plot, we have bad accuracy at the peaks and valleys.
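The error check discussed above amounts to computing the mean absolute error between predictions and test targets. A minimal sketch with made-up numbers (the real values come from the trained model):

```python
import numpy as np

# Illustrative test targets and model predictions.
y_test = np.array([25.0, 26.0, 24.0, 27.0])
y_pred = np.array([24.5, 26.5, 23.0, 27.5])

# Mean absolute error, and the same error as a percentage of the
# typical target magnitude (one common way to express "11 percent").
mae = np.mean(np.abs(y_test - y_pred))
mae_percent = 100.0 * mae / np.mean(np.abs(y_test))
print(mae, mae_percent)
```

A perfect model would put every (y_test, y_pred) pair on the 45-degree line of the scatter plot, which corresponds to an MAE of zero.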
To improve this, I want to increase the number of units in the input layer of the LSTM; I will increase this number to 50. So build your code again and wait to see what happens to the error and accuracy. All right, as you can see, we have a better output compared to the previous run, and we can compare them in figure number three; I minimize it here. We have very good tracking and good forecasting at the peaks and valleys, though here we have some bad tracking. But overall, we have three percent mean absolute error; as you remember from the previous run, with ten units and 20 epochs, we had an 11 percent mean absolute error. Anyway, in this lecture you learned how to use the power of recurrent and LSTM neural networks to forecast the average temperature of New York City, and you can use another dataset of your own choosing to make your own predictions. So thanks for watching, and I will see you in the next lecture.
18. Forecast New York wind speed using LSTMs and Keras library - Part 1: Hello everyone, welcome to this new lecture. In this lecture, we want to forecast New York wind speed. I use this dataset, this CSV dataset, and you can download it from our course materials. So, similar to the previous lecture, create a blank Python file and start importing the libraries you need. First of all, import numpy as np, then import pandas as pd. As I said before, you can skip this part of the video and go to the second part, but I want to show a newcomer how to import the libraries he needs. After that, we need matplotlib for plotting purposes: import matplotlib.pyplot as plt. Then we need to import some Keras libraries: from keras.models import Sequential, and from keras.layers import LSTM and Dense. After that, to measure accuracy during the training and test process, we need to import some metrics: from keras import metrics. And finally, to rescale the data, from sklearn.preprocessing import MinMaxScaler. All right, we have imported all the libraries that we need. Then we must load our data into the program, so define df as a DataFrame; we want to read our CSV file, so I call read_csv for this purpose, and here I put the name of my wind-speed dataset file. As I said before, you have access to this dataset in the course materials, and you can simply download it and use it.
19. Forecast New York wind speed using LSTMs and Keras library - Part 2: Use it freely. We need to measure the length of our DataFrame, so I define L as len(df). I want to use three delayed values of the wind speed as the inputs, and I want to forecast the fourth value with this program, using a recurrent LSTM neural network. First of all, for better visualization, I want to show you the plot of the wind speed during the last year in New York. Before that, we need to define the wind-speed output, so I define it as a NumPy array as follows: I want to select the fourth column of our dataset, so I count 1, 2, 3, 4 and use index 3 for the fourth column. Then you can plot this array simply with plt.plot, and here use these lines of code to see the wind speed during the last year. Save everything with Ctrl+S, build your code with Ctrl+B, and wait to see the wind diagram. All right, as you can see, we have very strongly fluctuating data, with a very bad oscillation between 5 and 25 meters per second, and we want to forecast this noisy data. So to continue, we need to create the three delays of this data. I define x1 as a slice of the wind speed starting at index 0, and do the same for x2 and x3, shifting the start index by one each time so all three slices have the same length. After that, we need to concatenate these arrays into one x array for the input: I define x as np.concatenate of x1, x2, and x3 along axis 0, the first axis. Then, because we want to feed this data into our neural network, we must transpose it to make it suitable for the network: np.transpose(x) creates the proper input dataset.
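The lag construction above can be sketched like this. It is a minimal version with a made-up wind-speed series; the model would see the speeds at t-3, t-2, and t-1 and learn to predict the speed at t.

```python
import numpy as np

# Illustrative wind-speed series (meters per second).
speed = np.array([5.0, 7.0, 6.0, 9.0, 8.0, 10.0, 12.0, 11.0])
L = len(speed)

x1 = speed[0:L - 3]  # lag 3
x2 = speed[1:L - 2]  # lag 2
x3 = speed[2:L - 1]  # lag 1
y = speed[3:L]       # target: the next value after the three lags

# Stack the three lag arrays, then transpose so each row is one sample
# of three consecutive past speeds.
x = np.transpose(np.concatenate(([x1], [x2], [x3]), axis=0))
print(x.shape, y.shape)  # (5, 3) (5,)
```

Row 0 of x is [5, 7, 6] and its target y[0] is 9, i.e. the three past speeds predict the fourth.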
21. Forecast New York wind speed using LSTMs and Keras library - Part 4: So simply come here and copy and paste it from the previous lecture: from sklearn.model_selection import train_test_split. All right, now we are ready to create our model: model = Sequential(). We must create the input layer for our model as follows: model.add an LSTM layer, and here we want to use five LSTM units, with the hyperbolic tangent activation function. The important thing for us is the input shape: here we use three delays of the dataset, so I use a one-by-three shape. Finally, we must define the recurrent activation, and here I use hard sigmoid. All right, that is the first layer of our neural network. You can define a hidden layer for your network, but because it already has very good accuracy with a single LSTM layer and one output layer, we don't use one; still, you can add a hidden layer to your model if you like. So I simply add a Dense output layer to this model, with one unit in the output. After that, we need to compile our neural network model, so simply write model.compile; this call takes three arguments. We must define a loss function to measure the loss during the training process, and I use the mean squared error for this purpose. Then we set the optimizer, and finally we must define the metrics for accuracy measurement, outside the quotation marks: I use the mean absolute error for the metrics. Now we must fit our training data into the model: simply write model.fit, and here you must pass x_train and y_train; I change it to a capital Y for better readability. All right, then we must set the epochs for this function.
I want to iterate this process ten times, training our model for ten epochs, and I use verbose=2 to see what happens during the training process. Now we are ready to run our neural network model and see its accuracy and error, so build your code and wait for the output.
22. Forecast New York wind speed using LSTMs and Keras library - Part 5: All right, as you can see, we have finished the training process, the ten epochs. But as you can see from the mean absolute error, we have a moderate error here, around 12 percent. So we want to decrease the error: you can set a different number of LSTM units and a different number of epochs, then build your code again to see the accuracy and error of your program. We get about a one-percent decrease in the mean absolute error. Because our data is very noisy, with hard fluctuations between 5 and 25 meters per second, we will not get the same accuracy that we had in the previous lectures. We can do something about it: we can increase the number of LSTM units and we can increase the epochs, but for time efficiency, I ask you to try this yourself. I simply create the predicted output of our model by feeding the test dataset into model.predict, and I want to show you the output on our test dataset. Create a new figure, figure number two, and I want to compare it with the real data, so I scatter plot y_test against the predictions; this creates a comparison plot for us, and we can compare the real test data and the predicted data in one diagram. Finally, we must plot both datasets in a single plot and show them simultaneously: define plt.figure(3), and here plt.plot(y_test) for the real data and plt.plot(predict) for the predictions. But before that, let me add a legend to this plot: plt.legend, and here I use 'predicted' and 'real' as the labels. Finally, write plt.show, and to see these plots simultaneously, pass block=False.
After that, save everything and build your code. All right, as you can see, we have these plots: the input dataset, the wind speed during the last year, and in these plots the predicted data and real data shown simultaneously so we can compare them. As you remember from the previous lecture, if we have very good accuracy, the data must lie on the line with a 45-degree slope. And finally, if we look at this plot, we can see we are not good at the peaks and valleys of our data; because of the nature of the wind speed in this dataset, we can't correctly track its fluctuations. But as a starting point it's very good, and we have moderate accuracy, around 18 to 19 percent here, and you can make it better by using a different number of LSTM units and epochs. So thanks for watching, and I hope you have learned a lot from these lectures.
23. Theory of MLP Neural Networks: Hello everyone. In this lecture we're going to discuss the basic theory behind artificial neural networks. Artificial neural networks are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn to perform a task by considering examples, generally without being programmed with any task-specific rules. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as 'cat' or 'no cat' and using the results to identify cats in other images. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it, as in this image. The original goal of the ANN approach was to solve problems in the same way the human brain would; however, over time, attention moved to performing specific tasks, leading to deviation from biology. ANNs have been used on a variety of tasks, including computer vision, speech recognition, machine translation, playing board and video games, and medical diagnosis. An artificial neural network is a connected network of simple elements called artificial neurons, which receive input, change their internal state (activation) according to that input, and produce output depending on the input and activation. The network forms by connecting the outputs of certain neurons to the inputs of other neurons, creating a directed, weighted graph. The weights, as well as the functions that compute the activations, can be modified by a process called learning, which is governed by a learning rule. Components of an artificial neural network: neurons. A neuron has three main parts.
Inputs, an output, and an activation function. Connections and weights: the network consists of connections, each connection transferring the output of neuron i to the input of neuron j; i is the predecessor of j and j is the successor of i. Each connection is assigned a weight w_ij. Propagation function: the propagation function computes the input p_j(t) of neuron j from the outputs o_i(t) of its predecessor neurons, and typically has the form p_j(t) = sum over i of o_i(t) * w_ij. Learning rule: the learning rule is a rule or an algorithm which modifies the parameters of the neural network so that a given input to the network produces a favored output. This learning process typically amounts to modifying the weights and thresholds of the variables within the network. Learning methods: training a neural network model essentially means selecting one model, from the set of allowed models, that minimizes the cost. Numerous algorithms are available for training neural network models; most of them can be viewed as a straightforward application of optimization theory and statistical estimation. Most employ some form of gradient descent, using backpropagation to compute the actual gradients. This is done by simply taking the derivative of the cost function with respect to the network parameters, the weights, and then changing those parameters in a gradient-related direction. Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient of the function at the current point; as in the picture, we start with x0, then move in the direction of the negative gradient to x1, then x2, until we reach the local minimum. So to train the weights of a neural network, we add a gradient-based update term to the old weights.
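The gradient-descent idea above can be sketched in a few lines. This is a minimal one-variable version on the cost f(w) = (w - 3)^2, whose minimum sits at w = 3; backpropagation applies the same update to every weight in a network.

```python
def grad(w):
    # Derivative of the cost f(w) = (w - 3)^2.
    return 2.0 * (w - 3.0)

w = 0.0    # initial weight, the x0 of the lecture's picture
lr = 0.1   # learning rate: step size along the negative gradient

for _ in range(100):
    w = w - lr * grad(w)  # w_{k+1} = w_k - lr * f'(w_k)

print(round(w, 4))  # converges toward the minimum at w = 3
```

Each step moves w against the gradient, so the sequence w0, w1, w2, ... slides down the cost curve toward the local minimum, exactly as described above.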
We generate new weights in this way until our network maps inputs to outputs correctly, with the desired accuracy. But don't worry, the programs in Python do this for us. So stay tuned for the next lecture. Thanks for watching.
24. Make MLP neural network to create Logic Gates: Hello everyone, and welcome to this new lecture. In this lecture we want to use a basic neural network model to model an XOR gate. To start this tutorial, you need to create a blank Python file. You can use any text editor software for editing this file, but in this course I use Sublime Text to create, edit, and run my code, and I recommend this software to you because it has a very user-friendly environment, you can use it easily, and you can use it for other programming languages as well. It also has a dark theme, so you can use it for coding for a long time. So to start this lecture, create a new blank Python file; I name it NN1.1.py, but you can use any name you want. To start this tutorial, you need to import some useful libraries. First of all, import numpy as np. Then, in this lecture and the next few lectures, we use the MLP classifier from scikit-learn: from sklearn.neural_network import MLPClassifier. We will use this library and this useful function in the later lectures too. So we want to model an XOR gate; if you don't know anything about logic gates, I describe them here. An XOR gate has two inputs and one output, and the output has two states, zero or one: if the two inputs are the same, for example both zeros or both ones, the output will be zero, and otherwise it will be one. So we need to create two NumPy arrays for the XOR gate. First is x: x = np.array, and we create it here. The first input pair is 0, 0; the second is 0, 1; all of the inputs are binary. Then we use 1, 0, and finally we add 1, 1 as inputs. This is our input variable. Then we need to define the outputs, which is a similar array, and we use NumPy for this too. So here, for the four input pairs:
the outputs will be 0 for 0,0; 1 for 0,1; 1 for 1,0; and 0 for 1,1. So we have created our input and output datasets. You can print them: simply write print x and y and press Ctrl+B to see the output. The first time, it takes longer to show the output. Here x is this array and y is this array. We continue by creating our model. To do this, we need to define a model: model = MLPClassifier. We use this function to create a multilayer perceptron, and it has a lot of arguments, but here we use only a few of them. We need to define the hidden layer, and I define five neurons for the hidden layer. Then, as I said in the neural network theory lecture, we must use an activation function for each neuron; here we use the logistic activation function, so activation='logistic'. After that, we need to define another argument for this model: to train and solve this neural network, we need a solver, and the solver is 'adam'. We can also define the maximum number of iterations to train our neural network, so you can set max_iter to, for example, 20. All right, we have defined our model: five neurons in the hidden layer with the logistic activation function, the adam solver for this optimization problem, and we iterate this process 20 times. Next, we must fit x and y to this model, so we use model.fit(x, y). This function trains our multilayer perceptron to track the output of the XOR gate based on the inputs x. So here you can save everything and build your code to see the output. Sorry, here we missed this 's', so build again. OK, as you can see, the code ran and finished, but you can't see anything, because we need some visualization: we can use the predict function and assign its result to a variable to see the output of our model.
So use model.predict and pass x as the input of this function. This function takes x and, based on the trained model, generates the output, and we can see it by using print(predict). Run it again and wait to see the outputs. As you can see, the output does not match the XOR table yet. You can use another useful function: write print(model.score(x, y)). This function reports the accuracy of our model. As you can see, the real output should be 0, 1, 1, 0 and the generated output differs: in two out of four cases we have an error, and in the other two positions we have no error, so we have 50 percent accuracy. To increase the accuracy, we can use different arguments of MLPClassifier. All right, our accuracy may never increase past 50 percent here because we have so little data for our model. So in this simple example, you learned how to use MLPClassifier to model an XOR gate. You can use it to model other gates too; for example, you can model an AND gate. The AND gate has four output states, and in one of them the output is one while in the other three it is zero: the AND gate outputs 0 for the first three input states and 1 for the last state. So we use this data to train and fit our model, and build it again to see the outputs. All right, as you can see, we have 100 percent accuracy here. And you can see everything that happens during the training process by using the verbose argument; I use 10 for this argument, and if you run it again, you can see the loss function at every iteration and see it decrease as the iterations increase. As you can see here, as we go further into the iterations, the loss gets smaller and smaller, up to the 10000th iteration, and our output matches the output of the AND gate. You can decrease these numbers, for example to five for the hidden layer and ten for max_iter, and as you can see, the accuracy decreases to 25 percent.
And as you increase the hidden layer size or the max iterations, for example to 100, the error decreases. So run it again: as you can see, we now have 75 percent accuracy in the output. The purpose of this lecture is to show you the effect of the hidden layer size and the iteration count on the accuracy of our model. If you increase the hidden layer size and the number of iterations, you will get better accuracy, because the model trains for longer and generates our desired outputs. If I use 5 for the hidden layer size and 10000 iterations, I get 100 percent accuracy for our model. So in this lecture, you learned how to make a basic multilayer perceptron classifier, a multilayer perceptron neural network, to model your desired function. In the next lecture, we go further and use a better dataset to make a better model with very good accuracy. So stay tuned for the next lecture, and thanks for watching.
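The XOR experiment above can be sketched as follows, assuming scikit-learn. The fixed random seed is my own addition for reproducibility; XOR is hard for such a tiny network, so the reachable accuracy depends on the random initialization and the iteration count, which is exactly the effect the lecture demonstrates.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# XOR truth table: four binary input pairs and their outputs.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Five hidden neurons, logistic activation, adam solver, as in the lecture.
model = MLPClassifier(hidden_layer_sizes=(5,),
                      activation="logistic",
                      solver="adam",
                      max_iter=10000,
                      random_state=0)
model.fit(x, y)

print(model.predict(x))   # predicted gate outputs for the four patterns
print(model.score(x, y))  # fraction of the four patterns classified correctly
```

Swapping y for an AND-gate table ([0, 0, 0, 1]) and re-running shows how the same model handles an easier, linearly separable gate.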
25. Using MLP to Detect Vehicles Precisely Part 1: Hello again. In this lecture, we want to move forward and make a model to detect the difference between a bus and a car. To start, we need to import some useful libraries, similar to the previous lecture. We must import numpy as np, and we need sklearn.neural_network again: from this library we import MLPClassifier. All right. Similar to the previous lecture, I create a new file and name it NN1.2.py; it is a blank Python file, and you can use Sublime again, or any editing software you want. I use different data for this model, to train it to detect the differences between buses and cars. This dataset uses the length and weight of cars and buses as the inputs. As you know, a bus is bigger and heavier than a car. So I define my inputs with a NumPy array, and I use the weight and length of the vehicles to generate this dataset. The first data point tells us that it is a car with 1000 kilograms for the weight parameter and 4.5 meters for the length. We go further and generate more inputs: 1200 kilograms and 4.2 meters. The first three data points are cars, and we add another car to the set. Then we can define bus parameters here: for example, I use 5000 kilograms for the weight, which is a very small bus, and 12 meters for the length, and then another bus with 6500 kilograms and 11 meters. You can use any numbers you want, but keep in mind that the buses are bigger and heavier than the cars. I add some more data points to make the dataset bigger and the model more accurate: another car, and finally another bus or two. After that, we must define y as the output: I use 0 for cars and 1 for buses. So here define another array, and here we must put the 0 and 1 labels for our dataset.
So we list the labels in the same order as the inputs, cars first, then buses, then the remaining car and buses; I put in ten outputs here in total. So we have generated the required dataset. We can save it and build it to check for possible errors, and we have no errors here. Then we need to visualize our data: from matplotlib import pyplot as plt, and from mpl_toolkits.mplot3d import Axes3D, because we want to plot our dataset in 3D. We have two inputs and one output, so we have a 3D space to visualize our data: x and y are the weight and length of the vehicle, and the z-axis is the type, bus or car. Define fig as plt.figure, and define the axes with fig.add_subplot; these two commands generate the required plot space for our outputs. We need a 3D projection, so we use projection='3d'. Then we must define weight and length as follows: weight is the first column of x, length is the second column of x, and type equals y. Then we use a scatter plot to plot this data: scatter of weight, length, and type. We can define a color map for this 3D plot, so we use type again for this purpose, and we can define a marker for this plot; I use the triangle sign. You can also define labels for your axes: for the x label I use 'weight', for the y label I use 'length', and for z we can use 'type', or I use 'buses or cars'. Finally, to show this plot, we must use plt.show. So build it again to see the output.
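The dataset construction above can be sketched like this. The exact numbers spoken in the lecture are hard to make out, so these (weight in kilograms, length in meters) values are illustrative; cars are labeled 0 and buses 1, as described.

```python
import numpy as np

# Each row is one vehicle: (weight in kg, length in m).
# Values are illustrative approximations of the lecture's dataset.
x = np.array([[1000.0,  4.5],   # car
              [1200.0,  4.2],   # car
              [1650.0,  4.0],   # car
              [5000.0, 12.0],   # bus (a very small bus)
              [6500.0, 11.0],   # bus
              [1400.0,  4.4],   # car
              [7000.0, 12.5]])  # bus

# Labels: 0 = car, 1 = bus, in the same order as the rows of x.
y = np.array([0, 0, 0, 1, 1, 0, 1])
print(x.shape, y.shape)  # (7, 2) (7,)
```

Because buses are both heavier and longer, the two classes separate cleanly in the weight/length plane, which is why even a small MLP can learn this task.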
26. Using MLP to Detect Vehicles Precisely Part 2: Okay, as you can see, we have some data here, and those are our cars, and we have some yellow points here, and they are the buses. You can see the weight axis here and the length axis here. So we continue and build a classification model based on an MLP neural network to detect cars and buses from our input data. Similar to the previous lecture, we need to define a model using the MLPClassifier function. I use 10 for hidden_layer_sizes. Then we must again use the logistic activation function for our neurons: activation equal to 'logistic'. Then use the adam solver again — you can use a different solver function for your purpose, but I use adam here. I define max_iter equal to 100, and I use verbose to see the output of our training process, so set verbose to True. Then we need to feed x and y into the model: model.fit(x, y). This function, as I said before, feeds the data into the model and trains the weights to generate our output and to track the output based on the input data. Then run the code to see any possible errors. I close it, and as you can see we have 100 iterations, and as we go further and further we have less and less error, a smaller and smaller loss function. You can use different hidden layer sizes or a different number of iterations to make it better and better. So here I use 500 for the iterations and build it again to see the outputs. As you can see, again we have very little error here. You can see the output of the model by using the predict function, or by using the model.score function, similar to the previous lecture. So I use predicts = model.predict, and here I input x, and here we can print it and build again. Okay, as you can see, we have this output from our model, and I think it's better to print y as well for a better comparison. So build again to see y and the outputs of the data. Okay?
As you can see, we have the same predicted values and y here — they are the same in every element, so we have 100% accuracy. You can decrease this number, for example to 250, to see the error again: yes, of course, we have an error here, here, and here — these numbers should be 0, but we have errors here. So you can see that if we increase the iteration count or the hidden layer sizes, we can improve the accuracy of our model. You can also visualize the output data with plots, and you can define a score for your model, for example model.score(x, y). Alright, as you can see, we have 100% for the score of this model. Before we finish this lecture, we can use other data that is not in our dataset to test our model. So we can define a sample to test our model: I want to know, if I have a vehicle with 1.1 tons for the weight and four meters for the length, what is the type of this vehicle? As you know it is a car, but it is not in our dataset, and I want to show you the power of neural networks to interpolate unknown data. So I create this sample and name it guess. I again use model.predict with guess to see the output, and print this output. We must have 0 in this case. Alright, as you can see, the generated output is 0; our model works properly. You can test it with other data as well: you can define guess1 — I want to define a sample for a bus, and I use 10.5 tons for the weight and 13.2 for the length. Copy and paste this code here, use guess1, and build again to see the output. The output must be 1. All right, the output is 1. Okay? In this lecture we learned how to use the power of multilayer perceptron neural networks to distinguish between cars and buses using the weights and lengths of these vehicles. We tested our model with unknown data that was not in our input data, and as you can see the error was 0, and you saw the effect of the iteration count and the hidden layer sizes on the error.
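A minimal end-to-end sketch of this classifier. The car/bus training samples are again illustrative stand-ins for the lecture's numbers; only the two test vehicles (1.1 t / 4 m and 10.5 t / 13.2 m) come from the lecture, and `random_state` is added here for reproducibility:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative stand-ins for the lecture's vehicle data: [weight in kg, length in m].
X = np.array([
    [1000, 4.5], [1200, 4.2], [1500, 4.0], [1654, 4.4], [1100, 4.1],      # cars
    [5000, 12.0], [6500, 11.0], [7000, 12.5], [9000, 12.2], [9700, 12.8],  # buses
])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 0 = car, 1 = bus

model = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                      solver="adam", max_iter=500, random_state=1)
model.fit(X, y)

print(model.score(X, y))            # training accuracy

guess = np.array([[1100, 4.0]])     # unseen car: 1.1 tons, 4 m
guess1 = np.array([[10500, 13.2]])  # unseen bus: 10.5 tons, 13.2 m
print(model.predict(guess), model.predict(guess1))
```
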
So thanks for watching, and stay tuned for the next lecture.
27. Classify random data using Multilayer Perceptron Part 1: Hello again. In this lecture we want to move forward, and I want to show you the power of neural networks against random data — neural networks give very powerful outputs and accuracy if you train them properly. Here we need to import some libraries similar to the previous lecture: we need import numpy as np, and then I copy and paste the MLPClassifier import from sklearn.neural_network. Then we need to import another useful function, which makes a random dataset for us: make_moons. Then we need another function from sklearn again, and it is train_test_split: from sklearn.model_selection import train_test_split. This function splits data into train and test parts: we use the train data to train our model, and we evaluate the accuracy of the model with the test data. Then you must import pyplot too. So save your code again and build it to see any possible errors in our code — and as you can see, we have no errors, so we continue and generate our data. So x and y equal the make_moons function. Here we have a lot of arguments — you can see these arguments on the scikit-learn site — but we use these: n_samples equal to 200, and we use shuffle to shuffle the data. Then, to generate noisy data, we use noise and set it to a half. Finally, we can set the random_state argument to 9. You can see the shape of this data by using this command: x.shape and y.shape. Build it to see the shapes of the output. As you can see, we have 200 rows and two columns — 200 by 2 in the inputs — and 200 values in the outputs. You can see them by using this command and building again, and we can see the matrices of x and y here: the generated inputs — 200 rows in two columns — and here is the output.
The output is ones and zeros and defines the types, similar to the previous lecture with cars and buses; here the same kind of data is generated — binary 0 and 1 labels — for our purpose. I comment out these codes, and for better visualization I use a scatter plot: plt.scatter with the first column and second column of the input data — so use the first column and second column of x. I also use a colormap based on the y data, so we have different colors for type 1 and type 0: I use c equal to y, and I use the triangle marker for our purpose. After it, use plt.show() to see the output, and build again to see our output. As you can see, the output has been generated, and we want to create a model that detects the difference between these types, the yellow and purple colors. So we use the MLPClassifier here again. But first I want to show you the effect of the noise: I delete the noise parameter and build again, and you see very clean data — and that data is not proper for our purpose. We want to see the power of the neural network against random data, but here it would be very, very easy for any function to distinguish between yellow and purple. So we use noise to add randomness to our dataset. We continue to create our model, but before that we must split our data into train and test: X_train, X_test, y_train, y_test equal the train_test_split function, and we use 30% for the test size, so we set test_size equal to 0.3. Then we must define our model: model = MLPClassifier, and here we use hidden_layer_sizes equal to 5. Then we use the activation function — the logistic activation function — and then the same solver as in the previous lecture. You can use a different solver, but I use adam, and I use max_iter equal to 500. Finally, I use verbose to see the output during the iterations.
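The data-generation and split steps above can be sketched like this; `noise=0.5` and `random_state=9` follow my reading of the audio, so treat them as adjustable:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; plt.show() then does nothing visible
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

# Two-class "moons" dataset; noise makes the classes overlap.
X, y = make_moons(n_samples=200, shuffle=True, noise=0.5, random_state=9)
print(X.shape, y.shape)  # (200, 2) (200,)

plt.scatter(X[:, 0], X[:, 1], c=y, marker="^")
plt.show()

# 70% train / 30% test split, as in the lecture.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
print(X_train.shape, X_test.shape)  # (140, 2) (60, 2)
```
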
28. Classify random data using Multilayer Perceptron Part 2: Then we must fit our data — but here we must use the training data to fit our model, and then we test it with the test data. After it, we use predict again to see the generated output based on the trained model. So I use model.predict, and here I use X_test, and we can print the predictions. So build your code to see the output. Sorry, we missed something here, so build again. Okay, the generated output has been created. As you know, we have 200 samples of data, and we use 70% of them — 140 samples — for the training purpose here, and we use 60 of them to test our model. Then we can print the score of this model. I want to increase the iteration count to 1,000 for better accuracy, and print model.score with X_test and y_test — print these parameters, the output, to see the error of our model. I close it, and okay, as you can see, it is 75%, and we want to make it better and better. So I increase it to 2,000 iterations, and I think it's better to increase the hidden layer sizes to 10 as well; build again to see the output accuracy. It decreased again, so I use 5 again and use 2,000. Alright, as you can see, we can't move past 75%. So I tested 5,000 iterations to see an increase in the output accuracy. Okay, as you can see, our accuracy increased to 80%, and as you increase the iteration count you can reach better accuracy. Before moving forward, I want to show you a useful argument — the tolerance (tol) argument of MLPClassifier. You can use it to control the iterations: see, here we have 5,000 for max_iter, but we only go to iteration 781, because the difference between the losses in two consecutive iterations grew smaller than this number, the tolerance number.
If we use a smaller tolerance number — for example, I use this number and build our code again — we will have better accuracy, I think, and our training process keeps going until the 5,000th iteration. And as you can see, the accuracy increased to 85%. So the tolerance argument is one of the useful arguments for training your multilayer perceptron neural network. We continue and visualize our output data — the predicted and real data — in one plot. So I define test as follows: plt.scatter with the first column of X_test and y_test. And here I use the predictions: predict = plt.scatter with the same data, and here we use the predictions. We want to show both of them in one plot, so we add a legend to our plot: here we use predict and test, and we add the required labels here — I use 'predicted data' and 'real data', in quotation marks. Finally, we must use plt.show() to see the output. Then run your codes to generate the output figures, and wait until the 5,000 iterations finish. Alright, our data has been plotted here, and as you can see the predicted data has an orange color and the real data has a blue color. In some cases the orange points are beside the blue points — there we have errors — but here and here the orange is plotted directly on the blue data, and the real and predicted data are the same. You can see the outputs of our model here, and the real data too. By using this type of plot, you can test your accuracy and visualize it. Alright, in this lecture we learned how to use a multilayer perceptron neural network to model the output of the random dataset we made using the make_moons function, and we trained and tested our model. You also learned how to use the tolerance argument for a better training process.
And we used a scatter plot to plot the predicted and test data. So thanks for watching, and stay tuned for the next lecture.
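Putting the training, tolerance, and scoring steps of this lecture together, a sketch might look like this; the tiny `tol` value is an assumption standing in for the "smaller tolerance number" mentioned above:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=200, shuffle=True, noise=0.5, random_state=9)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# A small tol keeps training from stopping early once the loss plateaus.
model = MLPClassifier(hidden_layer_sizes=(5,), activation="logistic",
                      solver="adam", max_iter=5000, tol=1e-8, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(model.score(X_test, y_test))  # roughly 0.75-0.85 in the lecture's runs
```
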
29. Using Keras to forecast 1000 data with 100 features in a few seconds Part 1: Hello everyone. In this lecture we want to discuss the power of the Keras library against random data. So to start, create a new blank Python file and import the following libraries into it. First, import numpy as np. Then we want to import the pyplot library: matplotlib.pyplot as plt. After it, we need to import some Keras pieces: from keras.models import Sequential. Then we need to import a layer module: from keras.layers import Dense. And then we need some metrics to measure our accuracy: from keras import metrics. So we build it to see any possible errors — it takes a few seconds to see the output of our file. Then we must generate our data. I use the random functions from numpy, and to seed the random generator I use — sorry, we have an error; we must write Sequential — then use np.random.seed(0). This function generates the same random outputs on each run of our code; if we comment out this line, we see different random numbers on every run of our program. After it, you can build again to see the output of your code. As you can see, we have no errors now, so we keep going and generate our random inputs and outputs. X equals np.random.rand, and here we use 1,000 rows of data with 100 columns. You can see the output of this function by printing x.shape: you can see the shape of our input data. As you can see, it's 1,000 by 100. Then I comment that out and continue: to generate our output data, I use np.random.rand again, here with 1,000 by 1, and it is random data. You can use print(y.shape) to see the output shape of your y.
But here I want to create our neural network model — a simple model that maps our input data to the output data. So model = Sequential(). Then we want to add a simple layer to our model: model.add, and we use the Dense function to add the neurons to our first layer. We use 32 fully connected neurons, and then we must define the activation function, similar to the MLP classifier: I use activation equal to 'relu'. Then we must define the input size: input_dim equal to 100 — as you see here, we have 1,000 rows of data and they have 100 features, so we define input_dim equal to 100 and the activation function equal to relu. ReLU is a very simple function; you can see the figure of this function on the internet — you can search for it — it is very, very simple. Then we define the output layer: again we use Dense, and here we have only one output — 1,000 by 1, we only have one output — and then we use a sigmoid activation function: activation equal to 'sigmoid'. Alright? With these few simple lines of code you have made your first neural network model in Keras in one moment. Then you can continue by compiling your model to make it a proper model for forecasting or any other neural network application: you must compile your model, model.compile. Here we have three parameters: loss, optimizer, and metrics. We can define different loss functions for our compiled model, but here I use mean squared error for our purpose. If you encounter a classification problem, you can define another loss function — for example, you can use categorical cross-entropy or other related functions. So here I write 'mean_squared_error'. And here I use the default optimizer for this model: 'rmsprop'. And we use mean absolute error for the metrics.
So here, define metrics equal to [metrics.mae], the mean absolute error.
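The model built and compiled above can be sketched as follows. Depending on your install, the imports may need the `tensorflow.keras` prefix; I also pass an explicit `Input` layer instead of the lecture's `input_dim=100` argument, since newer Keras versions handle that keyword differently, and `metrics=["mae"]` is the string form of the lecture's `metrics.mae`:

```python
import numpy as np
from keras import Input
from keras.models import Sequential
from keras.layers import Dense

np.random.seed(0)              # same random data on every run
X = np.random.rand(1000, 100)  # 1,000 samples with 100 features
y = np.random.rand(1000, 1)    # one random target per sample

model = Sequential([
    Input(shape=(100,)),       # stands in for the lecture's input_dim=100
    Dense(32, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(loss="mean_squared_error", optimizer="rmsprop", metrics=["mae"])

print(model.count_params())    # (100*32 + 32) + (32*1 + 1) weights and biases
```
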
30. Using Keras to forecast 1000 data with 100 features in a few seconds Part 2: Mean absolute error. Then after it, we must fit our model: model.fit, and here we must use the inputs and outputs of our dataset. Then we must define a number for epochs — epochs is the number of iterations for which we train our model — so I use, for example, 100. And here I use batch_size, and I use verbose equal to 2. The fit function has a lot of arguments, but here we only use three of them. We use epochs equal to 100 — you can increase it if you have a better machine — and the batch size is equal to 32; the default number for batch_size in the fit function is 32. The point of this argument is that the gradient is calculated every 32 samples, and you can set a different number for it. And verbose shows us the output of the model during the training process: you can use 0, 1, or 2 for verbose, and if you set verbose equal to 0, you can't see anything until the training process finishes. After it, we want to evaluate the model. Similar to MLPClassifier, you can use the predict function to predict the output of your model: model.predict, and here we use x; again, you can use verbose equal to 1 to see the output during prediction. So after that, save all of the code and run it to see the output. As you can see, we are going forward and the number of iterations increases, and you can see the loss here and the mean absolute error here. As we go forward, the error decreases and decreases until the 100th iteration, or epoch. Okay? As you can see, we have 0.01 for the loss and a mean absolute error equal to 0.21. For better visualization we can use the plot functions to see the output in 2D space. So I define plt.figure — figure number one — and here I use plt.scatter, and here I use y and the predictions, and then plt.show().
And run our code again to see the plot. All right, our plot has been generated, and you can see useful information here. As you can see, in our code we plot y against the predictions in this plot. If we had a good (low) error, we would have a single line here, but here we have a lot of mis-forecast data points. We can use different numbers of epochs to increase our accuracy — you can make it 1,000 epochs. And here we can add another plot function to see the output: I use plt.figure number two, and we can use our previous plotting code here — copy and paste it for time efficiency. Here we must modify it: plt.scatter with the first column of x and y, and again do this line for the predictions. Here we have predict and test, and we add the legend to it, so we must use block equal to False. It takes a few minutes to generate our desired output, because our run is very big, so build it again and wait to see the output. All right, as you can see, we have a very, very scattered cloud of points here, and as I said before, if we increase the epochs more and more, we can reach a better accuracy — and here you can see the results. As I said before, if we have good accuracy, the generated outputs and the real outputs must lie on a line with a 45-degree slope. And in the other plot — figure two — the real data and predicted data are plotted in one scatter plot. As you can see, we have a lot of data points that are plotted directly on each other, but we also have other data points that are mispredicted, for example like these. These codes show us the power of the Keras library against random data. In real-world applications — in other applications like image processing, image recognition, or text processing — we have patterns, and our network can learn them, and we have a lot of creative purposes to apply it to; only in a few cases do we have truly random data.
So if instead you use that type of non-random data, you can see the effect of the Keras library there, and you can see very good accuracy in forecasting or classification of that type of data. So in this lecture we learned how to use the Keras library to model some random data. You can use a higher number for the epochs, or a batch size other than the default of 32, to improve your accuracy. So in the next lecture we will use a real-world example, and we will use Keras to forecast based on past data. So stay tuned for the next lecture, and thanks for watching.
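A condensed, runnable sketch of the fit/predict/plot loop from the last two lectures, trimmed to a few epochs so it finishes quickly (the lecture uses 100 and more); as before, some installs need the `tensorflow.keras` import prefix:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for running without a display
import matplotlib.pyplot as plt
from keras import Input
from keras.models import Sequential
from keras.layers import Dense

np.random.seed(0)
X = np.random.rand(1000, 100)
y = np.random.rand(1000, 1)

model = Sequential([Input(shape=(100,)),
                    Dense(32, activation="relu"),
                    Dense(1, activation="sigmoid")])
model.compile(loss="mean_squared_error", optimizer="rmsprop", metrics=["mae"])

# The lecture trains for 100+ epochs; a handful is enough to watch the loss fall.
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

pred = model.predict(X, verbose=0)
plt.figure(1)
plt.scatter(y, pred)  # a perfect model would put every point on the 45-degree line
plt.show()
```
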
31. Forecasting international airline passengers using keras Part 1: Hello everyone. In this lecture we want to use a Keras neural network model to model airline passengers. First of all, create a new file and name it NN-2-2.py; it is a blank Python code. Similar to the previous lecture, we need to import some useful libraries. So import numpy as np, and then we must use pandas for our data frame: pandas as pd. Then we must import pyplot: matplotlib.pyplot as plt. Then from sklearn we need to import MinMaxScaler to standardize our data: from sklearn.preprocessing import MinMaxScaler. Then we need to import Sequential, Dense, and metrics from the Keras library — I copy and paste them from our previous code, and paste them here. Alright, we are ready to start our coding. First of all, we need to import our dataset for Keras. I use this dataset, and you can download it from the course resources. So define a DataFrame: df = pd.read_csv, and here use the name of our dataset — I copy this name here and add .csv to it. Okay. You can see the output of this dataset as follows: df — and if we want to see the second column of the dataset, use this command, df.ix, and select all rows and the second column. If I use 0 here, I see the first column. So build our code to see any possible errors and the second column of our DataFrame. Alright, as you can see, we have loaded our dataset accurately. So I comment out this code, and we are going to generate our x and y datasets for training and testing our neural network. I use this command to measure the length of our dataset: l = len(df). Then I want to define numpy arrays for our x and y. Here I use the range function to create an array starting at 1 and finishing at l, so I use range(1, l). Then I define y as follows: np.array.
Then I use the second column of our DataFrame: df.ix, and here use the second column. Okay? Because we have a not-a-number entry here, we must omit that value, so I use this command to redefine y as follows: here I use 0 to l minus 1 — I delete that entry with this simple command. Then we continue and plot our data for better visualization. I define a new figure, plt.figure, and use the plot function, and I want to use x and y. Then we need to use plt.show() to see the outcome. So build your code again to see the output of our dataset. Alright, as you can see, our dataset has been plotted here, and it shows a gradual increase in the number of passengers, and you can see a periodic feature in this function — in some places it increases and decreases, increases and decreases. We want to use past data to forecast the future data: we want to use three past values to forecast the next value. So we need to create three columns as input data for x, and we use the next value of the data as y. To do this, we define x1 as follows: here we use x1 equal to y, and here we use 0 to l minus 4. I copy and paste this line of code to create the next column: for x2 I use 1 and l minus 3, and for x3, 2 and l minus 2. After it, we concatenate these three columns of data to create the input data for our dataset. Before that, I want to show you the shapes of these data as follows: print x1.shape, and copy and paste this line for x2 and x3. Build the code again to see the outputs. I close these, and as you can see, we have three columns of data with 141 rows. So we need to concatenate these data to create a single X: X = np.concatenate, and here I use x1, x2, x3, and here we must use axis equal to 0 to concatenate these three columns of data along the 0 axis.
And then we must transpose these data to make their dimensions match y. To see the dimensions of X and y, you can use print x.shape and y.shape. So build our code to see the output of these commands, x.shape and y.shape. As you can see, our X shape and y shape are different, because we missed transposing. So I use the transpose command to transpose it. And we must select the next value as the target: I use 3 to l minus 1 to select y. So build your code again. Alright, our datasets have the same number of rows, so we can continue to the next step: we must scale our data.
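The lag-building steps can be sketched on a short toy series (a stand-in for the passengers CSV); the slice indices here assume a clean series with no trailing NaN, so they differ by one from the lecture's:

```python
import numpy as np

# Toy stand-in for the passengers series (the lecture loads it from a CSV).
y = np.array([112, 118, 132, 129, 121, 135, 148, 148, 136, 119], dtype=float)
l = len(y)

# Three past values per row of X; the value right after them is the target.
x1 = y[0:l - 3]
x2 = y[1:l - 2]
x3 = y[2:l - 1]
X = np.transpose(np.array([x1, x2, x3]))  # stack, then transpose to (l-3, 3)
target = y[3:l]                           # shape (l-3,)

print(X.shape, target.shape)  # (7, 3) (7,)
print(X[0], target[0])        # [112. 118. 132.] 129.0
```
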
32. Forecasting international airline passengers using keras Part 2: Scaler. So define a scaler as follows: scaler = MinMaxScaler. Then we must fit this scaler function to our data: scaler.fit(X), and then X = scaler.transform(X). These three lines of commands make scaled data: the data are now between 0 and 1, which is very good for our neural network, and we can train it easily. Do the same commands for y: here I define a new scaler function, scaler1, fit it on y, then I use y here and here, with scaler1. Okay? After it, we define the sequential model for our neural network: define model = Sequential(). Then we need to define the input layer for our model: define model.add(Dense) with 32 fully connected neurons and activation equal to 'relu'. For the input layer we need to define the input dimension: input_dim equal to 3, because we have three columns of inputs for x. Then I'm going to define a hidden layer: again I define model.add(Dense) with 32, and here I define the relu activation. One of the useful features of Keras is that it automatically determines the input dimension of the hidden layer, and you don't need to define it yourself. Finally, I define the output layer: again Dense, and here I want to use only one neuron, and I use the sigmoid function here. Alright, our model has been created properly. Now we need to compile this model, similar to the previous lecture: model.compile, and here we must define a loss function — I use mean squared error for the loss function. Then we need to define the optimizer: optimizer equal to 'rmsprop'. And to measure the accuracy, we need to define some metrics, so I use metrics.mae. After compile, we need to fit our data to this model, so define model.fit.
So here we need to input x and y to the model, define the number of epochs equal to, for example, 100, and define a batch size for it, so it equals 32. And to see the outputs, I use verbose equal to 2. All right, we can now build our code again to see the output of our model. As you can see, we start with a large loss, and when we finish at 100 the loss function has decreased dramatically, and you can see the mean absolute error has decreased similarly to the loss function. Next, to see the output results, we need to use the predict function: define predict = model.predict, here use x, and use verbose equal to 1. Then you can print y and predict, and build again to see them. As you can see, y and the predictions have been printed here: this is the prediction and this is y. For better visualization, we need to use some figures again: use plt.figure, then plt.scatter, and here use y and predict. Then use plt.show to see the figures — I use block equal to False. So build your code again to see the output figure. Okay, similar to the previous lecture, we want a straight line here: our generated output and the real data must be plotted on a line with a 45-degree slope. If I increase the number of epochs, we can do this. But before doing that, I want to use another plot similar to the previous lecture: here I use the first column of x with y, and with the predictions, similar to before. And I use 5,000 for the epochs. So I build it again — but before it, I must use block equal to False to see the three figures simultaneously. So build your code again to see the output. Okay, as you can see, our generated data now has a better error, and here you can see the real and predicted data: they have a better accuracy. The previous number of epochs was 100.
If you increase this number, you can reach a better accuracy. Alright, in this lecture we learned how to use past data to forecast future data, and we used the number of airline passengers to do it. You can use other datasets in this type of application. So stay tuned for the next lecture, and thanks for watching.
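An end-to-end sketch of this lecture's pipeline. Since the real CSV ships with the course resources, the series below is a synthetic stand-in (an upward trend plus a 12-month wave); everything else — MinMaxScaler, the 32/32/1 relu-relu-sigmoid stack, mse + rmsprop + mae — follows the lecture, with `Input(shape=(3,))` standing in for `input_dim=3` on newer Keras:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from keras import Input
from keras.models import Sequential
from keras.layers import Dense

# Synthetic stand-in for the 144-month passengers series.
t = np.arange(144, dtype=float)
series = 100 + 2 * t + 30 * np.sin(2 * np.pi * t / 12)

l = len(series)
X = np.transpose(np.array([series[0:l-3], series[1:l-2], series[2:l-1]]))
y = series[3:l].reshape(-1, 1)  # next value after each window of three

scaler, scaler1 = MinMaxScaler(), MinMaxScaler()
X = scaler.fit_transform(X)     # fit + transform in one call
y = scaler1.fit_transform(y)

model = Sequential([Input(shape=(3,)),
                    Dense(32, activation="relu"),    # input layer
                    Dense(32, activation="relu"),    # hidden layer
                    Dense(1, activation="sigmoid")])  # output layer
model.compile(loss="mean_squared_error", optimizer="rmsprop", metrics=["mae"])
model.fit(X, y, epochs=100, batch_size=32, verbose=0)  # lecture pushes this to 5,000

pred = model.predict(X, verbose=0)
print(float(np.mean(np.abs(pred - y))))  # mean absolute error in scaled units
```
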
33. Los Angeles Temperature Forecasting Part 1: Hello everyone. In this lecture we want to use Los Angeles temperatures for forecasting purposes: we want to use them as the input data and forecast the temperature in Los Angeles. In this lecture we use Keras, similar to the previous lecture. So create a new blank Python file; you have access to the dataset in the lecture resources, and you can download it. In this lecture we want to use the maximum and minimum temperatures as the inputs and use the average temperature as the output: we use the maximum and minimum temperatures to forecast the average temperature of Los Angeles. To start this lecture, we must import some required libraries from Python: import numpy as np, then import pandas as pd. You can skip this part and go to the next part of this lecture if you don't like to watch these imports, but I think it may be useful for you to learn which libraries we need to import. For matplotlib.pyplot, import it as plt, so write plt here. Then we must use a 3D plot to visualize our data: from mpl_toolkits.mplot3d import Axes3D — we use this function to visualize our 3D data, so we import Axes3D. Then we must use some useful functions from sklearn: from sklearn.preprocessing import MinMaxScaler. Similar to the previous lecture, this function makes scaled data and standardizes or normalizes our data between 0 and 1, which is very useful for our neural network models. Another useful function from sklearn is train_test_split: from sklearn.model_selection import train_test_split. Then finally we must import some Keras functions: from keras.models import Sequential, then the layers — from keras.layers import Dense — and finally from keras import metrics. Alright, now we want to import our dataset — the CSV dataset of Los Angeles temperature data — into our Python code.
So define a DataFrame as follows: df = pd.read_csv — this function reads any CSV file — and here I use the name of this file. I paste it, but don't forget to add quotation marks to use it. Then, to see what happened, you can use print(df.shape). Now I build my code to see any possible errors in it. Alright, as you can see, we have no errors here, and we keep going to make our model. As you can see, we have a DataFrame with 365 rows of data and six columns of data. These data are for the year 2017, and you can see the six columns of this dataset. We only need the maximum temperature, minimum temperature, and average temperature. So I use T_max and create a numpy array for this data, and here we must import our required columns from our data frame: I select the fifth column of our data for the maximum temperature — I show this here; as you can see, 1, 2, 3, 4, 5: we use the fifth column for the maximum temperature. We continue to make the minimum temperature: np.array — I copy these parts here and change the index to 5; for the minimum temperature we use the sixth column of our data. Finally, for T_average, I copy and paste these lines and change the index to 3. Alright, we have gathered our required data, and you can check it — for example, you can print T_average.shape for validation of our code. Alright, as you can see, we have one row of data with 365 columns. After it, we want to visualize these data in a 3D model. As you know, we have two inputs — the minimum temperature and maximum temperature — and we want to use them as the inputs, and we have the average temperature as the output. So we need a 3D plot, a 3D object, to visualize them.
34. Los Angeles Temperature Forecasting Part 2: Alright, to create a 3D plot, we need to create a figure object. So fig = plt.figure — I added a number for it, so figure(1). And we must use the Axes3D object here. So write fig.add_subplot for the 3D model of our plot: here use this number, and use projection equal to '3d'. Then after this function we must define our scatter plot, so ax.scatter, and here we use t_max as x, t_min as y, and t_avg as the z axis. Then we need to add a marker for our plot, so here add a circle-type marker. After it you can add some labels to your plot: write ax.set_xlabel, and we use the maximum temperature for the x axis, so here write 'Max Temperature'. After it, you can copy and paste this line of code for the other axes — paste it here and here, change it to ylabel and zlabel, and here write 'Min Temperature' and 'Average Temperature'. To show this to the user, you need to use the plt.show() function. So we are ready to see our plot — it is really a nice object for better visualization — so wait to see this 3D object. Alright, as you can see, we have the maximum temperature on the x axis, the minimum temperature on the y axis, and the average temperature on the z axis. And as you can see, we have very good data from one year in Los Angeles. This is from the Los Angeles Airport weather station, and you can download it from the lecture resources. So keep going to create our input and output datasets for our MLP neural network model. We must use the minimum and maximum temperatures for our model, so we must merge them. Use np.concatenate, and here we pass t_max and t_min as the inputs. Here we must set the axis argument to merge these two arrays into one array along axis 0, the first axis of the data. After it,
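A minimal sketch of the 3D scatter plot described above, using the same synthetic temperatures in place of the course's dataset (the Agg backend is set so this also runs headless; in an interactive session you would drop that line and call plt.show() instead):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                      # headless backend; remove to display interactively
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D    # registers the "3d" projection

# Synthetic stand-ins for the t_max / t_min / t_avg arrays from the lecture.
rng = np.random.default_rng(0)
t_min = rng.uniform(10, 18, 365)
t_max = t_min + rng.uniform(5, 15, 365)
t_avg = (t_min + t_max) / 2

fig = plt.figure(1)
ax = fig.add_subplot(111, projection="3d")
ax.scatter(t_max, t_min, t_avg, marker="o")   # x = max, y = min, z = average
ax.set_xlabel("Max Temperature")
ax.set_ylabel("Min Temperature")
ax.set_zlabel("Average Temperature")
# plt.show() here in an interactive session
```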
You can print the shape of the temp object for a better understanding of the dimensions of the temp dataset. Build it, and as you can see we have this dimension for the temp array. We need to transpose this data for better use in our model, so I transpose the temp object: temp = np.transpose(temp). For the output object, the average temperature, we must do this again, so copy and paste the transpose function here and write t_avg. Alright, as you can see, our input and output datasets are ready for creating and training our neural network model. To normalize this data we must use a scaler object. So create a scaler equal to the MinMaxScaler function and fit the temp data to this object, and finally transform the data by using scaler.transform, and here use temp again. These three lines of code normalize our temperature data, and we must use three similar lines for the output dataset. So here create a new object, scaler1, fit it to t_avg, and write t_avg equal to scaler1.transform(t_avg). Alright, for better coding I'll use this line here to separate our code from the top part. After it we must split our data into train and test. We use 30% of our dataset for the test process, and we use 70% of it for training. So here we use the train_test_split function: I write x_train, x_test, y_train, and finally y_test here, and it equals train_test_split. Here we use temp as the input, or x, and the average temperature as the output, and we must define a test size for our model — so here use 30% of our data as the test size. Then we can make our model by using the Keras functions. So model = Sequential(), and we want to build a three-stage MLP neural network.
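The concatenate / transpose / scale / split pipeline described above can be sketched like this (temperature values are synthetic stand-ins for the lecture's arrays):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# Stand-ins shaped (1, 365), as produced in the previous step.
rng = np.random.default_rng(0)
t_min = rng.uniform(10, 18, (1, 365))
t_max = t_min + rng.uniform(5, 15, (1, 365))
t_avg = (t_min + t_max) / 2

# Merge the two input rows, then transpose to (samples, features).
temp = np.concatenate((t_max, t_min), axis=0)   # shape (2, 365)
temp = np.transpose(temp)                       # shape (365, 2)
t_avg = np.transpose(t_avg)                     # shape (365, 1)

# Normalize inputs and outputs to [0, 1] with two separate scalers.
scaler = MinMaxScaler()
scaler.fit(temp)
temp = scaler.transform(temp)
scaler1 = MinMaxScaler()
scaler1.fit(t_avg)
t_avg = scaler1.transform(t_avg)

# 70% train / 30% test, as in the lecture.
x_train, x_test, y_train, y_test = train_test_split(temp, t_avg, test_size=0.3)
print(x_train.shape, x_test.shape)
```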
The first layer is the input layer, the second is the hidden layer, and finally the output layer. So we must use model.add, and here we use the Dense function for it — we use it to add fully connected neurons at each stage. We use an activation function, and here I use the ReLU activation function. And here we must set the input dimension, and it equals two, because we have two columns of data: the first column is the maximum temperature and the second column is the minimum temperature. Then we do this again for the hidden layer; I copy and paste this line of code for time efficiency. We don't need the input dimension part here, because Keras infers the input dimension of the hidden layer automatically, so you don't need to set it. Finally, we must add an output layer to our model, so we can paste it again and delete that part, but here we must use the sigmoid function for the activation. And here we have only one output — we only have one column of output data, the average temperature — so we put a one here. After it, we must compile our model. So write model.compile, and in this function we must define the loss function; here I use mean squared error. After it you need to define an optimizer for it, and it equals 'rmsprop'. To measure the accuracy of your model you need to use some metrics, so metrics equals mean absolute error. The compile function is ready to use, and after it we must feed our training data into our model. So model.fit(x_train, y_train), and we must define the number of epochs here — I want to use 100. And we use 32 for the batch size: add this argument, batch_size equal to 32, so the model updates after every 32 samples. To see what happens during the training we must use verbose, set to 2, to watch the progress during training. So we are ready to train our MLP neural network model. Build your code to see the output.
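The lecture builds this model with Keras (Sequential, three Dense stages, ReLU then sigmoid, mean squared error loss, RMSprop). As a lighter stand-in that runs without TensorFlow installed, the same three-stage idea — 2-feature input, one ReLU hidden layer, single output — can be sketched with scikit-learn's MLPRegressor; the hidden-layer size, solver, and iteration count here are assumptions, not the lecture's values:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic normalized data: two input columns (t_max, t_min) and one output.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, (365, 2))
y = x.mean(axis=1)        # average temperature, already in [0, 1]

# One hidden layer of ReLU units between the 2-feature input and the single
# output, mirroring Sequential([Dense(..., relu, input_dim=2), Dense(1, sigmoid)]).
model = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                     solver="adam", max_iter=2000, random_state=0)
model.fit(x, y)

pred = model.predict(x[:5])
print(pred.shape)         # (5,)
```

In Keras proper the analogous calls are model.add(Dense(...)), model.compile(loss="mean_squared_error", optimizer="rmsprop", metrics=[...]), and model.fit(x_train, y_train, epochs=100, batch_size=32, verbose=2).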
35. Los Angeles Temperature Forecasting Part 3: Alright, as you can see, our model has been trained for 100 epochs. And as you can see, we have this number for the mean absolute error and this number for the loss function, and both the loss and the mean absolute error decrease dramatically during our training. So we want to test our model to see how it does against data it has never seen. We must use the predict function here: I define predict equal to model.predict, and I use x_test here for the prediction purpose. As you saw during this lecture, we never used any data from x_test and y_test during training. So here we can see the powerful capability of Keras and neural networks against unknown data — data the model has never seen before. Here we can use some visualization objects again to show the user what happened during our prediction. So I use verbose equal to one and define another plot object here for visualization purposes. I add plt.figure(2), and here you can use a scatter plot, plt.scatter, and here we plot y_test against predict. And we use plt.show to see this plot; to show both plots simultaneously, we use block equal to False. Alright, build your code to see what happens during the prediction process. Okay, as you can see, we have very good accuracy here, because our data is plotted along this line with a 45-degree slope, and if you increase the number of epochs you can see even better accuracy. We can use another plot object for our model; for time efficiency I copy and paste it from the last lecture. plt.figure(3), and here we use plt.plot to see a single line for each object, and finally we use y_test here and predict here. And I think we are ready to create the third plot of our model. But before it, I want to copy and paste this line of code here to see these three plot objects simultaneously. And I use 500 for the epochs.
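As a toy illustration of the comparison described above: when the y_test/predict scatter hugs the 45-degree line, the mean absolute error is small. The values below are made up, not from the lecture's model:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# Hypothetical normalized test targets and model predictions
# (stand-ins for y_test and model.predict(x_test)).
y_test = np.array([0.42, 0.55, 0.61, 0.48, 0.70])
predict = np.array([0.40, 0.57, 0.60, 0.50, 0.66])

mae = mean_absolute_error(y_test, predict)
print(round(mae, 3))      # 0.022 — small, since each point lies near the diagonal
```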
And I cut this video and show you the final results, for time efficiency. So I build my code and wait to see what happens. Alright, as you can see, we trained our model 500 times, and as you can see in this plot, figure 2, we have better accuracy. And the main part, I think, is here, because we plot the real data — the blue line — and the predicted data simultaneously, and as you can see, we have very good tracking of the output. The output, as you know from the start of this lecture, is the average temperature, and we correctly predicted the average temperature. So in this lecture you learned how to use the power of neural networks and the Keras library to predict the temperature of a city — we predicted the temperature in Los Angeles, and you can use it for any city you want. In this course I try and do my best to show you the power of multilayer perceptron networks, and I try and do my best to update this course with other useful and practical examples of neural networks applied to real-world problems. So stay tuned, thanks for watching, and thanks for your purchase.
36. Theory of k Nearest Neighbors Classification Method: Hello everyone. In this lecture we want to discuss the k-nearest neighbors classification method. In pattern recognition, the k-nearest neighbors algorithm, or kNN, is a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space; the output depends on whether kNN is used for classification or regression. In kNN classification, the output is a class membership: an object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors. If k equals one, then the object is simply assigned to the class of that single nearest neighbor. In kNN regression, the output is the property value for the object: this value is the average of the values of its k nearest neighbors. kNN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until classification. The kNN algorithm is among the simplest of all machine learning algorithms. The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples. In the classification phase, k is a user-defined constant — for example 3, 5, or another number — and an unlabeled vector, a query or test point (for example the star here), is classified by assigning the label which is most frequent among the k training samples nearest to that query point. A commonly used distance metric for continuous variables is the Euclidean distance. The Euclidean distance between points p and q is the length of the line segment connecting them; for example, in two dimensions we have this formula. For the Euclidean distance we must calculate the differences between the x and y coordinates of the two points, put them into this formula, and calculate the distance. After calculating these distances from the test point, we must sort the points by their distances. Finally, we select the most common class label among the k nearest neighbors of the test point. For better understanding, let me show you an example. If we select k equal to three, the star point is classified as class B, because we have two class B points — the purple circles — and one class A point — the red circle. So we consider the star point as class B. But if we consider k equal to six, the star point is classified as class A, because we have four class A circles near the star point but only two purple circles near it. So we classify it as class A. I think that's enough — so let's write some code.
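The distance-then-vote procedure described above can be sketched directly. The points, labels, and query below are made up for illustration; note how the answer can flip as k grows, just as in the lecture's example:

```python
import numpy as np
from collections import Counter

# Labeled training points in 2D, and one unlabeled query point (all invented).
points = np.array([[1.0, 1.0], [1.5, 2.0], [3.0, 4.0], [5.0, 7.0], [3.5, 4.5]])
labels = ["A", "A", "B", "B", "B"]
query = np.array([1.6, 2.2])

# Euclidean distance from the query to every training point.
dists = np.sqrt(((points - query) ** 2).sum(axis=1))

def knn_vote(k):
    nearest = np.argsort(dists)[:k]                  # indices of the k closest points
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

print(knn_vote(1))   # "A": the single nearest point is class A
print(knn_vote(5))   # "B": with all five neighbors, class B wins the vote
```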
37. Use k Nearest Neighbors Classification Method to classify random dataset Part 1: Hello everyone. In this lecture we want to use the k-nearest neighbor method to classify some prebuilt datasets in Python. First of all, we need to import some useful libraries from sklearn. So from sklearn import neighbors and datasets. We need neighbors for the k-nearest neighbor classification algorithm, and we need datasets to load the prebuilt datasets and generate our required data. Next, we must import train_test_split: from sklearn.model_selection import train_test_split. This function splits our dataset and makes train and test parts. With the train part we train our classification algorithm, and with the test part we test our algorithm and calculate its accuracy. After it, from sklearn.datasets we must import a useful function to generate our required data, the make_classification function. This function generates a random dataset for classification purposes. After it, we need the StandardScaler function to standardize our input data, so from sklearn.preprocessing import StandardScaler. Then finally we want to measure the accuracy of the classification method, especially the k-nearest neighbor method. To do this we must load some metrics, some measurements, from sklearn: from sklearn.metrics import classification_report and confusion_matrix — we must use a lowercase c here, it is case sensitive. Alright, we imported the libraries we need from sklearn. Now we must import two more libraries: one from matplotlib to plot our data — matplotlib.pyplot, imported as plt — and one from the useful library mlxtend.plotting.
Sorry — from this library, import plot_decision_regions. This function creates the decision regions for us, so we can see exactly what happened when we classified our dataset: which data are classified correctly and which are misclassified. Then we create our dataset with the make_classification function. Alright, X, y equal make_classification, and then we must set some parameters for our generated dataset. First of all, we need to define the number of features for our dataset — we want to classify a two-class dataset — so I define n_features equal to two. I don't like to have redundant features, so I set n_redundant equal to zero, and we need two informative features, so set the n_informative parameter to two. And we want to create a random dataset, so random_state must be one. Finally, we want to define the number of clusters per class, so write n_clusters_per_class equal to one. That's all we need. Then we must use the train_test_split function to generate train and test datasets, to train our method and measure its accuracy. We must define x_train and x_test, then y_train and y_test, and set them equal to the train_test_split function. Then we must pass our input data to this function — I pass X and y — and I must define a test size. For this function, the test size defines the amount of data we want to be considered as the test part; I define the test part as 15 percent. Then we must standardize our data. So define an object sc equal to StandardScaler, and then we must fit our data to this object to standardize it — I pass x_train — and then I standardize the data by using the transform capability of this function. So I define x_train_std equal to sc.transform, and here pass x_train, and use the same code for x_test, and here pass x_test. Alright, look at these lines of code.
In the first line of this code, we create a StandardScaler object and assign it to sc. Then we fit x_train to this object. Finally, we transform x_train and x_test based on what happened in the fit function, and assign them to x_train_std and x_test_std. Then we can build our code to see the output and any possible errors. So press build and wait to see the output. Alright, as you can see, our code has no error, so we keep going to create the classification object and classify this random dataset.
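Putting this part together, a minimal runnable sketch of the generate / split / standardize steps might look like this (parameter values follow the lecture; the 15% split and random_state are as stated above):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Generate a random two-class dataset with the parameters from the lecture.
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                           random_state=1, n_clusters_per_class=1)

# Hold out 15% of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15,
                                                    random_state=1)

# Standardize: fit the scaler on the training data, then transform both splits.
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)

print(X_train_std.shape, X_test_std.shape)   # (85, 2) (15, 2)
```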
38. Use k Nearest Neighbors Classification Method to classify random dataset Part 2: Alright, now we must build our code again to see any possible errors. Go to Tools and build your code, and wait to see the response. Alright, we have no error here. Then, finally, we need to define our classifier object here. So write clf equal to neighbors.KNeighborsClassifier. Alright, this code creates the k-nearest neighbor classifier for us. This KNeighborsClassifier function has a lot of arguments, and here we only use two of them. First of all, we must define the k, or the number of neighbors, that we want to consider for one data point — we could consider, for example, five neighbors — and here we can define it. After it, we can see the effect of increasing or decreasing the number of neighbors on the classification error and other things. So here I define, for example, ten neighbors for our data, and use weights equal to 'uniform'. This common weighting assigns a uniform value to every neighbor. After it, we must fit our data to this classifier. So write clf.fit, and here write x_train_std and y_train. Afterwards — after the creation of our classifier here, and after fitting our data and training our classifier — we need to test our classification method. So here we must predict some data based on this classifier. Define y_pred equal to clf.predict, and here we must input x_test_std; this line of code predicts outputs based on the test inputs and our classifier. After these lines of code, we have to measure the accuracy, or the error, of this method. So we must define a confusion matrix for our method. This matrix is generated per class and shows the correctly classified and misclassified objects or data.
So here use confusion_matrix and pass y_test and y_pred, and then we can print the confusion matrix. After it, we can build our code to see the output and any possible errors. Alright, as you saw at the start of this lecture, we want to classify two-class data, so we must have a two-by-two confusion matrix for this data. And what does this matrix say? It says that in class one the data has been classified correctly — none are misclassified — and for class two we have one misclassified data point. We can also create a classification report for our method. So call the classification_report function and pass it the same two arguments as the previous confusion_matrix function, then print the classification report, and you can build your code again to see the output. Alright — because we generate a random dataset, we get a different accuracy each time. So if I run this code again, I will see a different accuracy than in the previous execution, and if I build it again, for example, I can see another combination for the two classes. The classification report shows the accuracy of the method, and as you see, here we have zero misclassified, so in each class we have 1.0 for accuracy, or 100%. After all of this, we must see the output in a plot, visually. So here we must use plot_decision_regions to show the user the output of our method. Use plot_decision_regions and pass the following arguments: first of all, we need to input x_test_std and y_pred. Then we must define the classifier for this method — here our classifier equals clf, the k-nearest neighbor classifier — and define some resolution for this plot, equal to 0.02. And finally we need to add a legend for this plot. We can also define a title for this plot as follows.
So we can write here 'Two-class classification using k-nearest neighbors', and here you can note k equal to 10. You can also define an x label, a y label, and things like that, but for time efficiency I only write plt.show and build my code to see the output. Alright, as you can see, our decision regions have been created here; we have two classes and we have zero error. You don't see, for example, a red square here or a blue triangle there, because we have zero error. In the next lecture we want to go further and make a good classification algorithm to classify the iris dataset. So stay tuned for the next lecture. Thank you.
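The classify-and-evaluate half of this lecture can be sketched end to end like this (k = 10 with uniform weights follows the lecture; the random_state values are assumptions so the run is repeatable rather than random):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report, confusion_matrix

# Same random two-class dataset and 15% split as before.
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                           random_state=1, n_clusters_per_class=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15,
                                                    random_state=1)
sc = StandardScaler().fit(X_train)
X_train_std, X_test_std = sc.transform(X_train), sc.transform(X_test)

# k = 10 neighbors with uniform weights, as in the lecture.
clf = KNeighborsClassifier(n_neighbors=10, weights="uniform")
clf.fit(X_train_std, y_train)
y_pred = clf.predict(X_test_std)

cm = confusion_matrix(y_test, y_pred)
print(cm)                                     # 2x2: rows = true class, cols = predicted
print(classification_report(y_test, y_pred))
```

For the visual step the lecture uses mlxtend's plot_decision_regions(X_test_std, y_pred, clf=clf, res=0.02, legend=2), which needs the mlxtend package installed.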
39. Learn How to Use k Nearest Neighbors Classification for IRIS Dataset: Hello again. In this lecture we want to classify the iris dataset. For better efficiency, I copy and paste our required libraries from the first program, the first code. The iris dataset has four inputs for each iris flower and has three classes. The four inputs are sepal length, sepal width, petal length, and petal width, and we have three classes of iris flower in this dataset, named Iris setosa, versicolor, and virginica. So we want to classify this dataset. Every classification method that has been created and newly introduced is tested with the iris dataset — it is one of the most famous classification datasets — so I think it is helpful for you to know about it and how to use it. To start this program, we must load the dataset. I create an iris object, and here I use the datasets module that is built into sklearn and load iris. This line of code creates the iris dataset for us. After it we need to define our X and y. So here write iris.data, and we want to select two of the inputs, so here write the indices one and three — here we select two columns of the iris inputs. After it, we must import the targets, the three classes, so write iris.target. Alright, after it we must split the data into train and test parts. So use this code: x_train, x_test, y_train, and y_test, and we must use the train_test_split function. Here pass X and y, and then we must define the test size — I think thirty percent is good. After it, we can standardize our data similar to before. So write sc equal to the StandardScaler function, here write sc.fit, and here write x_train. Then write x_train_std equal to sc.transform(x_train), and copy and paste this code for x_test. Alright, after it we must define our classification method, which as you see in this lecture is the k-nearest neighbor classification method. So define clf equal to neighbors.KNeighborsClassifier, and here we must define the k value — I define ten — and use a uniform weight for this method, so here write 'uniform'. Alright, next we must feed our training data to this classifier. So write simply clf.fit, and here write x_train_std and y_train to fit the training data into the classifier. Finally, we need to generate output based on our classifier. So write y_pred for the predicted values of y: write clf.predict and use x_test_std to generate the predicted values. This function generates predicted values based on our classifier model and our test input data. So we can compare them with y_test to see the possible error — to see the accuracy — of our classification method. To see the output and error of this method you can use various metrics, but here we use the confusion matrix and the classification report. So you can write cm equal to confusion_matrix, and here write y_test and y_pred, and you can print this value to see the output of this method. So build your code. Alright, as we can see, we have three classes here — setosa, versicolor, virginica — so our confusion matrix must be a three-by-three matrix. As you can see, the first class has no error, while the second and third have a little error. To see a better picture of this output, we can use the classification report to see the output and accuracy of each class.
So write classification_report, and here write y_test and y_pred, and print it to see the classification report for each class, and build your code to see the output. Alright, as you can see, we have no error in class one, we have a 6% error in class number two, and we have no error again in class three. The overall accuracy is 98%, and that is awesome accuracy for a classification problem. After it, we can plot the decision regions by using this function; we can see the output of our method in visual space. So write x_test_std and y_test to see them in the plot, and we must introduce the classifier to this function. Finally, we must define some resolution for this object and a legend. After it, we can add a title for our plot object: here you can write 'Three-class classification for the iris flower dataset'. And finally plt.show, to see the output of this plot object, then build the code to see the output. Okay, as you can see, we have a different error in this iteration because of the randomness of these types of programs, and you can see that some data points fall into the blue region, because we have some error. You can test this method with various values of k. For example, I want to see what happens if I use k equal to one. So build your code and wait to see the output. As you can see, the regions are smaller than before, and our error increased from the previous execution. The error depends on this number, the k value, or number of neighbors. So if you increase this number — for example to 15 — we will have a better error, I think. Alright, as you can see, we have a better error. You can change this number to find the optimal value; for example, test this method with various values of k.
You can start from 1, 2, and go up to, for example, 20, and see where you have the highest accuracy and the lowest error. So in this lecture we learned how to use the k-nearest neighbor classifier to classify iris flowers, and in the next lecture we want to go deeper and write this code by ourselves. So stay tuned for the next lecture, and thank you.
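The whole iris example from this lecture can be sketched as one runnable script (the two selected columns, the 30% split, and k = 10 follow the lecture; the random_state is an assumption added so the run is repeatable):

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

iris = datasets.load_iris()
X = iris.data[:, [1, 3]]          # two of the four features, as in the lecture
y = iris.target                   # three classes: setosa, versicolor, virginica

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
sc = StandardScaler().fit(X_train)
X_train_std, X_test_std = sc.transform(X_train), sc.transform(X_test)

clf = KNeighborsClassifier(n_neighbors=10, weights="uniform")
clf.fit(X_train_std, y_train)
y_pred = clf.predict(X_test_std)

cm = confusion_matrix(y_test, y_pred)
print(cm)                          # 3x3 matrix, one row per iris class
print(clf.score(X_test_std, y_test))
```

Changing n_neighbors here is how you would run the k = 1 / k = 15 experiments described above.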
40. Write k Nearest Neighbors Classification Method by yourself Part 1: Hello again. In this lecture we want to create the k-nearest neighbor method by ourselves; we will write all of the required functions ourselves, and you can see what is behind the k-nearest neighbors algorithm. First of all, we need to import some required libraries. So write import numpy as np. Then we need to load the iris dataset — you can use any dataset for this lecture, but for better comparison with the previous lecture, we use the iris dataset. So from sklearn we must import datasets. We must also use train_test_split, so from sklearn.model_selection import train_test_split. And finally, we need to see the accuracy of our method, so from sklearn.metrics import accuracy_score. Alright. And lastly, we must import Counter, because we need it during this lecture: from collections import Counter. Alright. Now we must import the iris dataset. So iris equals datasets.load_iris, and then we must define our well-known X and y: iris.data, and here we want to select two columns of this dataset, indices one and three. And finally we must load iris.target. Then, similar to before, we must split our dataset into train and test. Here you can write x_train and x_test, copy and paste for y and change it to y_train and y_test, and here we must use the train_test_split function and input x, y, and the test size — test size equal to thirty percent — and close the parenthesis. So we are ready to define our required functions for the k-nearest neighbor classifier algorithm. As you will see, this method is a lazy method and there is no real need for training, so you can define the train function as follows: write x_train and y_train here as arguments, add the colon, and return nothing.
Then we need to define the predict function for our classification. This method has four arguments, so you can write x_train, y_train, x_test, and k. First of all, we need to define some empty lists, so I write distances and targets equal to empty lists. Then we must write a for loop to calculate the distances: for i in range of the length of x_train, and a colon here. We want to calculate the distance, so define distance equal to np.sqrt of the sum of the squared differences between x_test and x_train. So here write np.sqrt, and inside it write np.sum of np.square of x_test minus x_train[i]. And that's it. After I calculate this distance, we must append it to the distances list, so write distances.append with the distance and i. Alright. Then we need to sort these distances, so you can write distances equal to sorted(distances); this code sorts our distances list. After it, we need to make a list of the k nearest neighbors based on the k value — for example, if k equals five, you must make a list of the five nearest neighbors.
41. Write k Nearest Neighbors Classification Method by yourself Part 2: Then we must define a for loop in range of k to find the k nearest neighbors, and add a colon here. Define an index equal to distances[i][1]: this selects the second element of the i-th entry in distances, which is the index of that training point. After it, we must append these values to targets: write targets.append and add the value from y_train at this index. Alright. After all of this, we must return the most common target among the k nearest neighbors, and here we use the Counter object. So Counter(targets), and we must select the most common — here we must select the first element of the first entry. After it, we must define the k-nearest neighbor method itself. So here write kNN and define x_train, y_train, and x_test for it, and finally the predictions list and k; write a colon and come down here. First you can train the method — but as I said at the start of this lecture, this classification algorithm is a lazy algorithm, so the training does nothing at this point; it is one of the weaknesses of this method that all the work is deferred until you predict new inputs. So here call the train function. Then you must write: for i in range of the length of x_test, a colon, and then you must append the predicted value to predictions. So predictions.append, call the predict function here, and pass it x_train, y_train, x_test[i], a colon, and finally the k. Alright, afterwards we are ready to test our method and see the output. So first of all, we must define an empty predictions list, and I define k equal to five. And here I call the kNN method to see the output: here you must pass x_train from the iris dataset, y_train from this dataset, x_test, and finally predictions and k.
Then you must use np.array to convert the predictions into an array. So write np.array — it is one of the useful NumPy functions — and input predictions here. Finally, we must see the accuracy of our method: accuracy equal to accuracy_score of y_test and predictions. Finally, print the accuracy to see the output.
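Pulling parts 1 and 2 together, a runnable version of the from-scratch classifier might look like this (k = 5, the 30% split, and the two selected iris columns follow the lectures; the random_state is an assumption so the result is repeatable):

```python
import numpy as np
from collections import Counter
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train(x_train, y_train):
    # kNN is a lazy learner: "training" just keeps the data, so nothing to do here.
    return

def predict(x_train, y_train, x_test, k):
    distances = []
    targets = []
    # Euclidean distance from the test point to every training point.
    for i in range(len(x_train)):
        distance = np.sqrt(np.sum(np.square(x_test - x_train[i, :])))
        distances.append([distance, i])
    distances = sorted(distances)
    # Collect the labels of the k nearest neighbors, then take a majority vote.
    for i in range(k):
        index = distances[i][1]
        targets.append(y_train[index])
    return Counter(targets).most_common(1)[0][0]

def kNearestNeighbor(x_train, y_train, x_test, predictions, k):
    train(x_train, y_train)
    for i in range(len(x_test)):
        predictions.append(predict(x_train, y_train, x_test[i, :], k))

iris = datasets.load_iris()
X = iris.data[:, [1, 3]]
y = iris.target
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
predictions = []
k = 5
kNearestNeighbor(x_train, y_train, x_test, predictions, k)
predictions = np.array(predictions)
accuracy = accuracy_score(y_test, predictions)
print(accuracy)
```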