Deep Learning Profits: Top 4 Python Deep Learning Libraries You Must Learn in 2021

Lessons in This Class

92 Lessons (3h 11m)
    • 1. Tutor introduction

      1:02
    • 2. Deep learning course objective and benefits

      2:00
    • 3. Deep learning overall course blueprint

      3:43
    • 4. Deep learning course methodology

      2:24
    • 5. Deep learning big picture

      4:53
    • 6. Tools and requirements

      1:54
    • 7. TensorFlow course objective

      1:33
    • 8. TensorFlow course methodology

      1:55
    • 9. TensorFlow modules and API

      1:06
    • 10. TensorFlow changes and concepts

      2:36
    • 11. TensorFlow data pipeline

      1:49
    • 12. TensorFlow tf data code walk through

      3:24
    • 13. TensorFlow data augmentation

      1:15
    • 14. TensorFlow Keras walk through

      2:05
    • 15. TensorFlow fully connected nn model

      1:44
    • 16. TensorFlow fully connected model with data pipeline code walk through

      6:05
    • 17. TensorFlow CNN model steps walk through

      1:52
    • 18. TensorFlow CNN model code walk through

      4:21
    • 19. TensorFlow RNN based sequence models

      1:48
    • 20. TensorFlow RNN code walk through

      1:25
    • 21. ADVANCED: TensorFlow transfer learning walk through

      2:13
    • 22. ADVANCED: TensorFlow entire workflow with transfer learning code advance walk through

      6:56
    • 23. TensorFlow Quiz

      1:59
    • 24. TensorFlow exercise tasks

      1:12
    • 25. TensorFlow exercise solution walk through

      1:53
    • 26. TensorFlow exercise 2

      0:43
    • 27. TensorFlow exercise 2 solution walk through

      1:52
    • 28. TensorFlow course summary

      1:20
    • 29. MXNet introduction and course benefit

      1:36
    • 30. MXNet course coverage methodology

      1:15
    • 31. MXNet modules and APIs

      2:32
    • 32. MXNet NDArray

      0:57
    • 33. MXNet data augmentation and transformation

      1:48
    • 34. MXNet data pipeline transformation code walk through

      2:00
    • 35. MXNet deep learning model building steps

      2:00
    • 36. MXNet deep learning FCN code walk through

      2:53
    • 37. MXNet CNN model building steps

      1:53
    • 38. MXNet deep learning CNN model code walk through

      4:37
    • 39. MXNet RNN model steps

      1:43
    • 40. MXNet RNN code walk through

      5:29
    • 41. ADVANCED: MXNet transfer learning steps

      1:37
    • 42. ADVANCED: MXNet transfer learning code advance walk through

      3:41
    • 43. MXNet Quiz

      1:18
    • 44. MXNet exercise

      0:55
    • 45. MXNet exercise solution code walk through

      3:29
    • 46. MXNet exercise 2 overview

      0:45
    • 47. MXNet exercise 2 solution walk through

      1:02
    • 48. MXNet course summary

      1:11
    • 49. PyTorch course introduction

      1:54
    • 50. PyTorch course coverage methodology

      1:17
    • 51. PyTorch installation procedure

      1:54
    • 52. PyTorch modules and concepts

      3:25
    • 53. PyTorch Torch API code walk through

      2:33
    • 54. PyTorch data pipeline

      1:09
    • 55. PyTorch data transformation

      1:22
    • 56. PyTorch data pipeline and transformation code walk through

      1:29
    • 57. PyTorch torch.nn for configuring the deep learning models

      1:55
    • 58. PyTorch FCN code walk through

      5:58
    • 59. PyTorch steps for CNN model

      1:40
    • 60. PyTorch CNN code walk through

      4:37
    • 61. PyTorch RNN model construction walk through

      1:44
    • 62. PyTorch RNN code walk through

      5:03
    • 63. PyTorch transfer learning using TorchVision

      1:48
    • 64. ADVANCED: PyTorch transfer learning code advance walk through

      5:09
    • 65. PyTorch Quiz

      1:51
    • 66. PyTorch exercise

      1:11
    • 67. PyTorch exercise solution walk through

      2:04
    • 68. PyTorch exercise 2 overview

      0:52
    • 69. PyTorch exercise 2 solution walk through

      1:56
    • 70. PyTorch course summary

      1:29
    • 71. OpenCV introduction and course benefits

      1:29
    • 72. OpenCV course coverage methodology

      1:17
    • 73. OpenCV accessing image properties

      1:12
    • 74. OpenCV reading image and converting back

      1:18
    • 75. OpenCV basic operations code walk through

      5:25
    • 76. OpenCV image processing

      1:10
    • 77. OpenCV image transformation code walk through

      7:27
    • 78. ADVANCED: OpenCV feature detection

      1:03
    • 79. ADVANCED: OpenCV feature detection code advance walk through

      4:50
    • 80. OpenCV Quiz

      2:01
    • 81. OpenCV exercise

      0:59
    • 82. OpenCV exercise solution

      1:33
    • 83. OpenCV exercise 2 overview

      0:31
    • 84. OpenCV exercise 2 solution walk through

      2:08
    • 85. OpenCV course summary

      1:08
    • 86. Big secret MXNet Numpy interface

      2:23
    • 87. Big secret using TensorFlow graphics

      1:42
    • 88. Big secret using tfds dataset

      2:03
    • 89. Capstone project deep learning crash course

      1:46
    • 90. Capstone project solution walk through

      5:19
    • 91. Frequently asked questions

      3:04
    • 92. Additional resources for learning

      2:57

63 Students

-- Projects

About This Class

Want To Become A Top-Notch Deep Learning Developer That Big Corporations Will Always Scout?

Learn the secrets that helped hundreds of deep learning developers improve their deep learning development skills without sacrificing too much time and money.

The demand for deep learning developers is rising. In just a few years, more opportunities will open.

Soon, more people will start to pay attention to this trend and many will try to learn and improve as much as they can to become a better Deep learning developer than others.

This means that you will have more competitors than ever…

… And if you don’t improve your skill, you will be left behind and be stuck in the middle of the pack where you’re undervalued and underpaid by companies.

So, if you’re feeling stuck and don’t seem to be improving, it’s not because you aren’t talented or a good fit for this field.

The reason for this is that…

You’re Probably Relying on Free Information Found in Forums and Search Engines!

To be honest, this is common practice for most people.

After all, you can find tons of information online just by using the right words…

However, some information online can be misleading and cause confusion and contradiction.

And if you think about it, trade secrets and important information aren’t just given away by industry experts for free.

This is why the free information that you get isn’t reliable.

Truth be told, Deep learning development is a lucrative career.

You can earn tons of money because of your possible contributions to a business’s development.

But, if you fail to improve and become better, you won’t be able to maximize your growth in this field and…

You Will Be Stuck in Mediocrity and Never Be Able to Maximize Your Potential Earnings!

Companies like Amazon and Google pay professional Deep learning developers around $160,000 to $240,000 annually.

As you can see, this is a lucrative field so it isn’t surprising to see that more people are trying to learn deep learning and get hired by some businesses.

Your True Journey Toward Improving Your Deep Learning Development Skill Starts Here…

You will never have to search every corner of the internet to find your way to improve your Deep learning development skills.

This is why we are here to help people like you take the next step and become a top-notch professional.

We will share with you tons of information and secrets that only the industry experts know.

Our Course is for:

  • ASPIRING DEVELOPERS - who want to improve their skills without wasting so much time searching for answers on the internet.
  • BUSINESS ANALYSTS - who want to become better in making data-driven decisions.
  • STARTUP TECHNOPRENEURS - who want to become better in machine learning and data science.

If you’re any of these, then this course is designed to help you in the easiest and most efficient way possible.

Pre-requisite:

  • Basic Python programming experience.

Here’s What You’ll Learn Through Our Course:

  • Introduction to the Top Deep learning modules, APIs and installation:
    • Tensorflow 2.0, PyTorch, MXNet and OpenCV
  • Perform Data Pipeline Transformation
    • using Tensorflow 2.0, PyTorch and MXNet
  • Build Convolutional Neural Network CNN models
    • using Tensorflow 2.0, PyTorch and MXNet
  • Build Recurrent Neural Network RNN models
    • using Tensorflow 2.0, PyTorch and MXNet
  • Build Fully Connected Network FCN models
    • using Tensorflow 2.0, PyTorch and MXNet
  • Implement Transfer Learning
    • using Tensorflow 2.0, PyTorch and MXNet
  • Execute Image Transformation Operations using OpenCV
  • Execute Feature Extraction and Detection using OpenCV
  • Action steps after every module that are similar to real-life projects
  • Advanced lessons that are not included in most deep learning courses out there
  • Apply your new-found knowledge through the Capstone project
  • Download Jupyter notebooks that contain live code, simulations and visualizations that experts use.

You also get these exciting FREE bonuses!

Bonus #1: Big Insider Secrets
These are industry secrets that most experts don’t share without getting paid thousands of dollars. These include how they successfully debug and fix projects that would otherwise hit a dead end, and how they successfully launch a deep learning program.

Bonus #2: 5 Advanced Lessons
We will teach you advanced lessons that are not included in most deep learning courses out there. They contain shortcuts and programming “hacks” that will make your life as a deep learning developer easier.

Bonus #3: Solved Capstone Project
You will be given access to apply your new-found knowledge through the capstone project. This ensures that both your mind and body will remember all the things that you’ve learned. After all, experience is the best teacher.

Bonus #4: 20+ Jupyter Code Notebooks 
You’ll be able to download files that contain live code, narrative text, numerical simulations, visualizations, and equations that most experts use to create their own projects. This can help you come up with better code that you can use to innovate within this industry.

Meet Your Teacher


Python Profits

Master Python & Accelerate Your Profits

Teacher

We are Python Profits, and our goal is to help people like you become better prepared for future opportunities in Data Science using Python.

The amount of data collected by businesses has exploded in the past 20 years, but the human skills to study and decode it have not kept pace.

It is our goal to make sure that we are not left behind in terms of analyzing these pieces of information for our future.

This is why throughout the years, we’ve studied methods and hired experts in Data Science to create training courses that will help those who seek the power to become better in this field.



Transcripts

1. Tutor introduction: Hi there, my name is Raj, a trainer here at Python Profits. I'm one of the data science enthusiasts and practitioners on the Python Profits team, here to teach you all there is to know about coding in Python and data science. There is a big world out there, and every day the world evolves with the changing times; society is becoming more data-driven, technology-advanced, and research-focused. Data science is the way to go in the new normal. Here at Python Profits, we believe that Python can take you anywhere you want to go. We are here to guide you on your journey to become a data scientist, a data engineer, or anything you want to achieve using Python. See you in our lessons.

2. Deep learning course objective and benefits: Welcome to the crash course on deep learning frameworks. As part of this course, we will cover some of the important deep learning frameworks such as TensorFlow, PyTorch, and MXNet. The course also covers a computer vision framework, OpenCV. The main objective is to impart the knowledge and know-how to implement various deep learning models using these frameworks. You will learn about the various modules, APIs, and commands that are part of these frameworks. There are various deep learning models, such as fully connected networks, CNNs, and RNNs, and we will use these frameworks to implement them so that you understand the steps involved, how to perform hyperparameter tuning, and so on. As part of the course benefits, you will understand the various modules and APIs of the deep learning frameworks. We will go into each framework, TensorFlow, PyTorch, and MXNet, understand the various APIs and how to invoke them for a particular purpose, and use those APIs to implement deep learning models. By the end of this course, you will know how to implement deep learning models using these frameworks and will have learned the important concepts along the way. All the best.

3. Deep learning overall course blueprint: Welcome to the crash course on deep learning frameworks. Let us look at the overall course structure. As part of this crash course, we are going to look into different frameworks such as TensorFlow, PyTorch, MXNet, and OpenCV. These frameworks are typically used in deep learning model building. First of all, we will look into TensorFlow: the overall introduction of the package, followed by an API overview. Then we will cover the data pipeline and use Keras to see how to build deep learning models. We will explore the ways to build different deep learning architectures using TensorFlow, followed by quizzes and exercises. Next, we will cover PyTorch: the overall introduction of the framework, followed by an API overview. Then we will look into torch, an important core component of PyTorch, and see how to build deep learning models using the nn module, which is part of PyTorch. We will also see torchvision, which is a collection of various datasets, pre-trained models, and so on, followed by a quiz and exercises to test your understanding of each module. Next, we will cover one more important framework, MXNet, which is released by Apache. As part of this course, we will look into the introduction of MXNet, followed by an API overview.
Next, we will see the NDArray, which is the core data structure of MXNet, and then the Gluon package, which can be used to build layers and models. We will see the various implementations in MXNet, followed by a quiz and exercises. Next, we will look into OpenCV, the computer vision package: the overall introduction of OpenCV, followed by an API overview. We will then use OpenCV to perform image processing and feature detection. Finally, you will have a quiz and exercises. After completing all four crash courses, the next step is for you to complete the capstone project. An overall target will be provided to you; you can use any one of these frameworks to build a deep learning model, test it, and then make predictions with it. So this is the overall structure of the course: start with TensorFlow, then PyTorch, MXNet, and OpenCV, and finally complete a capstone project. All the best.

4. Deep learning course methodology: Let us look into the overall methodology of the course coverage. As part of this course, we will take one particular deep learning framework and cover its various modules, APIs, and components. For example, say we take the TensorFlow package: we will see what the important APIs are and how to use them. Using these APIs, we will implement the algorithms; for example, we can implement a CNN model using the TensorFlow API, the PyTorch API, or the MXNet API. We will cover the procedure as part of this course and implement it with an example. For instance, you may have a case study like image classification; we can take that case study and use one of these frameworks to implement it. Every course will have a short quiz and exercises to test your understanding of the concepts and methodologies of the framework that was covered. Finally, we will have a summary of the modules that were covered, which acts as a recap of what we covered and how the APIs can be used to implement deep learning models. After covering these crash courses, you will have a final capstone project to complete. The task will be to build a deep learning model, and you can use any one of the frameworks: if you prefer PyTorch, use the PyTorch package, or if you prefer TensorFlow, use the TensorFlow package to implement the task. So this is the overall methodology of the course coverage. All the best for completing these courses.

5. Deep learning big picture: Let us understand the big picture of deep learning. Deep learning enables us to analyze datasets of very high volume, like the data coming out of support systems, image-based data, time series data, and so on. These are high-volume data, and deep learning models can analyze them and give us the patterns. We can use deep learning to analyze various data types, such as image-based and sequence-based data. Nowadays, deep learning is also used to generate new images using generative models. The idea is that deep learning is an architecture similar to the brain: just as we have various neurons and nodes in the brain, we find a similar architecture in deep learning, with layers, nodes, and so on.
Each layer dissects the incoming information into patterns, which are analyzed by subsequent layers, consolidated, and then used for prediction. This helps us dissect and analyze the data in a way very similar to the functioning of the brain. There have been many advancements in recent years; starting from 2014, it is considered a golden age for deep learning, with advancements in model architectures, infrastructure, hardware, and support from the major technology players. Looking at all these advances, we can infer that deep learning is going to stay in the industry for quite a long time and is going to open up a lot of benefits in terms of the job market. Technological advancement in deep learning is one of the important factors contributing to its growth, and there are wide applications for deep learning models. Nowadays we see a lot of frameworks coming into play, such as TensorFlow, PyTorch, MXNet, and OpenCV. These are advanced deep learning frameworks which can be used to build varied architectures, be it CNN, RNN, and so on. As I said before, hardware and infrastructure advancements are driving innovation in deep learning. It is also an open-source ecosystem, so we are not dependent on any proprietary tools; everything is open to the market, and anybody can work with these frameworks and productionize their product. This provides a lot of flexibility in the industry, with no dependence on any one company's products. There is wide support from technology giants such as Google and Microsoft; TensorFlow is a product of Google, which open-sourced the complete package to the public, so anybody can build a deep learning model using TensorFlow. All these advancements and opportunities provide a wide platform for upskilling and bringing ourselves to the job market. There is extensive demand for deep learning specialists across the spectrum of industries; for example, you can find deep learning specialists employed in the oil and gas industry, ICT-based industries, manufacturing, and so on. As of now, however, there is only a small number of trained resources in the deep learning industry, so there is an enormous opportunity for specialists to thrive in the deep learning market for decades to come. It is one of the highest-paid job profiles and has been constantly in demand from 2017 onward. Finally, this Gartner phrase summarizes it all: through 2020, more than 75% of organizations will use deep learning based models for various use cases compared to classical ML techniques.

6. Tools and requirements: In this session, we will cover the tools and requirements for this course. You will need to install the latest version of Python, preferably from the Anaconda distribution, because it includes the Jupyter Notebook, which makes it easier for you to run the code and implement the models. The Anaconda distribution also comes with the data science library ecosystem, such as pandas, NumPy, and so on, which are required for the model building steps. If any additional package is required, you will need to install those packages too. Now, we are covering the deep learning frameworks,
and you need to install the frameworks we cover: TensorFlow, PyTorch, MXNet, and OpenCV. As part of each of these crash courses, the installation procedures are provided, which you can follow to install the packages. These are deep learning frameworks and they require a GPU instance. You might not have a laptop or workstation with a high-end GPU; in that case, you can use Google Colab, a platform provided by Google, which you can use for free to implement these models with these frameworks.

7. TensorFlow course objective: Welcome to the crash course on TensorFlow. The version we are going to see is TensorFlow 2. Let us see the course objective. TensorFlow is an important deep learning framework available for building deep learning models. As part of this course, we are going to cover the TensorFlow 2 framework and the important components for building deep learning models such as CNN, RNN, and so on. We will also understand the various modules and APIs in the TensorFlow 2 package, such as what Keras is, what estimators are, what the data pipeline is, and so on. You will also understand the steps involved in building and evaluating deep learning models using TensorFlow 2, and we will see various utilities that are part of this package, for example utilities for data augmentation steps. Now, let us see the course benefits: after completing this course, you will know how to implement deep learning models using this framework, and you will also understand some of the deep learning architectures we cover as part of the course.

8. TensorFlow course methodology: Now, let us look into the course methodology. As part of this course, we will provide a basic overview of the modules. TensorFlow consists of various modules, and each of these modules has various functionalities, so we will cover a basic overview of all these modules and their functionality. The next step is to build deep learning models. You will have different architectures like fully connected networks and convolutional neural networks, and we will see the procedure to build these models using TensorFlow. We will look into the various components and utilities required for building these deep learning models, and we will implement the models. For example, say we are implementing an image classification model based on a CNN architecture; we will implement that end to end using the TensorFlow package. We will have short quizzes covering some of the topics learned here, so that you understand the important concepts. We will have an exercise which you have to complete, with a task fully based on the package covered here, which is TensorFlow: you will use TensorFlow to build a model and provide the model outputs. We will also have a solution for this exercise, which comes as an attachment; after you have completed the exercise, you can refer to the solution. This is the methodology for covering this crash course on the TensorFlow package. All the best.

9. TensorFlow modules and API: Let us cover some of the important modules of TensorFlow 2. They are Keras and estimators, which are used mainly for model building. These are high-level APIs,
and as part of these APIs you will have building blocks which can be used for configuring your deep learning model. Using TensorFlow 2, you can perform customization of the models and also build data pipelines. The data will come from various sources, and we need to build a pipeline to extract the data and create batches for model training; using the tf.data module, you can build such data pipelines. There are also other utilities as part of TensorFlow 2, for example for data augmentation, computer vision, and so on. All these modules form the core of the TensorFlow 2 package.

10. TensorFlow changes and concepts: In this session, let us cover some of the important concepts and changes in TensorFlow 2. TensorFlow 2 is a consolidation of various concepts that has made it a more robust and easier-to-use framework compared to the earlier version. The first thing is eager execution. In the earlier TensorFlow versions, you needed to create a session and train the model, which was a cumbersome procedure. That is eliminated in TensorFlow 2, where eager execution has been made the mainstream: you can run the model and train it as is, as you can in other packages. The next important thing is the full integration of the Keras framework. Keras is now fully adopted and integrated into TensorFlow 2, which makes our job easier in building models, because Keras has the various components required for building deep learning models. The next important thing is the consolidation of the modules. In the earlier version, there were different and duplicate APIs which could be used to achieve the same task, for example contrib, estimators, Keras, and so on. This used to cause confusion among data scientists as to which one to use and in what situation. To avoid that, TensorFlow 2 has been consolidated to have only certain modules which are useful; for example, as I said earlier, Keras has been fully integrated with TensorFlow 2, and various unwanted or duplicate modules have been eliminated in this version. Finally, the data pipelines: with TensorFlow 2 it is easier to build data pipelines, because there is a dedicated module for them. It helps us build complex input pipelines from various sources and handle large volumes of data and varied data formats. These are the important changes and concepts in TensorFlow 2.
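To make the eager-execution point concrete, here is a minimal sketch (not from the course materials): in TensorFlow 2, operations run immediately and return values without creating a session. The tensor values are arbitrary examples.

```python
import tensorflow as tf

# Eager execution is on by default in TensorFlow 2:
# operations run immediately and return concrete values, no Session required.
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
y = tf.matmul(x, x)      # executed right away
print(y.numpy())         # result is available immediately as a NumPy array
```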
11. TensorFlow data pipeline: Let us learn what a data pipeline is and how we can use tf.data to build one. The data pipeline is an important part of your deep learning model training process. Using the data pipeline, you divide the data into different batches and send it to your model for training. The tf.data API is used to create reusable data pipelines which can be used for training the models, and it makes it possible to handle large volumes of data. When you are training deep learning models, it is not possible to send a huge dataset into your model in one go; this may end up creating a lot of issues, or accuracy may drop. In that case, you need to create batches and send them for model training. While sending data for model training, you can also incorporate complex transformation processes such as data augmentation, which is likewise applied when creating batches for model training. This is typically implemented for all deep learning model training processes. The steps include identifying the data source, creating a data pipeline, incorporating the data transformation methods, if any (for example resizing or changing the grayscale), and finally sending the data as batches to model training. These are the steps involved in building a data pipeline, and we will see them in the code walkthrough.

12. TensorFlow tf data code walk through: Let us see the steps involved in using tf.data. tf.data is an important module in TensorFlow 2 which can be used for building a data pipeline for training your deep learning models. Here I am using the Colab platform; in Colab, when you import TensorFlow you get the latest version, so the data module will be part of that version. You can see that I am using the command tf.data.Dataset.from_tensor_slices: if I provide a list, a NumPy array, or a tensor, it will be converted into a batched dataset. Here is an example: I am providing a list of numbers, and if you check the output, you can see it is displayed as a TensorSliceDataset. To access the values inside it, you need to loop over it, so I am using a for loop to print what is inside that variable. You can see that it has been divided into different batches of data, where each number forms a batch. This is very important for creating training batches for the deep learning model training process. In the second example, I am generating a set of numbers from a NumPy array with numpy.arange, which generates numbers based on the values we provide, and reshaping it into a three-by-three matrix. After running this function, the result is stored as tensors, like what we have seen earlier. You can access it using the for loop, and you can see that each of the rows forms a batch. This can be used for training your deep learning model by sending the data batch by batch. If you explore this particular dataset, it is nothing but an iterator, and you can use the next command to iterate across the rows, that is, across the batches. In the case where you create a generator function which generates new batches every time you run it, you can provide this generator function to tf.data.Dataset.from_generator and turn it into batch-wise data. Here you can see that I am using a for loop on this dataset, with repeat, batch size, and so on; based on the batch size, it is going to give you the batches, and each batch is going to have the ten numbers we provide here. As you can see, this is a very easy mechanism to create a batched dataset which can be used for training your deep learning models.

13. TensorFlow data augmentation: Let us understand how we can perform data augmentation using TensorFlow 2. Data augmentation is an important step where you enrich the dataset from the current dataset by changing some of its properties. For example, say you have image-based data; you might enrich this dataset by changing the color scale or the orientation. In this way, you increase the amount of data available for training. In TensorFlow 2, you can add this function as part of the data pipeline, as the sketch below illustrates.
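A minimal tf.data sketch of the pipeline just described; the random "images" and the specific augmentation choices (resize and random flip) are assumptions for demonstration, not the course's exact code.

```python
import numpy as np
import tensorflow as tf

# Toy image-like data standing in for a real dataset (illustrative only).
images = np.random.rand(100, 32, 32, 3).astype("float32")
labels = np.random.randint(0, 10, size=(100,))

def augment(image, label):
    image = tf.image.resize(image, [28, 28])          # resize step
    image = tf.image.random_flip_left_right(image)    # change orientation
    return image, label

dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .shuffle(buffer_size=100)
           .map(augment)
           .batch(16))

for batch_images, batch_labels in dataset.take(1):
    print(batch_images.shape, batch_labels.shape)     # (16, 28, 28, 3) (16,)
```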
The options we have include converting an image to grayscale, resizing an image, and changing its orientation; these are some of the methods of data augmentation. After performing data augmentation, you can feed the enriched dataset into the model and train it, and you can likewise use it for evaluation and prediction.

14. TensorFlow Keras walk through: In this session, we will look into the Keras module in TensorFlow 2. Keras is an important and easy-to-use module for building deep learning models. Keras was earlier a separate library; as part of TensorFlow 2, it is now fully integrated into the TensorFlow framework. You can use tf.keras to build deep learning models and deploy them. It is a high-level API to build and train a model. Keras also comes with the functional API, which is used to build more flexible models compared to the Keras Sequential model API. As part of the building process, we will have the data pipelines, which take the input data and feed it into the Keras-based model. With Keras you can build various deep learning models, such as fully connected networks, convolutional neural networks, and RNN-based sequence models; it provides options for building all these types of deep learning models. You also have hyperparameters, which you can tune to improve accuracy. After providing all these configurations, the next step is to compile and train. The compile step is also part of Keras, where you set up the loss functions and provide optimizers; for example, Adam is an optimizer, and there are various other choices you can select and apply. After that, you compile the model and train it with the data. The next step is to evaluate and predict. So, as I said, Keras is an easy-to-use module for building a deep learning model.

15. TensorFlow fully connected nn model: Let us understand the steps involved in building a fully connected neural network using TensorFlow 2. In a fully connected network, all the nodes of a layer are connected to all the nodes of the subsequent layer; for example, the nodes of the input layer and the nodes of the hidden layers are fully connected. This is a dense network, and it may not be suitable for image-based classification, but it is suitable for regression or classification problems. The typical process starts from the data source; from the data source, we perform the preprocessing steps as we would normally do for building a conventional ML model. The next step is to build the model configuration, and we can use Keras for building a fully connected neural network.

16. TensorFlow fully connected model with data pipeline code walk through: In this video, we are going to see how we can construct a deep learning model based on a fully connected network, with a data pipeline based on tf.data. Here we are going to generate a random distribution; these random values will form the features and the target. This is for demonstration purposes, and if you are trying to replicate this set of code, you can use a real dataset as well. The first step is to import the module; when you import in Colab, you get the latest version. After that, generate a random distribution for the features and the target, based on the dimensions you want; you can provide the dimensions for that.
Here, I am going to generate 100 rows of data with three features and 100 corresponding labels. After that, we create batches of data: we take train_x, which is the features, and y, which is the label, pack them in a tuple, and provide them to tf.data.Dataset.from_tensor_slices. This is going to slice the data and create batches. After you assign this to a variable, you can call batch on that variable and provide the batch size; you can change the batch size based on how you want the batches to be, and then you can shuffle it. These are the steps involved in creating a simple data pipeline. After that, you can explore the data using a for loop, run through it, and see how the batches are being formed. The next step is to create the fully connected deep learning model. For that, we are going to use tf.keras, which comes from TensorFlow, and in it you have the Sequential model. Sequential is used for creating models based on fully connected or CNN architectures, or whichever models have a sequential flow of data. Inside Sequential, provide the square brackets and create the layers. Here we are going to create a fully connected layer: the first layer has three nodes, because we are going to send three features, and the activation is ReLU. You can keep adding layers as much as you want based on the requirement; to do so, you replicate the Dense layer statements within the square brackets, and they will be added to the sequential model architecture. The final layer is a dense layer with one output, because we have to predict a single number; you can provide an activation function for it or not, based on the architecture you choose. This set of code is what constitutes the model architecture, the layers, and the nodes. May I remind you, this is a fully connected network, so you will find only dense layers. The next step is to compile the model. For compiling, you need an optimizer and a loss function. Here I am providing the optimizer as Adam; there is a list of various optimizers in TensorFlow, and you can explore them in their respective pages. Adam is suited for a wide range of deep learning models, and fully connected networks are one of them. The loss function here is mean absolute error, which comes from tf.keras.losses; since this is a regression-based model where you predict numbers, you provide the loss function as MAE, mean absolute error. The compile step is done; the next step is to train the model. For training, you provide the dataset batches that you created as the first argument, then the number of epochs the model is going to run, and the steps per epoch. These are important parameters which you need to provide for model training. When you execute this, it runs and trains the model epoch by epoch, and you can see the loss displayed. Since we are getting the data batches from random numbers, you are not going to see a drastic change in the loss; but if you use a real dataset and the model is actually learning, you will see the loss decreasing. After that, you can explore the model, get the model summary, and so on. These are the steps involved in building a simple fully connected network and providing a data pipeline using tf.data.
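A minimal sketch of the fully connected model and pipeline described above, assuming random features and targets as in the walkthrough; the layer sizes and epoch count are illustrative choices, not the instructor's exact values.

```python
import numpy as np
import tensorflow as tf

# 100 rows with three features and one numeric target (random, for demonstration).
train_x = np.random.rand(100, 3).astype("float32")
train_y = np.random.rand(100).astype("float32")

dataset = (tf.data.Dataset.from_tensor_slices((train_x, train_y))
           .shuffle(100)
           .batch(10))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1)                     # single regression output
])
model.compile(optimizer="adam", loss="mae")      # mean absolute error
model.fit(dataset, epochs=5)
model.summary()
```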
17. TensorFlow CNN model steps walk through: In this session, let us cover the steps involved in building a CNN model using TensorFlow 2.0. The TensorFlow Keras API allows you to build convolutional neural networks for image-based classification problems. As part of the CNN network, you need to add convolution layers, pooling layers, and so on, which is quite different from the typical fully connected network architecture. Using Keras, you can also add and test various optimizers as part of the model compilation. You need data, which comes in the form of image-based data, and you need to create a data pipeline with the data augmentation steps. The next thing is preprocessing: in addition to the data augmentation, any preprocessing steps you would like to do can also be incorporated as part of the data pipeline. After that, we feed it into a CNN model. The CNN model is configured using the Keras-based API, and we provide all the layers required for building a CNN model. The next step is to compile and train the model with optimizers; you can also experiment with selecting different optimizers. Finally, we make predictions for images, and you can also evaluate the models. These are the steps involved in building a CNN model using TensorFlow 2; let's go to the code walkthrough for this.

18. TensorFlow CNN model code walk through: Let's see the code walkthrough for constructing a convolutional neural network with TensorFlow. As a first step, import TensorFlow and all the necessary modules: you need the datasets module for creating the data pipeline, the layers module for constructing the model architecture, and the Sequential model. The next step is to import the dataset and build a pipeline. This is the CIFAR-10 dataset, and while importing, you can split it into train and test images with their train and test labels. After that, you can perform normalization: dividing by 255 normalizes the data. Once these steps are complete, you can look at the data; you can construct a simple Matplotlib figure, take a subset of the data, and visualize it using imshow, which displays samples from the dataset we downloaded. The next step is to build the CNN layers. This is a CNN built as a sequential model, and as we have seen, these are the steps involved: we need to construct the convolution blocks. In the convolution block Conv2D, you provide the number of filters, the kernel size, the activation, and the incoming image shape. This has to be repeated: the first convolution block, followed by the second convolution block, and so on. We can also introduce a dropout layer in between; that is up to you, and if you want more regularization, you can add one. A typical architecture has a Conv2D layer followed by a max pooling layer. You can see that the number of filters gradually increases from 32 to 64, and you can increase it further in subsequent layers. This is the summary of the model. After that, you need to flatten the data coming out of the Conv2D layers using layers.Flatten, and then you pass it into a dense layer with ReLU activation; you can also change the configuration and use more or fewer than 64 nodes, based on your preference. Finally, the last dense layer should have ten nodes, because our dataset has ten labels. With the architecture done, the next step is to compile and train the model. For model compilation, you need to provide the optimizer, the loss function, and the metrics; since we are doing a multi-class classification, we provide a categorical cross-entropy loss function. Then you can train the model; you can store the result in a variable called history and then use it to plot the model performance. While fitting the model, you provide the train images, the train labels, the number of epochs you want, and, if you are validating while training, the validation dataset. You can see that the model accuracy gradually increases from 50% to 70%. After that, you can evaluate from the history you captured, plotting accuracy and validation accuracy to see the model performance. These are the steps involved in constructing a convolutional neural network model using TensorFlow.
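Here is a hedged sketch of the CIFAR-10-style CNN described in the walkthrough; the filter counts and dense-layer width follow the narration, but treat the exact hyperparameters as illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10)                              # ten classes in CIFAR-10
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.summary()
# Training would look like:
# history = model.fit(train_images, train_labels, epochs=10,
#                     validation_data=(test_images, test_labels))
```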
19. TensorFlow RNN based sequence models: We can use TensorFlow 2.0 for building RNN-based sequence models. Similar to the way we define a convolutional neural network or dense layers, we can use tf.keras.layers to import RNN-based layers such as LSTM or GRU. These layers can be integrated with embedding layers to construct an RNN-based sequence model, which can analyze sequence-based data such as text paragraphs. We can also use RNN-based models for time series data. This involves a data pipeline: typically you will have text-based data which we need to convert into tokens, or time series data which we need to convert into sequences, so the preprocessing steps involve creating training sequences. The next step is the model configuration; as part of it, you add layers such as the RNN-based sequence layers, the LSTM or GRU layers. Then the training and compilation happen, the next step is prediction, and after that you can do the evaluation. These are the steps involved in building RNN-based sequence models using TensorFlow 2. Now, let's look into the code walkthrough for this.

20. TensorFlow RNN code walk through: Let's see how we can set up an RNN model architecture. Here you need to set up an architecture which is going to analyze sequential data. After importing all the necessary packages, you build the RNN architecture. You can see that you provide the embedding layer: it splits the text into word embeddings, transforming the plain words into embedding vectors. After that, the output is fed into the LSTM layer, which observes the embeddings and uses them to train the model. You can add a dropout layer or a batch normalization layer in between. After that, you pass it to a dense layer; since the output has ten classes, the dense output has ten nodes, and based on the number of outputs you require, you provide the number of nodes accordingly. This is a simple way to structure an RNN architecture.
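A minimal sketch of the embedding-plus-LSTM architecture just described; the vocabulary size, embedding dimension, and class count are placeholder assumptions.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # words -> embeddings
    tf.keras.layers.LSTM(64),                                    # sequence layer
    tf.keras.layers.Dropout(0.2),                                # optional regularization
    tf.keras.layers.Dense(10, activation="softmax")              # ten output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```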
21. ADVANCED: TensorFlow transfer learning walk through: Transfer learning is an important part of the deep learning model training process. We can leverage transfer learning to use a pre-trained model for our predictions, or we can build a model from scratch; this depends on the scenario and the use case we are working on. As I said, transfer learning is an important option we can utilize. It involves a pre-trained model, for example MobileNet or ResNet; these models are already trained, and the weights and biases are already configured. In transfer learning, we import these models and use the trained weights for predictions. You have the option to retrain those layers, but it is a costly operation; instead, we can cut off one or two layers, incorporate our own layers, and train only those. This is the transfer learning process. In TensorFlow, we have options like TF Hub or tf.keras.applications to import the pre-trained models. In TF Hub or tf.keras.applications, we can import the required pre-trained model and then select whether it is trainable or not. If it is not trainable, you use the features as they are for predictions; if it is trainable, you retrain the model. The option is yours: you can retrain the layers, or, when using the model as is, you can add your own final layers and train only those layers. Finally, after incorporating the pre-trained model, you compile the model and train it. These are the steps involved in incorporating transfer learning using TensorFlow 2.

22. ADVANCED: TensorFlow entire workflow with transfer learning code advance walk through: In this video, we are going to see how to leverage transfer learning for model training. The first step is to import the necessary packages. After that, we import the dataset: in this particular exercise, we import the oxford_flowers102 dataset that is available through TFDS, a module that works with TensorFlow. We then split it into a training set, a validation set, and a test set. This is an important step, because we are going to train the model, validate the model, and test the model. You can explore the dataset using dataset_info: you can check where this particular dataset is coming from, follow the link to the dataset, see that it has been split into test, training, and validation sets, see the number of items in each, and see the labels. After the necessary exploration, you can also view an image and its corresponding label. These are different flowers, and there are around 102 different labels for them. You can extract an item and see the particular label and the corresponding flower. You can also extract the names for these labels: as part of the dataset, there is a label_map.json file, which can be used to extract the label names. Here you can see that each number is assigned to a flower, and there are around 102 flowers in this dataset. The next step is to create the image transformation.
The pipeline we are going to create will include image transformation: casting to a particular dtype, resizing to the uniform shape we want, and then normalizing. For normalizing, we divide by 255. This function gives us the normalized image and the corresponding label. After creating this function, we put it into the data pipeline and create the training batches, validation batches, and test batches. The next step is to import the model we are going to use for transfer learning. These pre-trained models have URLs, and the URL can be taken from TF Hub, a platform where you will find different pre-trained models. You can load the model by providing the URL to the hub.KerasLayer function, and you can also provide the input shape to which incoming images will be reshaped; if this is the shape maintained in that particular model, you provide that input shape. After loading the pre-trained model, you set trainable to false. This is to leverage the already trained model; if we set it to true, the entire model architecture will be retrained on the dataset we provide. Here, as part of this exercise, we are going to leverage transfer learning, so we set it to false. The next step is to add layers on top of it: we add a dropout layer for regularization, and then a dense layer, which gives us the softmax probabilities over the 102 different labels. Then you can see the model summary and compile the model with the optimizer you provide. This is a multi-class prediction problem, so we provide the loss function as sparse_categorical_crossentropy. Then you set the number of epochs and train the model. After training for 20 epochs, you can see that the validation accuracy gradually increases towards 0.875. This is stored as history: when you write history = model.fit(...), all the information coming out of training is stored in that variable, and you can extract it and use it to visualize how the model was trained. You can see that the training accuracy rises above the validation accuracy; this gap between training and validation could be a case of overfitting in the model, which can be corrected by introducing more regularization layers and a better optimization methodology. You can check the model accuracy on the test data; here the test accuracy is around 78.24%. After that, you can save the model and reload it using the corresponding commands. When saving, you can provide the format, which is HDF5, a good format for saving and reloading deep learning models. After reloading the model, you can proceed with the same exercise or use it for predictions. These are the workflow steps involved in using transfer learning. In this session, you have learned how to load the data, create the pipeline, provide a normalization function, and then use the transfer learning methodology, where you import a pre-trained model, add layers on top of it, and train the model.
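The following sketch mirrors the transfer-learning workflow described above (TFDS dataset, frozen TF Hub feature extractor, small classification head). The specific TF Hub URL, image size, and batch size are assumptions; any compatible feature-vector model would work.

```python
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub

# Assumed feature-vector model and input size; swap in any compatible TF Hub URL.
FEATURE_URL = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4"
IMAGE_SIZE = 224

(train_ds, val_ds), info = tfds.load("oxford_flowers102",
                                     split=["train", "validation"],
                                     as_supervised=True, with_info=True)

def preprocess(image, label):
    image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
    return tf.cast(image, tf.float32) / 255.0, label      # normalize to [0, 1]

train_batches = train_ds.map(preprocess).shuffle(1000).batch(32)
val_batches = val_ds.map(preprocess).batch(32)

model = tf.keras.Sequential([
    hub.KerasLayer(FEATURE_URL, input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3),
                   trainable=False),                       # keep pre-trained weights frozen
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(info.features["label"].num_classes,
                          activation="softmax")
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_batches, validation_data=val_batches, epochs=5)
```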
23. TensorFlow Quiz: Let's check your understanding of TensorFlow 2.0. Name some of the important changes incorporated in TensorFlow 2. These include eager execution, which has been fully adopted into TensorFlow 2, integration with Keras, introduction of the tf.data module, and consolidation and removal of redundant APIs. Moving on to the next question: what is the use of the tf.data module? Is it for adding layers to your deep learning models, for data transformation, for building a data pipeline, or for transfer learning? The answer: it is for building a data pipeline. Moving on to the next question: which module can be used for transfer learning? We can use TF Hub for transfer learning; as part of this module, you will have lots of pre-trained models which you can use for transfer learning. Moving on to the next question: name the layer used for building an image classification model. Think about the steps we followed to build a CNN model and what type of layer we were adding there. The answer is the convolution layer; from keras.layers, you can import the convolution layers. In which part of the model code do we provide the optimizer? We provide it as part of model compilation.

24. TensorFlow exercise tasks: Now it is time to do an exercise on what we have learned. As part of this course exercise, you will construct a CNN model as an image classifier for classifying images of flowers based on the label names. The dataset we are going to use is Oxford Flowers 102, and it contains images of various flowers. A link to the dataset is available here; you can explore it as part of this exercise. The exercise tasks are: import the image dataset and create a data pipeline and a function for image transformation; import a pre-trained model of your choice and set the trainable parameter to false; add a single dense layer as part of transfer learning and train the model; finally, evaluate the model and predict on the test data. This is the task for the exercise. All the best.

25. TensorFlow exercise solution walk through: Let's look at the solution for the exercise where you have to build a fully connected network for a regression problem. Import the necessary packages from TensorFlow. Next, take the link to the dataset: using keras.utils.get_file, you can download it directly from the web page into a data file. From that, you can convert it into a pandas DataFrame and provide the necessary column names. The next step is to explore the data, check whether there are missing values, and clean the data. After that, split the dataset into a training and a test set. You can also visualize it; this is an optional step. The next step is to separate the label from the dataset; the label is MPG, which is what you are going to predict, so create the train labels and the test labels, and if you need to normalize the data, apply a normalization function. After that, construct a model, which is a sequential, fully connected one; provide the necessary architecture, such as dense layers, then provide the optimizer of your choice and compile the model. After that, fit the model and predict the outcomes. This is the solution for the exercise on how to use a TensorFlow fully connected network for a regression problem.
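Since the solution is only described in prose, here is a hedged sketch of that regression workflow. The UCI download URL and column names are assumptions based on the public Auto MPG dataset, and the layer sizes are illustrative.

```python
import pandas as pd
import tensorflow as tf

# Assumed location and columns of the UCI Auto MPG dataset.
URL = "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data"
COLUMNS = ["MPG", "Cylinders", "Displacement", "Horsepower", "Weight",
           "Acceleration", "Model Year", "Origin"]

path = tf.keras.utils.get_file("auto-mpg.data", URL)
df = pd.read_csv(path, names=COLUMNS, na_values="?",
                 comment="\t", sep=" ", skipinitialspace=True).dropna()

labels = df.pop("MPG")                          # target to predict
features = (df - df.mean()) / df.std()          # simple normalization

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(features.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1)                    # predicted MPG
])
model.compile(optimizer="adam", loss="mae")
model.fit(features.values, labels.values, epochs=10, validation_split=0.2)
```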
26. TensorFlow exercise 2: In this exercise, you are going to construct a fully connected network for a regression problem. The dataset is Auto MPG; the link is provided, and when you access the link, you can download the data. Now, let's look at the tasks. You should import the data using TensorFlow utilities, split the features and the target (the target here is the MPG values you have to predict), construct a fully connected network, fit the model, and predict the values for MPG. All the best. 27. TensorFlow exercise 2 solution walk through: Let's look at the solution for the exercise where you have to build a fully connected network for a regression problem. Import the necessary packages from TensorFlow. Next, you have the link to the dataset; using keras.utils.get_file, you can directly import this from the webpage into a variable. This will download as a data file, and from that, you can convert it into a Pandas dataframe. Provide the necessary column names for that. The next step is to explore the data, check whether there are missing values or not, and then clean the data. After that, split the dataset into a training and test set. You can also visualize the data and inspect it; this is an optional step. The next step is to separate the label from the dataset. The label is the MPG; that is what you are going to predict. Create the train labels and the test labels, and if you need to normalize the data, you can apply a normalization function. After that, construct a model, which is a sequential model, a fully connected one; provide the necessary architecture like dense layers, and then provide the optimizer. You can choose any optimizer you want and then compile the model. After that, fit the model and predict the outcomes. Now, this is the solution for the exercise on how to use a TensorFlow fully connected network for a regression problem. 28. TensorFlow course summary: Well done on completing the crash course on TensorFlow. Now let's summarize what we have seen so far. We started with an introduction to TensorFlow 2 and a basic overview of the package. We have seen the different modules and APIs that are part of TensorFlow and the major changes that have happened in the new release, which is TensorFlow 2. Next, we looked into the data pipeline, an overview of the tf.data module in TensorFlow 2, and how to perform data transformation. Next, we moved into deep learning model building. We explored the Keras module, which is an important module in TensorFlow, and constructed a fully connected network using it. We also looked into CNN and sequence models, and we learned how to leverage transfer learning by downloading a pre-trained model from TF Hub. After that, we moved into the quiz and exercises, where your understanding of this package was tested and you were asked to build a CNN model using TensorFlow 2. 29. MXNet introduction and course benefit: Welcome to the crash course on MXNet. MXNet is one of the famous deep learning frameworks that is available for building deep learning models. Let us see the course objective. As part of this course, we will cover the important components of the MXNet framework for building deep learning models. You will understand the various modules and APIs that are part of the MXNet framework, such as Gluon, NDArray, etc.; these are the building blocks for building a deep learning model using the MXNet framework. You will also understand the steps involved in configuring a deep learning model.
We will see different architectures, how we can configure them using MXNet, and also the various utilities that are part of this package. Now, what is the course benefit? After the completion of the course, you will understand the various modules and APIs in MXNet, you will understand how to build various deep learning models using MXNet modules, and you will implement a deep learning model using MXNet. 30. MXnet course coverage methodology: Let us look into the course methodology. As part of this course, we will cover a basic overview of the package. We will look into the various modules and APIs that are required for building deep learning models. MXNet helps us build various deep learning models, such as fully connected networks, convolutional neural networks (CNN), and also sequence-based recurrent neural networks (RNN). We will try to cover all these algorithms by implementing them using MXNet APIs. After the implementation and understanding of the various concepts and APIs in this package, you will have a short quiz where you have to answer some important questions. After that, you will need to complete an exercise. This will be an end-to-end exercise where you have to use the MXNet framework for building a deep learning model. After completing the exercise, you will have an option to review your solution. 31. MXNet modules and APIs: Let us look into some of the important modules in MXNet. MXNet is an end-to-end deep learning framework. Using MXNet, you can perform tasks such as data manipulation, and you can also build layers, configure the deep learning networks, and evaluate them; therefore, it is an end-to-end deep learning framework. Now, we will look into some of the important modules that are part of this framework. The first one is Gluon. Gluon is an imperative Python interface for building deep learning model components. As part of Gluon, you have various APIs to build a deep learning network, such as layers for dense networks, layers for convolutional neural networks (CNN), layers for recurrent neural networks (RNN), etc. It is designed to be flexible and dynamic while providing high performance. The next important one is the NDArray API, which forms the core of the MXNet package. This API is very similar to a NumPy array. When you use NumPy, you can perform various tasks like generating arrays, manipulating those arrays, and performing operations on them. Similar to that, MXNet needs a core data structure, and NDArray provides that core data structure. Using NDArray, you can manipulate the data similar to a NumPy array and perform various operations. Ultimately, this NDArray provides the tensor format for the deep learning models. The next important API is the autograd API. This forms the core of training your deep learning model: we need a differentiation mechanism to backpropagate the losses across the network, and using the autograd API, you can perform that. So this is the core component of the deep learning model training process. In addition to these modules, there are also other utilities which you can explore in MXNet. 32. MXNet NDArray: NDArray is one of the core modules of MXNet. It is used for data manipulation, and NDArray is very similar to the NumPy package.
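As a minimal illustration of NDArray (assuming MXNet is installed), a few typical operations look like this:

    from mxnet import nd

    x = nd.ones((3, 4))                              # an array of ones
    y = nd.random.uniform(0, 1, shape=(3, 4))        # random values from a uniform distribution
    z = x + y                                        # element-wise addition
    w = nd.dot(x, y.T)                               # matrix multiplication
    print(z[0, 1:3])                                 # slicing, just like NumPy
    a = z.asnumpy()                                  # convert to a NumPy array
    b = nd.array(a)                                  # and back to an NDArray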
Using NDArray, you can manipulate data, such as creating arrays of various dimensions, creating arrays from random number distributions, etc. These operations closely resemble the NumPy package. In addition to that, you can also perform operations like array multiplication, addition, etc., and you can slice an NDArray very much like a NumPy array. So this forms the core data structure of the MXNet package. 33. MXNet data augumentation and tranformation: Data augmentation is an important part of deep learning model training activities: the more data, the better the model gets trained. As part of deep learning frameworks, there are various utilities available to augment the data. There are also external packages like OpenCV that are commonly used to augment the data, but in addition to that, there will be an in-built feature in all the deep learning frameworks. MXNet also has an in-built feature for data augmentation: the image data can be transformed in ways that vary the grayscale, the color scale, the resizing of the image, and the orientation of the image. These are some of the data augmentation methods available as part of the MXNet package. After you have provided the logic for this, you can use the MXNet transform to convert these images into modified images and enrich the data that is required for building and training your models. With the data augmented, you can send it to the model training steps, train the model, make it a better model, and evaluate and predict with it. This is the usual cycle for deep learning model training, and typically, when we are working on convolutional neural networks for image-based deep learning models, we end up doing a data augmentation step. MXNet provides the transform API to perform these data augmentation steps. 34. MXNet data pipeline transformation code walk through: In this video, let us see how we can perform the transformation on the incoming data. Let's import the necessary packages from MXNet; we are going to import NDArray, etc. These are for model training, but for transformation you can use nd. In nd, you have some of the important operations which can perform the transformation of the dataset, and you can incorporate it all as part of gluon.data.DataLoader. Here we are setting up the transformation function. As part of it, we provide the data and the labels, use nd.transpose, and return the transformed data and label; the dataset is returned as batches to the train data and test data. As part of the train data and test data, you import the dataset directly from gluon.data.vision, which is the MNIST dataset we are going to import. In the logic, you can also provide the transform function which we have created. As part of this transform function, you cast the data to np.float32 and also normalize the data by dividing it by 255, the maximum grayscale value. Now, all these transformations are applied to the incoming data, which gets transformed and loaded into the train data and test data. This is also based on the batch size which you have provided here. Therefore, this is a more efficient way to create training and test batches for training your model.
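A minimal sketch of this kind of transform-based pipeline (MNIST and a batch size of 64 are illustrative choices) could look like this:

    import numpy as np
    from mxnet import gluon, nd

    def transform(data, label):
        # cast to float32, move the channel axis first, and normalize by 255
        return nd.transpose(data.astype(np.float32), (2, 0, 1)) / 255.0, label.astype(np.float32)

    batch_size = 64
    train_data = gluon.data.DataLoader(
        gluon.data.vision.MNIST(train=True, transform=transform),
        batch_size=batch_size, shuffle=True)
    test_data = gluon.data.DataLoader(
        gluon.data.vision.MNIST(train=False, transform=transform),
        batch_size=batch_size, shuffle=False)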
35. MXNet deep learning model building steps: Let us look into the steps involved in building a deep learning model using MXNet. As part of MXNet, we have a specific module called Gluon, which is a high-level API for configuring model architectures. It has APIs for various layers; using the layers, you can define the model blocks. For example, in Gluon, you can find layers for dense networks, layers for convolutional neural networks, and also for recurrent neural networks. In addition to this, there are various other layers which you can incorporate as part of a deep learning architecture, and they can be structured in a reusable format as well if you use a class structure. Typically, as part of deep learning model training, we need to build a data pipeline; that is the first step of the deep learning model building process. After that, you need to configure the model. In this step, we can use the Gluon API: as part of the Gluon module, you have various layers which can be used to configure the model architecture. The next step is to train your model. There is an API called the autograd API, which can be used for training the model, such that we can provide expressions for loss computation, backpropagation, and updating the parameters, which form the core steps of the model training process. After completing all this, you will have a trained model which can be used for evaluation and predictions. So these are the steps involved in building a deep learning model using MXNet. 36. MXNet deep learning FCN code walk through: Let's see how we can build a fully connected network using MXNet. First of all, in Colab, you need to install MXNet. After that, import the necessary packages: mxnet, and from mxnet, nd, autograd, and gluon. Gluon is used for constructing your model architecture, autograd is for training your model, and NDArray is the core data structure. Now, you can provide a context; based on the context, your model will be trained on GPU or CPU. You can experiment with this by changing the runtime and setting it to GPU, and comparing with the context set to CPU. Here we are providing CPU as the context, and then we initialize the required variables and parameters. The next step is to create the data loader; this is to create the data batches based on the dataset we have. We generate a random dataset which is provided to the data loader, and based on the batch size, it will create the train data. Next, construct a very simple, fully connected dense network. Here we are setting up a dense network using gluon.nn. The next step is to initialize the parameters for the model; here we can use a normal initializer or a Xavier initializer, and there are various other options as part of it. After that, we train the model by providing the necessary pieces like the loss function and an optimizer for compiling, etc. Then we create the logic for the training process, which includes sending batches of data into the model and then using autograd to train it. This typically involves calculating the loss and backpropagating it: you can see that, using autograd, you are calculating and backpropagating the loss, and the parameters are updated. This is an important process; based on this, the model gets trained. You can see that the loss keeps reducing in each epoch. Now, this is a simple program for constructing a fully connected network; we are using a dense layer. You can also increase the number of dense layers and create a sequential model, which is available as part of the Gluon module.
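To make this concrete, here is a minimal sketch of a Gluon dense network trained with autograd on synthetic data; the data, layer size, optimizer, and learning rate are illustrative assumptions:

    import mxnet as mx
    from mxnet import nd, autograd, gluon

    ctx = mx.cpu()
    # synthetic regression data, purely for illustration
    X = nd.random.normal(shape=(1000, 2))
    y = 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2 + 0.01 * nd.random.normal(shape=(1000,))
    train_data = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y),
                                       batch_size=32, shuffle=True)

    net = gluon.nn.Dense(1)                                  # a single dense layer
    net.initialize(mx.init.Normal(sigma=0.1), ctx=ctx)       # or mx.init.Xavier()
    loss_fn = gluon.loss.L2Loss()
    trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.01})

    for epoch in range(5):
        cumulative_loss = 0.0
        for data, label in train_data:
            with autograd.record():                          # record operations for backprop
                loss = loss_fn(net(data), label)
            loss.backward()                                  # backpropagate the loss
            trainer.step(data.shape[0])                      # update the parameters
            cumulative_loss += loss.mean().asscalar()
        print(epoch, cumulative_loss / len(train_data))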
37. MXNet CNN model building steps: Let us understand the steps involved in building a CNN model using MXNet. MXNet's gluon.nn module has all the layers that are required for building a convolutional neural network, such as convolution layers, max pooling layers, etc. As part of the CNN network, you need to add convolution layers, pooling layers, dropout layers, etc., to form a convolution block as part of the architecture; this is quite different from the fully connected network architecture. Using Gluon, you can also add various parameters such as optimizers: you can select Adam or variants of Adam and then use it for model compilation. If you look at the workflow, the initial step is to build a data pipeline from the image dataset; preprocessing steps such as normalizing and resizing the images are part of this. Then we need to build the model configuration; basically, this involves configuring convolution layers, pooling layers, dropout layers, etc. Then comes training and compilation, where you select the optimizers that are required for compiling and training the model. The last step is to test and predict using the trained model. So these are the steps involved in building a CNN model using MXNet. 38. MXNet deep learning CNN model code walk through: Let us understand how to construct CNN layers with MXNet. First of all, create the data pipeline, as we discussed before: set up all the required parameters and create a transform function. In this function, you transform the incoming dataset, provide the necessary data type, and normalize the data. You can add this transformation function as part of the data pipeline and create the train and test datasets. Now let us look at how to construct the CNN layers. Gluon is an important package from where you can import the necessary layers; from gluon.nn, import the Sequential layer, which is necessary to construct a sequential CNN network. Having set this net in the name scope, you can add the layers and create the CNN blocks. For example, here we are adding this particular layer and creating a CNN block. This is the first CNN block, which consists of a Conv2D layer, where we provide the channels, the necessary kernel size, and the activation function, which is set as relu. The next step is to add the max pooling layer. This aligns with the CNN architecture that we are required to build: a convolution layer followed by a max pooling or average pooling layer. All these layers are available as part of gluon.nn. After importing max pooling, provide the pool size and the strides it has to take. After that, create another CNN block with the same Conv2D, increase the number of channels, add the max pooling after that, and you have the second block. These are two blocks of CNN layers. We cannot take the CNN outputs directly for the prediction, so you need to flatten them, send them into the dense network, and then use that for the actual prediction. To do that, the output from this layer is sent into the Flatten layer, which will flatten the output from the last max pooling layer. After that, it is passed on to the dense network.
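Before moving on to the final dense layer, here is a minimal sketch of these CNN blocks in Gluon; the channel counts, kernel and pool sizes, and the class count of 10 are illustrative assumptions, and the dense layers at the end are explained next:

    from mxnet.gluon import nn

    net = nn.Sequential()
    with net.name_scope():
        # first CNN block: convolution followed by max pooling
        net.add(nn.Conv2D(channels=20, kernel_size=5, activation="relu"))
        net.add(nn.MaxPool2D(pool_size=2, strides=2))
        # second CNN block with more channels
        net.add(nn.Conv2D(channels=50, kernel_size=5, activation="relu"))
        net.add(nn.MaxPool2D(pool_size=2, strides=2))
        # flatten and finish with dense layers for the actual prediction
        net.add(nn.Flatten())
        net.add(nn.Dense(512, activation="relu"))
        net.add(nn.Dense(10))   # assuming 10 output classes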
And finally, it passes into the final dense layer, which is used for the final prediction. This is the way to construct an architecture for a CNN network: you can change the number of blocks you want, and you can also add a dense network at the end. These are the overall steps involved in constructing the CNN network. After that, we can initialize the parameters and set the optimizers. The optimizer can be Adam, stochastic gradient descent, or a variant of Adam; various optimizers are supported by MXNet. The first step is to initialize the parameters for the network; the second step is to provide the optimizer along with the learning rate, which you can increase or decrease. The other important component is the loss function, which calculates the loss that is essential for model training. Then you can define the function for computing the accuracy and put everything into the training logic using autograd. As part of the autograd training logic, you compute the loss, backpropagate it, update the parameters, and repeat this until the epochs end. So these are the steps involved in constructing a CNN model. To recap: construct the data pipeline, construct the model with the CNN layers, initialize the parameters, set up the optimizers and the loss functions, and finally, train it with the training logic using autograd. 39. MXNet RNN model steps: In this session, we are going to look into the steps involved in constructing an RNN-based sequence model using MXNet. Similar to the way we defined the CNN or dense layers, using the MXNet Gluon module we can import the RNN layers such as LSTM or GRU for constructing a recurrent neural network. These layers can be integrated with embedding layers to construct an RNN-based sequence model. If you are working on text data or time series data, you can use the LSTM or GRU network layers. If you look at the typical flow, we need to construct the data pipeline and preprocess the data. The next step is to configure the model; here we use the Gluon module to configure the layers, such as LSTM or GRU. The next step is to train and compile the model. As part of an RNN model, you can use optimizers such as RMSprop, which works well for time series based data; you can use these kinds of optimizers for compiling and training the model. Then you can evaluate the model and use it for predictions. So these are the steps involved in constructing an RNN-based sequence model using MXNet. 40. MXNet RNN code walk through: Let's see how we can build an RNN model using MXNet. As part of this session, we are going to build an RNN model using MXNet for stock price prediction. First of all, you need to import the necessary packages; Gluon is an important package from where you take the layers for building the network. The next thing is to import the dataset. As part of the dataset, we import the stock price of Apple shares; it has open, high, low, etc. We are going to predict the average price, which is computed from open, high, and low. Then we also do data transformation steps like extracting the years, months, etc. Now we are reformatting the dataset, which has the date, open price, and so on; we remove those columns and keep only the average price and the date.
After doing all the necessary transformations, we have the price series, which is nothing but the average price per day. Here we are going to predict the next price using an RNN model built with MXNet. First, we need to reshape the dataset; we need to transform it into a form that the RNN model can understand. The RNN model will predict one timestamp into the future, so you need to build the dataset in a format that sets the model up to predict the next timestamp. Before that, we need to do some data extraction and modification so that you create the train data and the labels; the label is nothing but the next timestamp's value after the current timestamp. After doing all the necessary transformations of the dataset, we build the RNN network. Before that, you can set the context: it can be a GPU or CPU context, based on the environment in which you are running the model. The next step is to build the network. As part of the RNN network, we import the LSTM (long short-term memory) layer, which is helpful for predicting these kinds of datasets. We can also set bidirectional to True or False; bidirectional means analyzing the time series data in both directions. Following the LSTM, you can have the dense network. The choice is yours; you can keep it at two or three layers, because it all depends on the network complexity you want to construct. Now that you have the network, you need to initialize the values. For that, we can use the initialize API and provide the Xavier initializer; you can also use a normal initializer, but Xavier is well suited for RNN-based networks. Next, we need to set the model optimizer; here we are providing Adagrad, and there are also options for you to provide Adam or RMSprop, and to set the learning rate and the loss functions. So these are the different components: the network, followed by initialization, the optimizer, and the loss function. After that, obviously, you need to evaluate the performance of the model, so you construct the evaluation function: you take the actual value and the predicted value, compare them, and return the average loss. These are the important components. The next step is building the data iterator and providing the training logic. In the training logic, the values are sent batch-wise, and the model is trained based on the loss value. You can see that the epoch training happens, and finally you can see the training scores: gradually the loss decreases, and you can also check the validation loss. Finally, you get the output, which is a prediction of the next timestamp using the RNN LSTM-based model. So these are the steps involved in constructing an RNN model using MXNet.
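A minimal sketch of such an LSTM network in Gluon is shown below; the hidden size, the single dense output, the optimizer, and the learning rate are assumptions for illustration, and the input is assumed to be in (time, batch, features) layout:

    import mxnet as mx
    from mxnet import gluon
    from mxnet.gluon import nn, rnn

    class StockLSTM(gluon.Block):
        def __init__(self, hidden_size=32, **kwargs):
            super(StockLSTM, self).__init__(**kwargs)
            with self.name_scope():
                self.lstm = rnn.LSTM(hidden_size, num_layers=1, bidirectional=False)
                self.dense = nn.Dense(1)          # predict the next value

        def forward(self, x):                     # x shape: (time, batch, features)
            out = self.lstm(x)                    # zero initial states are used when none are passed
            return self.dense(out[-1])            # use the last time step's output

    ctx = mx.cpu()
    net = StockLSTM()
    net.initialize(mx.init.Xavier(), ctx=ctx)     # Xavier initialization, as described above
    trainer = gluon.Trainer(net.collect_params(), "adagrad", {"learning_rate": 0.01})
    loss_fn = gluon.loss.L2Loss()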
41. ADVANCED: MXNet transfer learning steps: Let us understand how to perform transfer learning using MXNet. In MXNet Gluon, there is an API called the model zoo, which consists of various pre-trained models. Transfer learning is an important option for model training and prediction, and almost all deep learning frameworks have a particular API from which you can download a pre-trained model and use it for transfer learning. As part of MXNet, we have the model zoo; the model zoo consists of various famous pre-trained models which you can import and use for transfer learning. The first step is to import them using this API and set the parameter for whether they are trainable or not. If it is trainable, then the layers are retrained; if it is not trainable, then you add the final layers, where you train only the final layer based on the custom dataset that you have. After that, you can use the model for compiling, training, and prediction. So these are the steps involved in implementing transfer learning using MXNet. 42. ADVANCED: MXnet transfer learning code advance walk through: Let us see the code walkthrough for transfer learning in MXNet. As part of the steps, you need to install MXNet and GluonCV on your laptop, or, if you're using Colab, you need to run these commands before working on these models. After that, you need to download the dataset using the URL provided here; this is part of the GitHub page, so you can use this URL to download the dataset. Once that step is done, you can provide the necessary hyperparameters after importing all the necessary packages, things like the number of epochs, the learning rate, etc. You can define them later as well, but if you want to define them at the initial stage, you can provide the list of hyperparameters. Now, set the context, whether it is a GPU or CPU; if you are working on a CPU, you provide the CPU context. The next step is to provide the data augmentation. As part of this, we are using transforms.Compose and stacking all the different transformation steps like resizing, flipping the image, color jitter, lighting, etc. These are optional steps; you can skip them if you do not want them. Then you convert the data into a tensor: it is in a NumPy shape, and you need to convert it into an MXNet tensor and then normalize the data. Similarly, you can do that for the test data as well. These are the data augmentation steps. After that, you set up the data loader: you call the data augmentation functions, load the data, and create training data, validation data, and test data. So these are the data preprocessing and preparation steps. The next step is the model building. As part of the model building, we call ResNet-50, which is the pre-trained model for transfer learning. After calling the model, we provide the initialization parameters and then set the training algorithm, where we provide the optimizer, learning rate, momentum, etc. We also set the loss function as softmax cross-entropy because this is a multi-class classification problem. Next, we need to define a function to validate and check the accuracy. You set up a function here which takes the predictions, compares them with the actual labels, and gives us the accuracy metric. The final step is the training loop, where you provide the logic for training the model. Here we need to provide the backpropagation: we call the batches, apply the transformation steps, and then compute the loss. This loss has to be backpropagated using l.backward(), and this is going to update the parameters in our model. So these are the steps required for implementing transfer learning using MXNet. This code is shared as part of the attachment.
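A minimal sketch of the model zoo fine-tuning pattern described here is shown below; the choice of ResNet-50, the class count, the optimizer, and the learning rate are illustrative assumptions:

    from mxnet import gluon, init
    from mxnet.gluon.model_zoo import vision

    num_classes = 10                                   # assumption: set to your dataset's classes
    pretrained = vision.resnet50_v2(pretrained=True)   # download the pre-trained ResNet-50
    net = vision.resnet50_v2(classes=num_classes)      # same architecture, new output size
    net.features = pretrained.features                 # reuse the pre-trained feature layers
    net.output.initialize(init.Xavier())               # only the new output layer is initialized

    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
    trainer = gluon.Trainer(net.output.collect_params(), "sgd",
                            {"learning_rate": 0.01, "momentum": 0.9})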
43. MXnet Quiz: Let's check your understanding of MXNet. What is the core data module of MXNet? It is NDArray. Moving on to the next question: what is the purpose of the Gluon module? Is it for adding layers to DL models, data transformation, data pipelines, or transfer learning? The answer is adding layers to the DL models. The next question: can we perform transfer learning using MXNet? Yes, of course, using the model zoo API. Moving on to the next question: how do you import a convolution layer for constructing a CNN model? The answer is gluon.nn.Conv2D. The next question: how do we import the transform functions? The answer is: from mxnet.gluon.data.vision, we import transforms. 44. MXNet exercise: Let us see the exercise for this crash course on MXNet. As part of this exercise, you need to construct a deep learning model based image classifier using MXNet. You need to construct the data pipeline: the dataset will be Fashion-MNIST; download it and create a data pipeline, using image transformations as necessary. Build the CNN layers from MXNet Gluon and build the architecture, train it using the autograd module, evaluate the model, and use it for predicting on the test data. So these are the steps for this exercise. All the best. 45. MXNet exercise solution code walk through: Let's see the solution for the exercise in MXNet. Here, as part of the exercise, you need to download the MNIST dataset and create the CNN model for it. As a first step, import all the necessary packages, which include NumPy and, from MXNet, nd, autograd, gluon, etc. Set the context to the CPU or a GPU based on the environment you're working in; if you're working in Colab, you can go to the runtime, change the runtime type, and set it as a GPU or CPU, whichever way you want. The next step is the transform and data loading. Here you need to import the MNIST dataset that is available as part of gluon.data.vision, where you can download it directly. When you are downloading it, you can provide the transform function, which is going to transform and normalize the images. After that step is done, you need to construct the model architecture, the CNN. You can construct various blocks of CNN, adding a CNN layer and a max pooling layer, and you can also introduce a dropout layer as part of the overall architecture. This is your model architecture, followed by initializing the parameters; you can use Xavier, normal, or your own initializer. Set up the optimizer: provide any type of optimizer that is suitable for a CNN, such as Adam, SGD, etc., and also provide the learning rate. You can choose the optimum range for all of these based on various experiments; you should not stop at running one cycle or one experiment on this, you need to experiment with different combinations of hyperparameters to identify the best combination, and this involves a lot of iteration. After that, construct the loss functions, and then you are going to evaluate accuracy. Before that, construct a function which takes the predictions, compares them with the labels, and gives you the accuracy. After that, put everything together inside your training logic, which runs based on the number of epochs you provide. There you train your model using autograd, compute the loss, backpropagate it, and update the parameters; this runs until the epochs end. Finally, you will get the model accuracy. After that, you can take the trained model and use it for prediction. So these are the steps involved. You can improvise on the steps; this is only a highlight, a walkthrough of how to achieve the solution.
But again, you can improvise on this: you can add more layers and construct a more in-depth architecture to tune your model better and get better accuracy. All the best. 46. MXNet exercise 2 overview: As part of this exercise, you are going to construct a single dense layer using MXNet and initialize it. The tasks include: construct a single dense layer with the number of nodes of your choice; initialize the layer, using a random normal distribution for initialization (you can also use a uniform distribution if required); and check the layer weights. This exercise is to test how you are able to initialize the layers. All the best. 47. MXNet exercise 2 solution walk through: Let us look at the solution in MXNet for constructing a single layer. First, we need to install MXNet up here in Colab. After that, import the necessary modules; the dense layer comes from the nn module, therefore import it from Gluon. Now provide the necessary nodes: you can choose the number of nodes you want, and then initialize the layer. You can initialize it with any kind of distribution, normal, uniform, or any other distribution you would like; typically we use normal or Xavier initialization. After initialization, check the layer by passing in x, which has a normal distribution. After that, inspect the layer weights with weight.data(). All the best. 48. MXNet course summary: Well done on completing the crash course on MXNet. Now let us summarize what we have learned so far. As part of this course, we have explored MXNet and seen a basic overview of the package and the modules and APIs that are part of MXNet. Next, we explored how to perform data manipulation, specifically with the core module called NDArray, for creating arrays and tensors and performing basic operations on arrays; this demonstrates how the NDArray module works. Next, we moved into building deep learning models and explored the Gluon module, which is used for constructing the model architectures. We constructed models based on different architectures, and then we trained the models using the autograd principle. Lastly, an exercise was given to build a CNN model using MXNet. Congrats again. 49. PyTorch course introduction: Welcome to the crash course on PyTorch. PyTorch is an important deep learning framework. It is widely used by the research community as well as by enterprises that want to deploy deep learning models. Let's see the course objectives. First, as part of this course, we are going to cover the PyTorch framework, its important components, and how to build deep learning models using the PyTorch framework. Second, we will understand the various modules and APIs which come as part of the PyTorch package. These modules and APIs are used to build deep learning models, construct them with the layers, and provide optimization features; you can also use them for various other utilities, such as data augmentation and building the data batches. Third, we will understand the steps involved in building and evaluating deep learning models using PyTorch. Fourth will be an overview of the various utilities; as I said, PyTorch comes with various utilities for data augmentation, etc., and we will have a look at what these utilities are. Now, let us look into the course benefit. After completing this course, you will understand the following: what the modules and APIs of PyTorch are, how to implement and build models using them, and the deep learning framework architecture.
You will also see how we can use PyTorch to build these deep learning models and tune the hyperparameters. 50. PyTorch course coverage methology: Let's see the course methodology. We will start by giving an overview of the PyTorch modules and APIs; we will see what the important modules are and how to call and implement them. We will also cover the algorithms, which include artificial neural networks (ANN), convolutional neural networks (CNN), and recurrent neural networks (RNN). We will implement these algorithms with examples; for example, we will create an image classifier using the PyTorch framework. After completing all this, we will have quizzes to check how well you have learned these topics, followed by exercises. The exercise will be an end-to-end exercise where you will be using PyTorch to implement a deep learning model; the solution for the exercise is also provided as part of the attachment. Finally, after you have successfully completed the exercise, you can go ahead and review the solution. 51. PyTorch installation procedure: Let's look into the installation procedure for PyTorch. First, check the workstation infrastructure: whether there is a GPU, the RAM capacity, etc. Also check what operating system configuration you have installed on your laptop or workstation. Now, visit this particular URL. This is the landing page of PyTorch, where you can check the different installation options that are available. Identify the installation configuration that is required; for example, let's say you have Windows 10 and a laptop which supports a GPU. If that is the case, go and identify the configuration that is required. After doing that, you can download the installation file and install it. This is the screenshot of that URL: when you visit it, you will have this selector where you can choose the version you want to install, for example, the PyTorch build, where you choose a stable version or the newer version; the OS, where you can select Mac, Windows, or Linux based on the OS you have installed in your workstation; the installation package, where you can mainly go for conda or pip; and the language, where you can select Python. If you have a GPU as part of your laptop, then you can go for the CUDA version; else, you can go for None. After that, you can download the file and run it to install the PyTorch package on your laptop or workstation. 52. PyTorch modules and concepts: In this section, we are going to see what the important concepts and modules in PyTorch are. The first thing is the PyTorch base, the torch module. As part of this module, you have all the building blocks for building your deep learning frameworks, such as tensors and variables, how to modify the tensors, how to perform computations, etc. This forms the building block of the PyTorch deep learning framework; these components are combined together to form the layers in the different modules, so this is the basic foundation of PyTorch. The next thing is the core concept, which is autograd. As part of a deep learning framework, backpropagation is where the model actually learns from the data and trains itself; you need a differentiation mechanism to identify the gradients and propagate them through the layers. This automatic differentiation feature is covered as part of this particular module: the autograd package provides automatic differentiation for all operations on tensors.
It is a define-by-run framework, which is unique as part of deep learning frameworks. The next important concept is the PyTorch neural network module, torch.nn. In this module's API, you will see various components like layers, which can be combined together to form the deep learning model. For example, if you want to build a CNN layer, you can find all the building blocks of this layer in this particular module. This is an aggregation facility where you can build different layers and group them together to form a deep learning model. torch.nn is the module used for building the layers, and it depends on autograd to define and differentiate the models; all these things are interlinked. Finally, we have something called torchvision. You have likely heard of concepts like transfer learning and pre-defined models. Torchvision is a package, a module, or a repository where you will have pre-defined deep learning architectures and pre-trained models like MobileNet, VGG, etc. You can use this module to call these particular models, which are already trained, and use them for transfer learning. The torchvision package consists of popular datasets, model architectures, and also pre-trained models, which can be used for transfer learning; it also has some transformation features for computer vision. So these are the important modules, and through these modules we have seen some of the important concepts covered as part of the PyTorch deep learning framework. 53. PyTorch Torch API code walk through: Let's see how to construct a simple array-like structure using torch. First, import torch. After importing torch, you can see that there are a lot of commands which can be used to construct arrays or tensors. torch.rand can be used to construct random numbers with a size of three by three, which looks like NumPy; actually, if you check, it will be a torch tensor. You can convert this into a NumPy array by using the .numpy() command, converting x, which is actually a torch tensor, into a NumPy array. Similarly, you can convert a NumPy array into a torch tensor. These kinds of operations can be performed using PyTorch. After constructing the torch tensors, you can multiply them with a NumPy array; the end result will be a torch tensor. This x which we have generated is a tensor and the y we are generating is NumPy; when you multiply both of them, you get a tensor output. Similarly, you can perform various other operations, like identifying the argmax in the torch tensors, and also operations like matrix multiplication, etc. We know that x is a tensor and y is a NumPy array; when you perform a matrix multiplication between them, the outcome will be a tensor, so the NumPy array is cast and everything is converted into a torch tensor. Torch also has operations very similar to NumPy, where you can create random integers, create tensors of ones or tensors of zeros, and reshape them like you would in the NumPy package. You can also set up the context, whether it needs to be trained and run on a device with a GPU or CPU. Here you can see the simple code; run it on a CPU or a GPU.
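A minimal sketch of these tensor operations (purely illustrative values) looks like this:

    import numpy as np
    import torch

    x = torch.rand(3, 3)                    # a 3x3 tensor of random values
    a = x.numpy()                           # convert a torch tensor to a NumPy array
    b = torch.from_numpy(a)                 # and a NumPy array back to a tensor

    y = np.random.rand(3, 3)
    z = x * torch.from_numpy(y).float()     # mixing with NumPy data still yields a tensor
    idx = torch.argmax(z)                   # index of the maximum value
    m = torch.matmul(x, x)                  # matrix multiplication

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    x = x.to(device)                        # run on a GPU if available, else on the CPU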
This combines the data sources with the transformation function and creates a data pipeline for model training. It makes it possible to handle large amount of data, read from different data formats and perform complex transformation operations. Now, this can be used to create data batches, which can be used for model training activities. Part of the step involves having a data source. From that, you need to prepare a data pipeline. This can be done by using Python's data loader, which is part of the utilities. After that, you can apply the data transformation functions and use it for model training purposes. 55. PyTorch data transformation: Let's see how we can perform data transformation and augmentation using touch vision. As part of touch vision. There are various utilities which you can use to perform data augmentation and transformation of your datasets, such as image-based data. Now, the image-based data can be transformed using the grayscale. In addition, it can resize the image or change the image orientations. So these are part of data augmentation method. And this can be achieved and build a pipeline using transform operation, which is part of titration. Also, in touch vision, you have a composed function which can be joined together to perform various transformation operations. After performing these operations, we can use it for model training. Since we have enriched the dataset, we can use the dataset to train the model. After that, we can evaluate and predict the model. So this is a typical pipeline involving data transformation, and this can be achieved using touch vision. 56. PyTorch data pipeline and tranformation code walk through: Let's explore how to build a data pipeline using Python. The first step is to import the necessary packages. First, input thought and taught vision. From taut vision, you have a dataset which can be access. If you want to combine various transformation operations, you can import that transforms. For example, you would want to convert the image into a tensor and then normalize it. For that, we can perform transform.py compose and provide all the necessary transformation functions as part of it. This can be created as one separate function which can be added to the data pipeline. The next step is to import the dataset. So using the torch vision dot datasets, you can import the necessary dataset based on which you are going to build a model. After that, you can build the data pipeline using data loader, which is available as part of the utilities. Now, this will create a data pipeline where you can provide the batch size. You can also shuffle it. Similarly, you can create the test dataset. So this is how you need to create a data pipeline, incorporate transformation operations as part of it. 57. PyTorch torchnn for configuring the deep learning models: Let's see what is taught. Dot nn module as part of Python. Touch dot NN is an important module as part of Python, which can be used to configure the modal architecture. This is a high-level API which can be used for constructing your model. It consists of various layers, loss functions and activation functions, which are the building blocks of your model. Using this particular module, you can construct deep learning models, such as fully-connected CNN, sequence based model and hybrid models. Whichever models you want to build, you can build using Touch dot nn module in Python. Now, let's look into the model building steps. 
57. PyTorch torchnn for configuring the deep learning models: Let's see what the torch.nn module is as part of PyTorch. torch.nn is an important module in PyTorch which can be used to configure the model architecture. It is a high-level API which can be used for constructing your model. It consists of various layers, loss functions, and activation functions, which are the building blocks of your model. Using this module, you can construct deep learning models such as fully connected networks, CNNs, sequence-based models, and hybrid models; whichever model you want to build, you can build it using the torch.nn module in PyTorch. Now, let's look into the model building steps. The initial thing is to configure the data pipeline from which the input data goes into the model. The next step is to configure the model; this is where torch.nn comes into play. Using torch.nn, you can configure the layers, activation functions, and loss functions that are required for the model architecture. The next step is to train the model: once you have configured the architecture, you train it using the autograd functionality. After that, you evaluate and predict. So this is the typical process involved in constructing and training a deep learning model, and torch.nn is used for configuring the model architecture. 58. PyTorch FCN code walk through: Let's see how we can build a fully connected network using PyTorch. First, import all the necessary packages and then upload the data. The dataset is part of the attachment; you can import it from Google Drive into Colab, where you need to mount the drive and then import it. After that, you can read it and store it as a dataframe. This consists of reservoir data, and it is also time series based. You can also use any other dataset where you are going to predict a quantity; this typically behaves like a regression problem, where you might build a model using XGBoost or something similar for the same kind of problem statement. Here, you are going to predict a quantity, and there is a set of features based on which you are going to predict it; you can obtain the dataset and try it yourself. After that, do all the basic feature extraction and transformation steps, and then let's focus on the model building. Next, you need to create the batches. You can use TensorDataset and transform the data into tensors, because PyTorch basically understands only tensors; therefore, you need to convert the data into a tensor-based dataset. After that, you can provide it to a DataLoader. What it does is split the data into various batches based on the batch size you provide, and the data will be sent batch by batch to the model training process. This is very useful for training the model: instead of dumping all the data into the training process at once, we split the data into different batches and then send them to the model as part of model training. Now, this is the model architecture, and it is a simple one. You need to open up a Sequential-based model, and inside that you can provide the structure. This is a fully connected network, so you need a dense layer; in PyTorch, you have nn.Linear, which acts as a dense layer and is used to construct a dense or fully connected network. Here we are taking three features: in_features is set to three because we have three features, and you can change it based on the dataset you have. out_features is basically the number of nodes of the hidden layer; it is set to 16. Similarly, if you are constructing more layers, you need to provide in_features and out_features for each: this layer goes out with 16, which the next linear layer receives as its input, and that layer goes out with eight. Similarly, if you have one more linear layer, you provide eight as in_features and one as out_features. In this way, you construct the network. You need to provide an activation function between the linear or fully connected layers; without that, it would just be a stack of linear transformations. Therefore, you need to provide an activation function. This is your network.
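A minimal sketch of the network just described, assuming three input features and a single regression output:

    import torch.nn as nn

    # 3 input features -> 16 -> 8 -> 1 output, with ReLU between the linear layers
    model = nn.Sequential(
        nn.Linear(in_features=3, out_features=16),
        nn.ReLU(),
        nn.Linear(in_features=16, out_features=8),
        nn.ReLU(),
        nn.Linear(in_features=8, out_features=1),
    )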
The other important component is the loss function. Since this is a quantity prediction, you need to use mean squared error or mean absolute error, which is available as part of the nn module. Then comes the optimizer, such as Adam; you can choose whichever optimizer you want and add it as the optimizer. So this is your setup. Next is the training process. Think of how any deep learning model gets trained: there is forward propagation, the loss is computed, and backpropagation happens. In a similar way, you need to provide the logic. You start with the epochs, and then the data is passed as x and y, x being the features and y being the label. Based on x, the features, your model produces a predicted value, which is compared with y, the actual value; that is what happens in this step. After that, the loss is computed and has to be backpropagated. During backpropagation, optimization happens, so we provide the optimizer step, and then the loss is computed. In that way, you set up the training logic. This is a for loop; if you remember, it runs based on the number of epochs we provide. After that, the model gets trained and the loss is computed. This is a time series based one, but you can take any typical regression-based dataset and use it for training the model. The loss will gradually reduce, and then you can use the model for validation and check the accuracy. If you want to change something in the model network, you can go ahead and add or remove layers. So this is the way to construct a fully connected network. 59. PyTorch steps for CNN model: Let's see the steps involved in constructing a convolutional neural network using PyTorch. PyTorch is an extensive framework, and using this framework, you can build all the components that are necessary for building and training a deep learning model. torch.nn is one of the modules you can use for configuring the layers for the models, and also for configuring the activation functions, loss functions, etc. Similarly, various other components of PyTorch can be configured, such as the DataLoader for building a data pipeline, the transformation functions as part of torchvision, and then defining the model using torch.nn. As part of a CNN, we need layers like convolution layers, pooling layers, and also dropout layers, which are all available as part of the torch.nn module. Finally, using these layers, you can configure the CNN network. Once that is done, the next step is to train the model, and then you use it for prediction and evaluation. So these are the steps involved in building a CNN model using the PyTorch framework. Now, let's see the code walkthrough for this. 60. PyTorch CNN code walk through: Let's see how we can build a CNN model using PyTorch. The important packages you want to import include torch, torchvision for the dataset, and torchvision.transforms for performing transformations on the dataset. Here we are going to use a pre-built dataset, which is CIFAR-10. It has ten different classes of images, and therefore we are going to predict ten different classes. We need to provide the function for transformation; as part of the transformation, we are going to normalize the data. First of all, you need to transform it into a tensor so that PyTorch can work with it and perform the normalization.
The next step is to import the dataset, which is the CIFAR dataset, using torchvision.datasets.CIFAR10. After that, you can apply the transformation function which you have built here, perform all the transformations, and store the results as a train loader and a test loader. The class information can be obtained from CIFAR-10; you can search for its website and get all ten classes there. These ten classes are what your model is going to predict. After you run this, it will connect to that website and download the dataset. The next thing is to visualize what we are downloading. After that, you can pick one of the images and try to visualize it. You can see that they are different images: you can see a ship, which is one of the classes here, and you can also see animals, like this one here, a deer, as well as other animals. The next step is to build the network. Here you can have a class construction that inherits from the nn.Module. After that, you need the convolution layers as part of the CNN, so you define these different constituents: for conv1 and conv2, you have the two convolution layers, the max pooling as part of the network, and the fully connected network. The different configurations can be provided here; you can also change these configurations, but they have to be aligned with the incoming image size. The next step is the forward logic. Here you set the forward logic, and it should be integrated: you have the convolution output, which is sent to relu, and from there it goes to the pooling layer. These kinds of constructions have to be made, and then you have the network. The next step is to create the loss function. Here we have a multi-class problem, so we will have cross-entropy loss, with the optimizer as stochastic gradient descent; you can also choose Adam or RMSprop, whichever optimizer you want to experiment with. The next step is the training logic, which basically follows the backpropagation principle: you construct the for loop based on the number of epochs, the inputs are sent in, which are the features and the labels, the loss is computed and backpropagated, and the optimizer step is applied; these steps continue. You can add a set of statements to print the loss in each epoch, and you can see that in each epoch the loss gradually gets reduced. After that, you can use the model for predictions and check against the ground truth whether the predictions actually match the actual values. So this is the way for you to construct a CNN network using PyTorch.
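Here is a minimal sketch of such a network with its loss and optimizer, following a classic CIFAR-10 layout; the exact channel counts, layer sizes, and learning rate are illustrative assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(3, 6, 5)        # 3 input channels, 6 output, 5x5 kernel
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.fc1 = nn.Linear(16 * 5 * 5, 120)  # sized for 32x32 CIFAR-10 images
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)           # 10 classes

        def forward(self, x):
            x = self.pool(F.relu(self.conv1(x)))   # convolution -> relu -> pooling
            x = self.pool(F.relu(self.conv2(x)))
            x = x.view(-1, 16 * 5 * 5)             # flatten before the dense layers
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            return self.fc3(x)

    net = Net()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)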
61. PyTorch RNN model construction walk through: Let us see the steps involved in building a recurrent neural network using PyTorch. Similar to the CNN, PyTorch provides all the necessary layers that can be used for configuring a recurrent neural network; all the necessary components of an RNN can be built using PyTorch modules. The torch.nn module provides options for configuring various RNN networks, with layers like LSTM, GRU, etc. If you look at the components, the DataLoader is again part of PyTorch. The next one is the transformation: here we transform the data that is required for the sequence-based models. The next step is to configure the model architecture, which can be done using the torch.nn module; as part of the module, you can configure LSTM or GRU units, embedding layers, and dropout layers. After this is done, the next thing is to train the model; the training can be done using the autograd module. Finally, you use the model for prediction and evaluation. So there you have it: these are the steps involved in constructing a sequence model at a high level. Now, let us see the code walkthrough for this. 62. PyTorch RNN code walk through: Let's see how to build an RNN network using PyTorch. This is the link to the dataset, or you can use any time series based dataset. The libraries are torch, the base module for PyTorch; torch.nn, which has the various network layers; and torch.autograd, which is used for training the model and providing the training logic. Other packages are needed if you want to perform transformations and scaling. After ingesting this dataset, you can visualize it and see an increasing trend for this particular dataset; this is a time series. Now we need to modify the data in such a way that it gets a window. An RNN predicts in such a way that it trains the model on subsequent intervals and predicts the next timestamp, so we need to create that sliding window. For example, 0 to 10 will be one window, and based on that window, the 11th day's value will be predicted; that sliding window keeps moving across the dataset. The logic for this is provided here. After we have created the sliding windows, you can scale the data; scaling the data normally works well for RNN-based models. After scaling the data and having the train and test data ready, the next step is to build the network. You can provide this as a class structure that inherits from nn.Module. After that, you can use the superclass constructor and then refer to the parameters, for example, the number of classes, the number of layers, etc. This is an LSTM network, long short-term memory; therefore, you need to provide nn.LSTM with the input size, the hidden layer size, the number of layers you want as part of the LSTM, and also batch_first. These are some of the constituents you need to provide. Similarly, after the LSTM, you have to connect it to the dense or fully connected network; as part of PyTorch, we have nn.Linear, which takes the hidden size and the number of classes. Then we construct the forward logic: it has the LSTM followed by the fully connected network. After this forward logic is provided, the next phase is training. You have the network; now you have to provide the logic for the training. You can set the number of epochs, the learning rate, and various other parameters, and then you need the loss function for computing the loss. Since this is a quantity-based prediction, we have to use a loss like mean squared error or mean absolute error; in this case, we are using the mean squared error loss. After that, the next important constituent is the optimizer. Here we are choosing Adam, and the learning rate is also provided; Adam starts with this learning rate and keeps on refining it. It is an adaptive learning-rate optimizer, so you do not have to worry much about the optimal choice for the learning rate because it adapts by itself. You can also use RMSprop or stochastic gradient descent; there are various options you can choose for optimizing. Then you build the training logic. Here you create a for loop over the epochs and take the output that comes out of the LSTM we have defined. After that, you need to zero out the optimizer's gradients to clear any stale values that are there.
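A minimal sketch of this LSTM network and its loss and optimizer setup is shown below; the input size, hidden size, and learning rate are assumptions for illustration:

    import torch
    import torch.nn as nn

    class LSTMRegressor(nn.Module):
        def __init__(self, input_size=1, hidden_size=32, num_layers=1, num_classes=1):
            super(LSTMRegressor, self).__init__()
            self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
                                num_layers=num_layers, batch_first=True)
            self.fc = nn.Linear(hidden_size, num_classes)   # dense layer after the LSTM

        def forward(self, x):                 # x shape: (batch, sequence, features)
            out, _ = self.lstm(x)
            return self.fc(out[:, -1, :])     # predict from the last time step

    model = LSTMRegressor()
    criterion = nn.MSELoss()                  # quantity prediction -> mean squared error
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)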
Inside that training loop, you take the output coming out of the LSTM we defined. You then clear the optimizer gradients to remove any stale values, compute the loss, and backpropagate it. This is the logic behind training the model. If you run it, you get the loss per epoch, and you can then evaluate the model to check how it performed, plot the predicted against the actual values, and do a comparative analysis of whether your model performed well. These are the steps for building an LSTM-based RNN network using PyTorch.

63. PyTorch transfer learning using TorchVision: Transfer learning is an important part of model-building activities. We can leverage a pre-trained model for building image classifiers or sequence-based models. In PyTorch, transfer learning is made possible using the torchvision package, which contains various pre-trained models; some of the famous ones are VGG and MobileNet. These pre-trained models can be loaded and used as part of transfer learning: you combine them with an additional layer that is trained on top of your custom dataset. torchvision is a repository of extensive pre-trained models, the most popular architectures are included, and you can download them whenever you need transfer learning. The pre-trained model is one important component; you combine it with an additional layer. For example, you reuse the weights and configuration of the pre-trained model and add a layer on top of it for your custom dataset. You keep the parameters frozen for the pre-trained part, keep them trainable for the additional layers, and leverage the combination of both components for successful transfer learning.

64. ADVANCED: PyTorch transfer learning code advance walk through: Let's see the steps involved in transfer learning with PyTorch. First, you need to import the necessary packages such as torch, torch.nn, optim, and so on. torchvision is where you find the datasets, the models, and the transform operations. Next, download the data; for this exercise a link is provided that you can use to download the data and place it in the current folder, and then reference it using the data directory. Here we provide a transformation function for the train and validation sets, placed inside transforms.Compose, which performs operations such as resizing, horizontal flipping, conversion to tensors, and normalization; these are the steps we include as part of data transformation. The next step is to create the datasets and build a data pipeline for the image dataset using DataLoader. You can also set the device: if CUDA is available it will run on the GPU, otherwise on the CPU. Now we have the data, the transformation functions, and the DataLoader. The next step is to train the model. For that you create the training logic, which contains the dataset information, how the model is trained, how it backpropagates, and so on. A sketch of this setup is shown below.
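Here is a minimal sketch of the pipeline and frozen pre-trained model described above, assuming an ImageFolder-style directory layout (a hypothetical `data/train` folder); the directory name, ResNet-18 choice, image size, and hyperparameters are illustrative assumptions rather than the lesson's exact settings.

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, models, transforms

# Transformation pipeline: resize, flip, convert to tensor, normalize.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed folder layout: data/train/<class>/<image>.jpg
train_ds = datasets.ImageFolder('data/train', transform=train_tf)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Pre-trained backbone with frozen weights plus a trainable classification head.
model = models.resnet18(pretrained=True)   # newer torchvision versions use weights=...
for p in model.parameters():
    p.requires_grad = False                # freeze the pre-trained parameters
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))  # new trainable layer
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)

for epoch in range(3):
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
```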
Now you import the dataset as training and validation sets and send them into a loop that iterates across the inputs and labels from the training dataset and trains the model. You move the data onto the device you already set: if CUDA is available, training happens on the GPU; if not, on the CPU. We call optimizer.zero_grad() to clear any gradient that is already there, and then run the model. Once the forward pass finishes, the loss is computed from the comparison between the actual and the predicted values, the gradient is generated, and it is backpropagated using loss.backward(); after that all the weights are updated. This runs over the whole dataset you provide, and it is wrapped in a function you define as part of training.

After defining the training function, the next step is to call the pre-trained model for transfer learning. We use a ResNet model, ResNet-18, and set pretrained=True, which says that this is a pre-trained model and that we are going to use all of its weight configuration. In the next step, you append a dense layer to this pre-trained model based on the number of classes you have. Before passing everything into the training function, you need to provide the optimizer; here we use stochastic gradient descent with a learning rate of 0.001, but you could also use optim.Adam. After providing all these components, we send them into the train_model function we already defined, which takes the model, criterion, optimizer, scheduler, and number of epochs, and then we can see the results coming out of the model. All in all, these are the steps required to build transfer learning using PyTorch.

65. PyTorch Quiz: To check your understanding of this PyTorch module, let's have a quiz. Which module is used for constructing the model architecture: torchvision, DataLoader, or torch.cnn? The answer is torch.nn; in this module you have various options to construct layers, loss functions, and activation functions, which form the model architecture. Moving on to the second question: name the function we use to combine various transformation operations on the dataset. The answer is Compose; it is part of transforms. Moving on to the next question: which module contains the pre-trained models that can be used for transfer learning? The answer is torchvision. Moving on to the next question: what is the purpose of the torch.optim module? Think about the model training process and how the parameters are updated; torch.optim is a package implementing various optimization algorithms. The final question: what is the use of PyTorch autograd? One clue is that torch.optim plays a major role alongside it. It is used for automatic differentiation, which drives model training and is the core of the training process.

66. PyTorch exercise: Well done on completing the theory and the code walkthroughs in this crash course on PyTorch. Now, let's see the exercise requirements.
You have to construct a deep-learning-based image classifier using PyTorch. First, you need to construct the data pipeline: use the Fashion-MNIST dataset, download it from torchvision, and create the pipeline. Next, apply image transformations as part of the pipeline. Then build the CNN layers; you can configure shallow or deep layers, but remember that the accuracy of the model should be good. Next, train it using the autograd module, evaluate the model, and use it to predict on the test data. These are the steps to follow as part of this exercise. All the best.

67. PyTorch exercise solution walk through: Let's see the solution for the exercise based on the PyTorch framework. First, import all the necessary packages and create the data pipelines. As part of the pipeline, you provide the transform function and create the DataLoader for the training set as well as the test set. Next, you can explore the data and see which images are there. After that, you construct the model architecture: convolution layers and max-pooling layers form a single convolution block, you can add more convolution blocks, and then add dropout and dense layers, with a softmax output at the end. The next thing is to provide the hyperparameters and configure the optimizer; here I am using stochastic gradient descent, but you can also try Adam, Adamax, and so on. Then create lists to capture the losses that come out for the train and test sets. After that, configure the train function, which provides the main logic, namely backpropagation: the model runs forward, the gradient is computed and backpropagated, and all the parameters are updated. That forms the train function. Then construct one more function for evaluation, which compares the predictions with the ground truth and gives you the loss metrics, the average loss, the accuracy, and so on. Finally, run the model and check how the accuracy and the loss come out.

68. PyTorch exercise 2 overview: In this exercise, you are going to build the entire model training and backpropagation logic using PyTorch's core building blocks, namely tensors. The task includes the following: provide the number of nodes for the input dimension, hidden dimension, and output dimension; provide the number of batches; using this information, create the input and output data from a random distribution; initialize the weights; provide the learning rate; and lastly, construct the forward and backward pass and update the weights based on the gradient-descent principle.

69. PyTorch exercise 2 solution walk through: Let's see the solution for the exercise on constructing the forward pass and the backward pass from scratch. First, import torch and set the device; this is an optional step. After that, assign the number of batches, the input dimension, the hidden dimension, and the output dimension; the values are arbitrary, so you can provide any values here. Based on this, treating it as one hidden layer, you create random input and output tensors, with x being the features and y being the labels. You then initialize the weights using the input, hidden, and output dimensions, and set the learning rate. These are the initialization steps. The next step is the forward pass, which is nothing but sending the input values through the layers in the forward direction. A minimal sketch of this from-scratch loop is shown below.
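Here is a minimal from-scratch sketch of the setup just described, written with plain tensors; the dimensions and learning rate are arbitrary example values, and ReLU is used as the activation.

```python
import torch

# Arbitrary sizes: batch, input dimension, hidden dimension, output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Random input features (x) and labels (y).
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Randomly initialized weights for the two layers.
w1 = torch.randn(D_in, H)
w2 = torch.randn(H, D_out)

learning_rate = 1e-6
for t in range(100):
    # Forward pass: linear -> ReLU -> linear.
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    loss = (y_pred - y).pow(2).sum()  # squared-error loss

    # Backward pass: gradients computed manually, layer by layer.
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0                 # gradient of ReLU
    grad_w1 = x.t().mm(grad_h)

    # Gradient-descent update of the weights.
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
```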
In the forward pass, provide the logic for it along with an activation function such as ReLU. Based on that, you get the prediction, store it in a variable, and compute the loss. This loss has to be backpropagated based on the gradient-descent principle. For that, compute the gradient, which starts from the difference between the actual and the predicted values; from there it is distributed across the nodes we have and backpropagated. After the backpropagation, update the weights based on the computed gradient. Now, if you run this code, you can see that the model gets trained.

70. PyTorch course summary: Well done on completing the PyTorch crash course. Now, let's quickly summarize what we have seen so far. As part of this course, we saw an introduction to the PyTorch package: an overview of the package, its modules, and the various APIs that can be used. Next, we moved into data manipulation, where we learned how to create arrays and tensors with PyTorch and perform basic operations. Next, we moved into the deep learning model-building steps. You also explored the torch.nn module, which is important for setting up layers, activation functions, and loss functions, the core components of a deep learning model. We explored how to construct various deep learning architectures, such as CNNs and sequence models, and we saw how to set up transfer learning using torchvision. Lastly, we took a short quiz and did some exercises based on the PyTorch package, as well as exercises on how to build a CNN model.

71. OpenCV introduction and course benefits 1: Welcome to the crash course on OpenCV. Let's look at the course objective. OpenCV is a computer vision package. We use OpenCV to perform various operations over images, such as transforming the image, feature detection, and so on. It not only works on images; it can also be used to work on videos. As part of the scope for this course, we will look at the modifications we can perform on an image; understand the various modules, operations, and functions of the OpenCV package; understand the steps involved in performing image processing and feature extraction using OpenCV; and get a general overview of the utilities available in the package. After completing this course, you will understand the various functionalities of OpenCV, know the important modules and APIs for image processing and feature extraction, and have implementation knowledge of OpenCV on image data.

72. OpenCV course coverage methodology 1: Let's look at the course coverage methodology. In this course, we will start with a general overview of the package. As part of OpenCV, we will check the important operations, such as the basic operation of converting an image to a NumPy array, followed by the core operations. We will also check which algorithms are covered; algorithms here, such as feature detection algorithms, can be implemented using OpenCV. We will then see implementations with examples, covering both image transformation and feature extraction. After that, we will have a short quiz to test your understanding of what you learned in this course, followed by an exercise where you have to implement an OpenCV transformation. In addition, you will have access to the solution for the exercise, which you can refer to after completing it. All the best.
73. OpenCV accessing image properties 1: Now let's check the image properties using OpenCV. We convert an image into a NumPy array, and after that we can check properties such as shape, size, and dtype. This is a basic operation: we use the imread function to convert any image into a NumPy array. Once that is done, you can check the shape of the array, its size, and its dtype. By checking the shape, you can understand the channel properties, that is, whether it is a black-and-white or a colored image, just from the NumPy array shape. Similarly, you can identify the size of the image. These are basic operations, and after identifying these properties you have the option to resize the image or blur it simply by working on the NumPy array values.

74. OpenCV reading image and coverting back: Let's see how we can read an image using OpenCV. With OpenCV you can read any type of image, such as JPEG or PNG. When reading an image, the OpenCV imread function converts it into a NumPy array of pixel intensity values. The array has dimensions such as the length and breadth of the image and also the channel information, that is, whether it is a black-and-white or a colored image. If you want to see the image again, you can use matplotlib to recreate the image from the NumPy array. This is the general workflow of reading an image and converting it into a NumPy array.

75. OpenCV basic operations code walk through: Let us look into some basic OpenCV operations. First, import the package; once you have installed it with pip (pip install opencv-python), you can import it as cv2. Since cv2 converts an image into a NumPy array, you also need to import NumPy, and because we bring the NumPy array back into an image, we need matplotlib. These are the three important packages when working with cv2. Once this is done, we read an image using cv2. cv2 has a lot of commands; if you press dot and Tab, you can see the many functions that are part of it, which is why it is one of the most comprehensive computer vision packages currently available. You can perform various image transformation and feature detection tasks using OpenCV. Now we read an image by providing a path to the imread command. After it reads the image, it converts it into a NumPy array; you can store it as a variable and check its dimensions. It shows three dimensions: the one that shows 3 is the channel, so a colored image has 3 and a gray image has 1, and the other two are the x and y dimensions of the image. Now that you have converted it into a NumPy array and checked its shape, the next step is to check the dtype. If you check the dtype, it will show an integer type; this is because the image stored as a NumPy array contains various numbers, which are nothing but the pixel intensities of that image, and the interpretation also depends on the channel.
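A minimal sketch of the read-and-inspect steps described above; the file name is a placeholder.

```python
import cv2

# imread converts the image file into a NumPy array (BGR channel order).
img = cv2.imread('sample.jpg')   # placeholder path

print(img.shape)   # (height, width, 3) for a color image, (height, width) for grayscale
print(img.size)    # total number of array values
print(img.dtype)   # typically uint8 pixel intensities

# Because it is just a NumPy array, you can already manipulate it directly,
# for example resize it with OpenCV or crop it with slicing.
small = cv2.resize(img, (100, 100))
crop = img[0:50, 0:50]
```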
So each of the channels is the intensity of a particular color: the red channel holds the intensity of red, the green channel the intensity of green, and the blue channel the intensity of blue. That is how the array is structured. Now that we have explored the NumPy array, let's convert it back into a picture. For that, we use plt.imshow from matplotlib: if you provide the array, you can see the image, and you can also increase the figure size. The problem here is that cv2 stores channels in BGR order, while matplotlib expects RGB, the inverse of that, so we need to convert the array before displaying it. There is a command in cv2 for this: inside plt.imshow, wrap the array with cv2.cvtColor, passing the image variable and the flag COLOR_BGR2RGB, which performs the conversion from BGR to RGB. If you display that, you can see the correct image colors. So this is the basic operation: you can read any image, PNG, JPEG, or another format, convert it into a NumPy array, explore the array, perform whatever operations you want on it, and then display it again as an image using matplotlib, converting with BGR2RGB so the colors come out right. A small sketch of this round trip is shown below.

76. OpenCV image processing 1: OpenCV is an important package for performing image processing. It has many operations, such as color-space transformation, orientation and geometric transformations, smoothing, and more. There are a lot of operations you can perform on an image to transform it into a different image, which is what makes OpenCV useful for data augmentation as part of deep learning model training. Typically, you feed a raw image into an OpenCV transformation function and then change the color of the image, apply a geometric transformation, or change the image gradient; these are some of the transformations, and there are many more in OpenCV. It is a very good package for performing data augmentation for deep learning model training.

77. OpenCV image transformation code walk through: Let's see some of the image transformation operations. But first, why do we need image transformation? As part of deep learning model training, we may need to augment the dataset we have. We might have collected only 100 images; by making some orientation, color, and gradient changes, we can augment that to, say, 500 images and use them for training our deep learning models. By doing this, your model gets to see different orientations and variations of the image, which helps it train better than on the base data alone. So let's see some of the image transformation operations. As we saw in the basic steps, we need to read an image and convert it into a NumPy array, and for that we are going to use cv2, the main package.
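Here is a minimal sketch of the BGR-to-RGB round trip described in the basic operations above; the file name is a placeholder.

```python
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('sample.jpg')          # NumPy array in BGR channel order

# matplotlib expects RGB, so convert before displaying.
plt.figure(figsize=(6, 6))
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
```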
We have imread, which is used to read an image and convert it into a NumPy array; matplotlib, on the other hand, is used to display the original or modified image reconstructed from the NumPy array. The first task is to change the image gradient. You can bring an image into the notebook through a path you provide here, or store the image in the base folder and call it from there, and then use imread to read it and convert it into a NumPy array. After that, you can change the threshold of the image with the cv2.threshold command: you pass the image, provide the threshold limit, and provide the operation to apply, such as binary, inverse binary, truncated, and so on. There are various threshold operations; when you type cv2. and press Tab, you can see threshold and the other operations you can perform, since in cv2 the operation names are all exposed this way, so you can remember the names and call them whenever they are required. For thresholding, the flag names are spelled in capital letters. Here we apply some of the threshold-changing operations, store the result, and display it with matplotlib; while doing that we again convert from BGR to RGB, because cv2 works in BGR and we need RGB to display the colors correctly. Now, let's run this command. You can see the original image in grayscale, the binary conversion, its inverse, and various other conversions. By doing this, you are producing different color and gradient versions of the image, which you can collect and then use for deep learning model training.

You can also change the orientation of the image. In the second example, we change the orientation, tilting the image in a particular direction or focusing and zooming on part of it. For this there is an operation called getAffineTransform, where you provide the points that define the transformation. When you run these commands, you can see the original image tilted by a certain degree and then displayed. Why is this important? Because a deep learning model needs to identify an object irrespective of the position in which it is placed; that is what makes a robust model. To build a robust model, you need to feed in data with varying rotations and image colors so that your model is trained on those variations. This is the geometric transformation operation; you can perform similar operations to focus on a particular area or zoom in and out. A short sketch of thresholding and an affine warp is given below.

In addition to the geometric and orientation changes, you can also apply filters. cv2 comes with rich filters for changing the image, such as Laplacian and Sobel, which you cannot find in the deep learning frameworks themselves, where only a limited set is available; in cv2 you can find an extensive list of filters.
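Here is a minimal sketch of the thresholding and affine-transform operations described above, assuming a placeholder image; the threshold value and warp points are arbitrary example choices.

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('sample.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Thresholding: same input, different operations.
_, th_binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
_, th_inverse = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
_, th_trunc = cv2.threshold(gray, 127, 255, cv2.THRESH_TRUNC)

# Affine transform: map three source points to three destination points (a tilt).
rows, cols = gray.shape
pts_src = np.float32([[50, 50], [200, 50], [50, 200]])
pts_dst = np.float32([[10, 100], [200, 50], [100, 250]])
M = cv2.getAffineTransform(pts_src, pts_dst)
tilted = cv2.warpAffine(img, M, (cols, rows))

for i, (title, out) in enumerate([('binary', th_binary), ('inverse', th_inverse),
                                  ('truncated', th_trunc)]):
    plt.subplot(1, 4, i + 1); plt.imshow(out, cmap='gray'); plt.title(title)
plt.subplot(1, 4, 4); plt.imshow(cv2.cvtColor(tilted, cv2.COLOR_BGR2RGB)); plt.title('tilted')
plt.show()
```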
You can use these filters to change the image gradients. Here you can see the original image, the image after applying the Laplacian filter, and the Sobel X and Sobel Y outputs. Images produced by these filters are widely used in deep learning model training, especially for images from the medical domain such as X-rays: when you apply these filters, the edges and important regions of the X-ray are highlighted, your model focuses on those areas, and the trained model can then be used for medical applications. These are some of the wide uses of OpenCV, and data augmentation can be done with it.

78. ADVANCED: OpenCV feature detection: Let's look into the steps involved in implementing a feature detection algorithm using OpenCV. OpenCV supports various feature detection algorithms, and this is an important application of the library. In OpenCV you can find many different feature detection algorithms suited to various scenarios; they can detect the edges of an object, the contours of a face, and more. Typically, you send a raw image into an OpenCV feature-detection implementation and get back an image in which the detected features are highlighted. That is the simple overview of this segment; now let us look into the code walkthrough.

79. ADVANCED: OpenCV feature detection code advance walk through: In this session, we are going to look into some of the corner and feature detection algorithms in OpenCV. OpenCV has incorporated a lot of feature detection algorithms, and you can implement them by simply calling an API. First, import all the necessary supporting packages. Next, read the image using imread and convert it into a NumPy array; remember that OpenCV does all its modifications on the NumPy array, so imread converts any image you are processing into an array. After that, convert it into the required color space; here we convert it into grayscale and then into float. Then we can call the Harris corner detection algorithm, cornerHarris, which is available in cv2 as an API; there are also other algorithms you can try and evaluate. You provide certain parameters, such as the source image (the grayscale array), the block size, and so on, which specify the neighborhood over which the algorithm works to identify features. The next step is to dilate the result and adjust the threshold to an optimum value, and then display the image with the detected features. Now, let's run this code and see what happens. Here is the original apple image; on top of it you can see red dots, and these red dots are the features that have been detected. You can also see that the leaf has been detected well, and even the veins of the leaf have been picked up. This is one of the very important applications of feature detection algorithms, and you can use variants of it to improve the detection. A minimal sketch of this Harris-corner workflow is shown below.
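A minimal sketch of the Harris corner detection workflow just described, assuming a placeholder image; the block size, aperture, and threshold fraction are commonly used example values, not necessarily the lesson's exact settings.

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('apple.jpg')                     # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)                           # cornerHarris expects float32

# blockSize=2, ksize=3 (Sobel aperture), k=0.04 (Harris free parameter)
dst = cv2.cornerHarris(gray, 2, 3, 0.04)
dst = cv2.dilate(dst, None)                       # dilate to make the corners visible

# Mark points above a fraction of the maximum response in red.
img[dst > 0.01 * dst.max()] = [0, 0, 255]         # BGR red

plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.title('Harris corners')
plt.show()
```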
After detecting the corners, you can use the result for applications such as real-time object detection. This is one of the variants. You can also test it with different images, say the bus image; you can see that it has detected the leaves, the edges of the bus, and some of the lettering on the bus. Similarly, you can test it on a face image and see what happens: it picks up the eyes, but this algorithm may not be well suited for faces, because it works best on objects with well-defined corners, which a face does not have, so some of the edges are not detected. You can then try different algorithms to check whether they detect the face well enough, such as the next one. Besides this, there are other implementations; all you have to do is import the API, provide the source image and the block over which to search, and run it. As you can see, it is better than the previous one and has detected some of the edges in the face as well. These are the feature detection algorithms you can implement using cv2, and they augment the process of building an object detection model.

80. OpenCV Quiz: Let's check what you have learned about OpenCV. First, what is the core functionality of OpenCV: deep learning model building, computer vision, or visualization? The answer is that it is an extensive package for computer vision. Next, name the capability that can be used for data augmentation as part of OpenCV. The answer is image filtering and transformation; OpenCV has a lot of image filtering options that can be used for data augmentation steps. Next, is OpenCV similar to TensorFlow? The answer is: not exactly. OpenCV is focused on computer vision, whereas TensorFlow focuses on building deep learning models; however, you can leverage some of OpenCV's functionality to build a better model by augmenting the data and then using it for model training in TensorFlow. Next, what functionality of OpenCV can be used for object detection? The answer is that there are various algorithms for feature and corner detection, and these can be used for object detection purposes. Next question: when the OpenCV imread function is applied to an image, what will be the output? The answer is a NumPy array; when we read with imread, the raw image is converted into a NumPy array.

81. OpenCV exercise: Now let's do some exercises as part of this crash course on OpenCV. In this exercise, you need to apply some of the image transformation and feature detection functions. On a given set of images, first apply three image transformation operations based on color or gradient changes. Next, apply two geometric transformation operations on the same set of images. Third, apply the Sobel filter to the images. Finally, apply corner detection algorithms and check whether they detect the corners in the image. These are the steps you need to perform as part of this exercise. All the best.

82. OpenCV exercise solution: As part of the solution for the exercise, you can implement various transformations on the image. First, read the image and check how it is converted into a NumPy array. Then you can explore cv2 and apply various filters; the choice is yours, for example filter2D or GaussianBlur.
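As one way to start the transformation part of the exercise, here is a minimal filtering sketch using filter2D and GaussianBlur; the kernel size and file name are example assumptions.

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('sample.jpg')

# filter2D with a normalized 5x5 averaging kernel (a simple blur).
kernel = np.ones((5, 5), np.float32) / 25
averaged = cv2.filter2D(img, -1, kernel)

# GaussianBlur with a 5x5 kernel.
gaussian = cv2.GaussianBlur(img, (5, 5), 0)

for i, (title, out) in enumerate([('original', img), ('filter2D', averaged),
                                  ('GaussianBlur', gaussian)]):
    plt.subplot(1, 3, i + 1)
    plt.imshow(cv2.cvtColor(out, cv2.COLOR_BGR2RGB))
    plt.title(title)
plt.show()
```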
You can also apply image transformations such as median filtering or bilateral filtering, and you can perform other transformations like erosion and dilation; these are optional steps that are also shared as part of the solution. The other part of the exercise is to detect edges: you can use filters like the Sobel filter to bring out hidden edges in the images, and you can also implement various other feature detection algorithms from OpenCV. This is an open-ended assignment where you can explore cv2 and implement different transformations.

83. OpenCV exercise 2 overview: As part of this exercise, you are going to perform certain operations on images using OpenCV. The task includes importing an image of your choice and applying image transformation functions such as image blurring, median filtering, and bilateral filtering.

84. OpenCV exercise 2 solution walk through: Let's look at the solution for the OpenCV exercise, particularly for image blurring, median filtering, and bilateral filtering. First, import the image using plt.imread; note that the image can be anything of your choice. For the blurring function, cv2 has an option called filter2D, where you just provide the image and also the kernel. The kernel can be generated from NumPy using np.ones((5, 5)) and normalized by dividing it by 25. After that, compare the images: this is the original image, and after applying cv2.filter2D you get a blurred image. Similarly, for median filtering, import the image of your choice; here we do not need any kernel, you can directly use cv2.medianBlur and provide the image, then use matplotlib to plot the image and see the difference. Median filtering is used for de-noising: here a noisy image is de-noised using median filtering. A similar approach is bilateral filtering: cv2 has a bilateralFilter option where you provide the image and the various parameters. After that, you can see that the noise has been reduced using bilateral filtering; as you can see, it is better than median filtering. This is the solution for the three types of image transformation in this exercise.

85. OpenCV course summary: Well done, you have completed the crash course on OpenCV. Now let's summarize what we have learned so far. We saw a general introduction to the OpenCV package and some of its basic operations, such as converting an image to a NumPy array and checking the image properties. After that, we checked some of the core operations, such as image transformation, feature detection, and feature extraction; OpenCV has various implementations of feature detection algorithms, and as a matter of fact we implemented some of them. We then proceeded to the quizzes and exercises to check what you learned in this crash course. You can now use these learnings to implement and experiment with OpenCV operations. All the best.

86. Big secret MXNet Numpy interface: Let's see one of the secrets hidden in MXNet. To use MXNet in Colab, you may need to pip install it; after that, you can import it. The secret is that not many of us know that MXNet has a NumPy interface.
As you see here in the code, you have a NumPy interface for accessing the core functionality of the NumPy package. After importing MXNet, you can type mxnet., search for numpy, and import that interface. This is in addition to NDArray, which is also available as part of MXNet; NDArray is the core array type built for MXNet, whereas the MXNet NumPy interface is where you can import NumPy-style functions and perform NumPy operations. Here I am importing the NumPy interface and generating random numbers, and you can see it has produced a NumPy-style array. If you do the same operation using NDArray, it generates the same values but tells you the result is an NDArray. Using MXNet's NumPy interface you can perform various other operations; if you press dot and Tab, you can see a list of operations that coincides with the actual NumPy package, so you can do things like matrix multiplication, random number generation, and so on. Many practitioners do not know this because it is a recent addition to the MXNet package. So that's it: this is one of the big secrets we have for you.

87. Big secret using TensorFlow graphics: Do you know that TensorFlow can also be used for rendering graphical objects? There is a specific module called TensorFlow Graphics that can be used for visualizing 3D objects. Together with trimesh, you can load and visualize various objects, and these are the different formats that can be used. You can load an object using trimesh.load, providing the vertices, faces, and the other required parameters, and then visualize the 3D object. This has a wide range of applications in real-world scenarios where we render images and visualize them using TensorFlow. There is a dedicated TensorFlow Graphics package, which also lists wide applications across various scenarios: you can see cases where an image is taken, passed through a neural network, and used for rendering, for example converting a 2D image into a 3D representation. This is our big secret for you; you can try implementing it using TensorFlow Graphics.

88. Big secret using tfds dataset: One of the important secrets for retrieving datasets to train your model is to use the TensorFlow Datasets package. In TensorFlow, you can download different datasets using tfds. tfds contains all the important datasets that can be used for training your model, which is not available as part of other deep learning packages. The datasets are categorized by use case, such as audio, image classification, object detection, text, and so on; if you open the image classification category, you can see that there are a lot of datasets available. You can import these datasets using tfds like this: provide the dataset name, set whether it is supervised or not, split it into training, validation, and test sets, and use it with any kind of deep learning framework; for example, you can import the dataset using tfds and use it for training your PyTorch model. Because of this extensibility and the number of datasets available, this becomes very useful for your model training.
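A minimal sketch of loading a dataset with tfds as described above; the dataset name and split strings are example values.

```python
import tensorflow_datasets as tfds

# Load MNIST (example dataset) with explicit train/validation/test splits.
(ds_train, ds_val, ds_test), info = tfds.load(
    'mnist',
    split=['train[:90%]', 'train[90%:]', 'test'],
    as_supervised=True,   # returns (image, label) pairs
    with_info=True,
)

print(info.features)                     # feature description
for image, label in ds_train.take(1):    # inspect one example
    print(image.shape, label.numpy())
```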
So you can import the data just by providing these commands, instead of going to a URL, downloading the dataset manually, and then using it for model training; it is an easy way to extract the dataset. Once you store it as train, validation, and test sets, you can use any deep learning framework and train on these datasets. This is a useful secret you can apply when training your deep learning models.

89. Capstone project deep learning crash course: Let's look into the requirements for the capstone project. As part of the capstone project, you are going to build a CNN model by leveraging transfer learning. First, the dataset will be CIFAR-100, which can be accessed from the URL shown on the screen; in the model-building step, you will need to use the in-built pipeline to download this dataset. The dataset looks like this: it has various labels for the different objects it contains. Second, download the dataset using the dataset package (tfds). Third, visualize the images in the dataset to understand the objects that are there. Fourth, create a function to resize and normalize the images. Fifth, incorporate it and create a data pipeline with the dataset for training and test/validation. Sixth, import a pre-trained model. Seventh, add an additional layer; you can incorporate CNN layers as well. Lastly, train and evaluate the model. Which deep learning package you choose is up to you: TensorFlow, PyTorch, or MXNet. These are the steps you need to complete as part of this capstone project. All the best.

90. Capstone project solution walk through: Let's look at the solution for the capstone project. As a first step, we import all the necessary packages. When you import TensorFlow in Colab, you get a recent version, TensorFlow 2.x. After that, import tensorflow_hub and tfds, from which you are going to get the dataset. The dataset here is CIFAR; you can use tfds.load, provide the CIFAR dataset name, and set as_supervised because we are going to build an image classifier, which is a supervised model. Set with_info to True so that you get the dataset info. If you explore the dataset, you will have two different sets, train and test, and you can slice them to get the train and the test set. The next step is to explore the dataset: from the dataset info you got from tfds, you can see where it has been downloaded and the various features, such as the sizes of the train and test sets. You can also print the number of classes; since this is CIFAR-100, you will have 100 classes. You can take each example and check the shape of the image: the images are 32 by 32 by 3, 3 being the color channels. You can visualize an image using this code, and you can also see that this example belongs to class 67. The next step is to build the data pipeline. As part of the pipeline, provide the batch size and the image size you want, and perform the normalization if you want it. Then provide this function and map it to the data coming from the dataset; this way you create the data batches, that is, the training and the testing batches.
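Here is a minimal sketch of the capstone pipeline together with the transfer-learning head discussed next, assuming TensorFlow with tfds and tensorflow_hub; the hub URL, image size, and hyperparameters are illustrative assumptions, not the project's exact settings.

```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds

(ds_train, ds_test), info = tfds.load('cifar100', split=['train', 'test'],
                                      as_supervised=True, with_info=True)

IMG_SIZE = 224
def preprocess(image, label):
    # Resize and normalize to [0, 1] as described in the pipeline step.
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return tf.cast(image, tf.float32) / 255.0, label

train_batches = ds_train.map(preprocess).shuffle(1000).batch(32).prefetch(1)
test_batches = ds_test.map(preprocess).batch(32)

# Frozen pre-trained feature extractor from TF Hub (example URL) plus a dense head.
feature_extractor = hub.KerasLayer(
    'https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4',
    input_shape=(IMG_SIZE, IMG_SIZE, 3), trainable=False)

model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(info.features['label'].num_classes, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_batches, validation_data=test_batches, epochs=5,
          callbacks=[tf.keras.callbacks.EarlyStopping(patience=2)])
```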
The next step is the transfer learning, which is one of the objectives of our project. You can go to TensorFlow Hub and identify the URL of a pre-trained model; it could be any pre-trained model. You provide the URL and use hub.KerasLayer to extract the features, and you set the input shape to whatever you require. Since we are leveraging transfer learning, set feature_extractor.trainable to False so that we retain the pre-trained weights and parameters. Next, you add a layer on top of this pre-trained model: set up a dropout and a dense layer, or even add one more CNN layer before the dense layer. Having provided the model configuration, the next step is to compile the model: provide Adam as the optimizer and, since this is a multi-class classification, sparse categorical cross-entropy as the loss. Next, provide the early-stopping mechanism and fit the model, also providing the number of epochs. Training runs over the epochs, and you can see the training accuracy, validation accuracy, and so on. The next step is to see how it performed: use matplotlib to visualize how the training and validation accuracy improved in each epoch. There is a case of overfitting in this particular model, but you can correct it by introducing CNN layers and dropout layers. The next step is to evaluate on the test dataset; from that evaluation we see the accuracy is around 63%, and I would expect you to reach an accuracy higher than this. Next, you can save, reload, and use the model for prediction. This is the solution for the capstone project; you can also modify some of these choices, such as adding a CNN layer, using a different optimizer, or downloading a different pre-trained model.

91. Frequently asked questions: Let's see some of the frequently asked questions about the deep learning frameworks we have covered in this crash course. First, how do you check the version of TensorFlow? This is a basic question; every package has an attribute for checking the version. You can use the alias followed by a double-underscore version attribute, for example tf.__version__, to check the version of TensorFlow, PyTorch, MXNet, or whichever package you are using. The next question: can I still use the TF 1.x version? This earlier version is still available and supported as part of TensorFlow; however, it is recommended that you migrate to the newer version, because advanced features have been integrated into TensorFlow 2, which is what is covered in our crash course. Next question: can I build generative models using these packages? Of course; generative adversarial models are a combination of networks competing against each other, so the underlying architecture is still a CNN model, although one of the networks will need an upsampling layer, which is available in any of the deep learning frameworks we cover, such as TensorFlow, PyTorch, or MXNet. Moving on to the next question: can I do data augmentation using TensorFlow, PyTorch, or MXNet? Yes, you can perform data augmentation, but the options are limited compared to OpenCV, which has extensive options for data augmentation steps.
Next question: why do we need OpenCV when TensorFlow or PyTorch offer the same functionality for data augmentation? As I said, OpenCV has extensive image transformation operations that can be used specifically for data augmentation, and in addition it has feature detection options. Next question: can I perform numerical computation with TensorFlow, PyTorch, and MXNet? Yes; each of these deep learning frameworks has a module for numerical computation, and in PyTorch, MXNet, and TensorFlow these form the core methodology and core data structure for tensor computations. So there you have it.

92. Additional resources for learning: In this video, we are going to look into the resources you can use as references for learning the deep learning frameworks. In TensorFlow, if you go to the Resources section, you will find various links for tools, libraries, and extensions. You can use them as references for some frequently asked questions or for some of the modeling techniques; for example, you can see how to optimize a machine learning model and get tips to improve model performance. Similarly, for each of these packages there are various tutorials and guides you can refer to on their own websites. For PyTorch, there is extensive information on how to build advanced models like generative adversarial networks, and there are links to references about the PyTorch ecosystem, which consists of additional packages you can use in conjunction with PyTorch to achieve some of the advanced features and operations. In addition, you get access to resources about how to deploy your model on a mobile phone; these are all available on the frameworks' websites. They also have their own YouTube channels; for example, TensorFlow has its own channel where you can watch various videos and check the latest releases of the TensorFlow packages, and the same is available for all the deep learning frameworks we covered in this crash course. In MXNet, if you go to the ecosystem page, you will see a link to the deep learning book; if you open it, you can see links to various topics, all of which are covered using MXNet, so you can refer to them for a better understanding of these architectures and of how to implement them with MXNet. In addition to this, there is a blogging platform called Towards Data Science, where you can search for various topics about deep learning frameworks. These are the additional resources you can use as references for the deep learning frameworks, and you can also use them for the OpenCV package. All these links are shared as part of the attachment; you can download the links and access these websites.