Artificial Intelligence - A short introduction | Astra Learning | Skillshare

Artificial Intelligence - A short introduction

Astra Learning, Learn AI with ease!

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more

Lessons in This Class

  • 1. Course Description (0:42)
  • 2. What is AI (2:34)
  • 3. AI and the 3 faces (2:44)
  • 4. Machine Learning and the fundamentals (3:29)
  • 5. Deep Learning - Example Use Case (4:16)
  • 6. History of AI (6:23)
  • 7. Different AI-Fields (2:53)
  • 8. Future Applications (4:46)
  • 9. What we have learned so far (0:51)


Community Generated

The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected.

63 Students

1 Project

About This Class

Welcome to our Artificial Intelligence course!

This course will take you on a journey through the past, present, and future of AI. We'll explore the different fields where AI can be applied and showcase some interesting applications. You'll gain an understanding of the definition of AI, and learn about neural networks, including how they work and the differences between AI, Machine Learning, and Deep Learning. At the end of the course, you'll be assessed with a short multiple-choice test to ensure you have a solid grasp of the concepts. Join us on this exciting journey and good luck!

Meet Your Teacher

Astra Learning

Learn AI with ease!

Teacher

Hi, we are Astra Learning, a group of AI enthusiasts who want to demystify the field of Artificial Intelligence and Data Science.

Since many courses about AI and Data Science nowadays are either very theoretical, long, and boring, or just too expensive, our goal is to be the alternative and give you a helping hand.

Have fun and enjoy our courses.

Level: All Levels

Class Ratings

Expectations Met?
  • Exceeded! 0%
  • Yes 0%
  • Somewhat 0%
  • Not really 0%

Why Join Skillshare?

Take award-winning Skillshare Original Classes

Each class has short lessons and hands-on projects

Your membership supports Skillshare teachers

Learn From Anywhere

Take classes on the go with the Skillshare app. Stream or download to watch on the plane, the subway, or wherever you learn best.

Transcripts

1. Course Description: Hello there, welcome to the beginning of your first journey. In this course, we will guide you through the past, present, and future of artificial intelligence. We will also have a short glance at the different fields in which AI can be applied and show some applications that might be interesting to you. We will go through the definition of AI, define what a neural network is step by step, show how it works, and explain the differences between AI, machine learning, and deep learning. There will also be one short multiple-choice test to see whether you have grasped the concept of neural networks and AI in general. Without further ado, we wish you the best of luck and success on your journey. Stay motivated.

2. What is AI: Deep Blue beats the world champion in chess, Garry Kasparov. AlphaGo destroys its opponents in the game of Go. The first AI cars are coming. A new era for machine learning has begun. If you have heard or read such or similar things, you probably think: wow, AI is the next big thing. But what exactly is this next big thing? What exactly is artificial intelligence? And why are terms such as deep learning, machine learning, or neural networks repeatedly associated with AI breakthroughs? To cross the first obstacle on your journey, let's get started by defining the different areas in general. After we define each area, we will continue with a closer look at the differences. The definitions: if we go by the definition of one of the founding fathers of artificial intelligence, then AI is the science and engineering of making intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable. Machine learning, on the other hand, is an application of artificial intelligence that includes algorithms that parse data, learn from that data, and then apply what they have learned to make informed decisions. Then we have deep learning, which is defined as a subfield of machine learning that structures algorithms in layers to create an artificial neural network that can learn and make intelligent decisions on its own. Did you notice something? In all three definitions, the word intelligent was mentioned. That's right, intelligence. But what exactly is intelligence? How is it defined or measured? When exactly is someone or something truly intelligent? Since it is a rather often discussed topic, we define intelligence for now as a general mental capability that involves the ability to reason, solve problems, think abstractly, comprehend complex ideas, and learn from experience. It reflects a broader and deeper capability for comprehending our surroundings: catching on, making sense of things, or figuring out what to do. Now that we know the definitions of AI, machine learning, deep learning, and intelligence, we can look at how they differ from each other.

3. AI and the 3 faces: AI and the three phases. Imagine the three areas as three concentric rings, with AI being the largest ring and deep learning the smallest. Each area is simply a subset of the previous, bigger area. As a short overview, we can say for now that AI needs to be explicitly programmed and can do only one task at a time. Machine learning systems have the ability to learn and improve from experience without being explicitly programmed.
Deep learning, on the other hand, uses neural networks to analyze different structures and patterns and therefore works in a way similar to the human brain. You will hear more about that in a few minutes. Now that we are done with the short overview, let's have a closer look at each of the three areas.

We will start our journey with the field of AI. This field itself can again be differentiated into three different types. Artificial narrow intelligence, also called weak AI, is strong at certain activities but cannot surpass humans in general. Although these machines appear clever, they only have a limited range of capabilities, which is why this kind of artificial intelligence is referred to as weak AI. Narrow AI just replicates human behavior based on a limited set of factors and actions. For example, an AI program trained to win games of chess will most likely fail at the game of Go. Artificial general intelligence, aka strong AI: at this point, AI systems are becoming more human-like. Such an AI system could make its own decisions without human interaction, solve complex logical tasks that require abstract thinking, and perhaps at some point even have emotions. However, considering that the human brain is the model for creating such general intelligence, it is not surprising that achieving a strong AI is an immense challenge. Artificial superintelligence, aka super AI: if we ever arrive at this point, then one thing is for sure: such a robot or being would not only outperform humans in multiple tasks, it would be ahead of humans in almost every thinkable area, such as intelligence, wisdom, social skills, creativity, and many more. Well, if this causes some fear that machines will overrun us one day, don't worry: we are still far from even reaching the second phase. Currently no strong or super AI is known to exist, and it will probably still take decades to arrive there.

4. Machine Learning and the fundamentals: Machine learning and the fundamentals. Diving deeper into the next layer, we arrive at machine learning. Machine learning is a subset of AI and focuses on learning how to solve specific tasks without being explicitly programmed. Instead of just executing a list of automatic instructions, machine learning models improve through experience and the use of statistics. For this, they need three components to work.

Number one, datasets. Before applying machine learning models to any task, they need to be trained on a collection of samples, also called a dataset. Usually, this is one of the most time-consuming steps in machine learning, since most datasets require many thousands of samples, which take a lot of time and effort to create. One of the most well-known datasets is, for example, the Iris flower dataset. This multivariate dataset covers three different flower species, each with 50 samples. Each sample has four features which describe, for example, the petal length or the petal width. Since this dataset is open to anyone and rather easy to handle, it is often recommended to AI beginners starting their first AI project.

Number two, features. Usually, features are pieces of data that describe the samples. Let's stay with the Iris flower dataset. In this dataset, there are four features which describe the flowers: petal length, petal width, sepal length, and sepal width. Depending on your model and the features, it can make a big difference in how your model performs during training and testing. Have a look at the following graphics. Here we plotted the correlation map of the four features. This allows us to see which features are correlated with each other and which features are best for separating the dataset. A good choice might be the petal length and petal width. Why those two, you might ask? Let's have a closer look. In the graphics, we see the data points in three different colors. These three colors represent, in this case, the flower species. Now our task is to look at the plots and decide which features best separate the dots of different colors. For example, the first picture in the second row does a pretty good job of separating the yellow dots from the other ones, but it completely fails to separate the pink ones from the purple ones. However, if we look at the third picture in the last row, we can see that all three colors are almost perfectly separated. The features used there were petal length and petal width. If you are still curious about the graphics, just pause the video for a few seconds and have a look at the other rows and columns.
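
For readers who want to try this feature comparison themselves, here is a minimal sketch using scikit-learn's bundled copy of the Iris dataset. The choice of library, the 80/20 split, and the logistic-regression classifier are illustrative assumptions for this sketch, not something prescribed in the class.

```python
# Minimal sketch: compare how well two Iris feature pairs separate the species.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

iris = load_iris()  # features: sepal length, sepal width, petal length, petal width

for cols, label in [([0, 1], "sepal length & width"),
                    ([2, 3], "petal length & width")]:
    X = iris.data[:, cols]   # keep only the two chosen features
    y = iris.target          # 0, 1, 2 = the three species
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    clf = LogisticRegression(max_iter=200).fit(X_train, y_train)
    print(f"{label}: test accuracy = {clf.score(X_test, y_test):.2f}")
```

On a typical run, the petal-based pair reaches noticeably higher accuracy, which matches the observation above that petal length and petal width separate the three species best.
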
However, we will now move on to the next point: number three, algorithms. An algorithm can be imagined as a list of instructions which is executed step by step to solve a specific task. However, in machine learning it is often the case that multiple different algorithms can be used in combination with statistical methods to solve the same task or to get better performance. One could also just combine multiple algorithms and play around with the settings. Now that we know what components are needed for machine learning, let's have a look at deep learning in the next video.

5. Deep Learning - Example Use Case: Deep learning and neural networks. Do you remember how important it was in the case of machine learning to select good features? In the case of deep learning, that is not necessary anymore. Instead, the model extracts the features itself and improves with the help of so-called neural networks. Since deep learning was inspired by the structure of our brains, deep learning algorithms use complex multi-layer neural networks to abstract previously unknown patterns in the data and come to a solution. Still no clue what a neural network is? Usually, explaining exactly how neural networks work would involve some mathematics, but since this is an introduction to AI, we will explain it in a rather simple way. Neural networks consist of the following node layers: an input layer, one or more hidden layers, and an output layer. Each node, also called an artificial neuron, connects to others and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network.
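
To make the weight-and-threshold description above concrete, here is a minimal sketch of a single forward pass through a tiny network in plain NumPy. The layer sizes, the random weights, and the hard threshold at zero are illustrative assumptions for this sketch; real networks learn their weights from data and usually use smooth activations such as sigmoid or ReLU.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, bias, threshold=0.0):
    """One node layer: weighted sum per node, then a hard threshold.
    A node whose weighted sum exceeds the threshold fires and passes its
    value on; otherwise it passes 0 to the next layer."""
    z = inputs @ weights + bias
    return np.where(z > threshold, z, 0.0)

# A tiny network: 4 inputs -> 3 hidden nodes -> 2 output nodes.
x = np.array([5.1, 3.5, 1.4, 0.2])            # e.g. the 4 features of one Iris sample
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=3)   # input -> hidden weights
W2, b2 = rng.normal(size=(3, 2)), rng.normal(size=2)   # hidden -> output weights

hidden = layer(x, W1, b1)       # input layer to hidden layer
output = layer(hidden, W2, b2)  # hidden layer to output layer
print("hidden activations:", hidden)
print("output activations:", output)
```

Training would then mean adjusting W1, W2, and the biases based on examples, which is exactly why the transcript goes on to stress that neural networks need a lot of data.
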
Now, to be able to train a neural network, we would need data, a lot of data actually; only then can we truly improve the accuracy of the model over time. But once these learning algorithms are fine-tuned, they will allow us to classify and cluster data in a very short time. Now that we have gone through everything, how about a small example?

Example use case: suppose you own a small business that specializes in sorting fruits into different categories. In the sorting plant, the fruits are all mixed up. It is necessary to separate the fruits and package them into the correct crates before delivering them to supermarkets. Among the fruits that need to be sorted are bananas, apples, and oranges.

Now that we know the task, let's go through each of the three areas. The AI approach: here you would use an AI-based algorithm that makes use of decision logic within a rule-based system. An example would be: if the object is an apple, then transport it to the right; if the object is a banana, then transport it to the left. However, the AI-based system's success depends on the fruit being accurately labeled by the fruit pickers and on having a scanning mechanism in place to tell the algorithm what the fruit is.
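
As a concrete illustration of that decision logic, here is a minimal rule-based sketch. The crate directions and the fallback action are invented for this example; the point is that every rule has to be written by hand, and anything the rules do not cover cannot be sorted.

```python
def sort_fruit(label: str) -> str:
    """Rule-based sorting: hand-written decision logic.
    Works only if a scanner (or a fruit picker) supplies a correct label."""
    rules = {
        "apple": "transport to the right crate",
        "banana": "transport to the left crate",
        "orange": "transport to the middle crate",
    }
    # Anything outside the hand-written rules cannot be handled.
    return rules.get(label, "stop the belt and ask a human")

for item in ["apple", "banana", "orange", "kiwi"]:
    print(item, "->", sort_fruit(item))
```

The machine learning and deep learning approaches described next replace these hand-written rules with a mapping that is learned, either from measured features or directly from camera images.
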
The machine learning approach: a machine learning-based algorithm is now proposed to improve the AI-based approach to fruit sorting when labels are not available. For machine learning to work, a description of what each fruit looks like is needed. This is called feature extraction and is done by creating a blueprint based on the features and attributes unique to each fruit. The algorithm is trained using features such as size, color, shape, and so on to classify the fruits. Moving on to the next approach, we arrive at deep learning. By removing the need to define what each fruit looks like, a deep learning-based algorithm could be used to sort any fruit. A major advantage of the deep learning model is that it does not require predefined features to classify the fruits correctly. With lots of fruit images, the model can build up a pattern of what each fruit looks like. Multiple layers of neural networks are used to process the images in the deep learning model, and each network layer picks up specific features of the images, like the shape of the fruits, the size, the color, and so on. However, for the model to achieve good results, it will require significant computational power and vast amounts of data. Now that you know roughly the differences between AI, machine learning, and deep learning, let's have a look at the history of AI in the next part.

6. History of AI: The history of AI, from the past to the present. Moving towards the first winter. Welcome to the history of AI. After hearing and reading many articles about successes in AI, many people might assume that it is a relatively new field, but this is not the case. It has a longer past than you might think. Let's take a seat and have a talk about its awesome history and success stories. Today we hear a lot about new achievements in the fields of artificial intelligence, automation, and robotics. But did you know that the idea of intelligent machines already existed in ancient times? Do you know the story of Talos, the bronze giant? According to myth, Talos was a giant bronze man created by the Greek god of invention and blacksmithing. Zeus, the king of the Greek gods, assigned him the task of defending the island of Crete from attackers. While we haven't created giant robots or anything like that in the recent past, we still have had a lot of interesting developments. Let's start with Asimov's three laws. These laws were first described by Isaac Asimov as the basic rules of robotic behavior and should be followed by any type of robot. Asimov's rules are stated as follows. First, a robot shall not knowingly injure a human being or, through inaction, allow a human to be harmed. Second, a robot must obey orders given to it by a human, unless such an order would conflict with rule number one. Third, a robot must protect its own existence as long as that protection does not conflict with rule number one or number two. Moving forward in time, we meet Alan Turing with the so-called Turing test. In 1950, he tried to formulate how one could determine whether a computer or model has the same ability to think as a human. The test uses a simple question-and-answer process between a human questioner and two anonymous respondents who are not visible to the questioner. Free, non-predetermined questions are asked without any visual or auditory contact, using input tools such as a keyboard and a screen. If, at the end of the test, the human questioner cannot determine from the answers which of the two respondents is the machine, the intelligence of the machine can be defined as human-like. Only six years later, the famous Dartmouth Conference took place. The Dartmouth Conference is considered the birth of artificial intelligence as an academic discipline. It was requested, planned, and carried out by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon under the full name Dartmouth Summer Research Project on Artificial Intelligence. It took place in the summer of 1956, from June 18th to August 16th, at Dartmouth College in New Hampshire. Topics such as automatic computers, neural networks, abstraction, and randomness and creativity were discussed. And as it turned out, after just a few years practically all participants in the conference had become internationally renowned experts in the field of artificial intelligence. Many other innovations followed the Dartmouth Conference, such as the first chatbot, ELIZA, which was supposed to take over the task of psychotherapists. However, as promising as these projects were, researchers finally concluded that the real world is just far too complex to be processed by such models, which led to the cancellation of important funding and the beginning of the first AI winter in the 1960s.

Preparing for the second winter. After the effects of the first AI winter began to fade, a new age of AI began. This time, much greater emphasis was placed on developing commercial products. Furthermore, significant conferences such as that of the Association for the Advancement of Artificial Intelligence began in the early 1980s and saw a tremendous surge in ticket sales. AI technology had piqued the curiosity of both the general public and government authorities. Expert systems were crucial to the commercialization of AI. These systems were created by developing if-then rule sets and were used in a variety of applications, including financial planning, medical diagnostics, geological investigation, and microelectronic circuit design. However, since the models and techniques were still very limited and could not solve more complex problems, the second winter came just a few years later.

The present: progress was slow after the second winter, but major breakthroughs came only a few years later. Among other things, it became possible to defeat the then chess world champion Garry Kasparov with the help of Deep Blue. Deep Blue was a supercomputer developed by IBM specifically for playing chess and was best known for being the first AI program to ever win a chess match against the reigning world champion. After losing the first six-game match against Garry Kasparov in 1996 and receiving a massive upgrade, Deep Blue was able to beat the world champion in May 1997. A few years later, AlphaGo beat the world champion in the game of Go four to one. It might not sound like a big milestone, but it truly is. AlphaGo differs greatly from earlier AI projects.
To calculate its chances of winning, it used neural networks rather than probability techniques hard-coded by human programmers. In addition to the games that AlphaGo plays against itself and other players, AlphaGo also accesses and analyzes the complete Internet Go library, including all games, players, stats, and literature. Once set up, it examines the optimal strategy to solve the game of Go without the assistance of the development team. AlphaGo estimates enormous numbers of probabilities for many moves into the future using neural networks and Monte Carlo tree search, which you will learn more about in another course. Now that we are at the end of the history, it's time to go back to the future.

7. Different AI-Fields: Future applications. There are so many theories about what impact AI will have on us in the future. And since there are so many possibilities, let's just have a look at three examples that could soon become a reality.

Number one, fully smart and autonomous cities. The concept of fully smart and autonomous cities is an exciting possibility for the future of AI. With the advancements in technology, we can see homes and apartments becoming smarter with voice recognition systems, fingerprint sensors, and more. If this trend continues, we could soon see entire cities becoming fully autonomous. In these cities, everything from garbage disposal to public transport could be operated without human intervention. Just imagine waste disposal trucks driving themselves to designated areas for collection, or public transport systems that automatically reroute based on traffic and passenger demand. One of the potential benefits of autonomous cities is the reduction in traffic, pollution, and accidents caused by human error. This could lead to a cleaner and safer environment for residents. Additionally, autonomous cities could also reduce the cost of public services and enhance their efficiency.

Number two, AI discovering new technologies and laws of physics. That's right: it has already been possible to predict some physical processes on a small scale with the help of AI, or even to create new mathematical theories. Scientists from Osaka University and Kobe University, for example, have succeeded in extracting Hamiltonian equations using neural networks. Here is some short info: Hamiltonian mechanics is based on Lagrangian and Newtonian mechanics. Without going into too much detail, in physics Hamiltonian mechanics is the theory of how energy changes from kinetic energy to potential energy and back again over time. It is used to describe systems like a pendulum or a bouncing ball. However, its strength is demonstrated in more complex systems like celestial mechanics or planetary orbits.
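
For the mathematically curious, the core of Hamiltonian mechanics can be written in two short equations. The pendulum Hamiltonian below is a standard textbook example chosen for illustration; it is not taken from the study mentioned in the video.

```latex
% Hamilton's equations: the Hamiltonian H(q, p), i.e. the total energy
% (kinetic + potential), determines how position q and momentum p evolve.
\[
  \dot{q} = \frac{\partial H}{\partial p}, \qquad
  \dot{p} = -\frac{\partial H}{\partial q}
\]
% Example: an ideal pendulum with mass m, length l, angle q,
% and angular momentum p:
\[
  H(q, p) = \frac{p^{2}}{2 m l^{2}} + m g l \, (1 - \cos q)
\]
```

A network in this spirit learns H(q, p) from observed trajectories and then uses the two equations above to predict the motion, instead of having the physics hard-coded.
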
Number three, AI in law and order. I'm sure you have often heard that the legal system is struggling with too many tasks. To help the legal system out, an AI created through cooperation with lawyers, judges, developers, and other groups of people could be used in smaller court cases such as damage claims. It can also save valuable time when structuring and preparing files. Nevertheless, there are also moral and ethical questions in this regard. However, since we don't have the time to go through those questions here, we will learn more about them in another course.

8. Future Applications: Different AI fields and overview. Now that we have gone through the history of artificial intelligence, let's have a look at how it is used in different fields. Since AI is a very complex and broad field, it is difficult to keep an overview, and even impossible to list all the areas that make it up. To help you out, you will first get an overview of the most important areas. Machine learning, knowledge representation, planning, neural networks, robotics, computer vision, NLP, searching, and many more are all important subareas of AI. One crucial subarea of AI is knowledge representation, which involves representing information about the world in a format that a computer system can use to perform complex tasks, such as diagnosing medical conditions or engaging in natural language conversations. NLP, on the other hand, enables computers to understand and interpret human language, while computer vision is vital in enabling machines to perceive their environment. Each subarea is essential and plays a unique role in the development of AI. While it is impossible to cover all these exciting subareas in this video, let's focus on some examples of how AI is currently being used in various industries and applications.

Examples. Searching for exoplanets: did you know that in the last decade alone, over 1 million stars have been observed to find out if they are home to exoplanets? In short, exoplanets are planets that orbit other stars. So far, the search has been largely manual, but through the use of AI, and especially deep learning, the process can be automated and quantified. Just imagine: instead of 100 planets a year, you suddenly find thousands of new planets. In this context, a group of astronomers from the Universities of Geneva and Bern and the NCCR PlanetS in Switzerland teamed up with a high-tech company to use artificial intelligence for identifying planets in pictures. They wanted to find exoplanets that were previously undetectable, so they trained a computer program to predict how planets interact with each other. By using this new technique, the scientists were able to improve the search for exoplanets and make discoveries that they would not have been able to make otherwise.

AI in drug discovery: various pharmaceutical companies such as Pfizer, Moderna, and others are already using AI to significantly shorten the research process for new drugs. The best example of this is the development of the COVID vaccine by the pharmaceutical company Moderna. With the help of data from the SARS virus, a predecessor of the coronavirus, in combination with AI and especially deep learning, the company managed to provide the vaccine in a very short time. However, AI is not only used in the search for the right vaccine composition, but also in part to create drugs and test them for side effects in simulations, which not only saves time and money but also reduces the number of animal experiments.

AI creates art: that's right, AI creates images, videos, backgrounds, and artwork. With new emerging AI players such as Stable Diffusion, DALL-E, or Midjourney, the creation of images, videos, or art is easier than ever before. Just have a look at this short deepfake video of a speech: "President Trump is a total and complete..." You see, I would never say these things, at least not in a public address. But that was pretty scary. How about creating fake faces instead? Although AI isn't perfect at it right now, imagine how it will be in the upcoming 15 to 20 years. There is also the possibility of combining two images to create a completely new work. For example, let's just take the picture of the Mona Lisa, but render it this time in a completely different painting style.
Or instead, how about a combination of The Scream and a picture of Obama? With a better understanding of the current applications of AI, we can now turn our attention to the exciting possibilities for the future of this technology. Let's explore some of the potential future applications of AI in the next chapter.

9. What we have learned so far: What we have learned so far. Arriving at the end, let's just rethink what we have learned so far. We looked together at the terms and the differences between AI, machine learning, and deep learning. Then we were able to catch a glimpse of the past of AI and surprisingly found that AI is an older research area than previously thought. We heard about Asimov's laws and the Turing test. Back in the present, we learned which areas AI consists of and where it is already used today. In the last chapter, we were able to speculate as to how AI could develop from where it currently stands. Now that you have a solid foundation of knowledge on AI, you are all set to dive into the rest of the courses with ease. So keep up the great work and stay motivated.