Ollama Masterclass: Run LLMs locally | Amit Diwan | Skillshare

Ollama Masterclass: Run LLMs locally

teacher avatar Amit Diwan, Corporate Trainer



Lessons in This Class

  1. About - Course Intro (1:18)
  2. Ollama - Introduction and Features (3:43)
  3. Install Ollama on Windows 11 locally (2:15)
  4. Install Llama 3.2 on Ollama Windows 11 locally (3:33)
  5. Install Mistral 7b on Ollama Windows 11 locally (4:17)
  6. List the running models on Ollama locally (0:39)
  7. List all the models installed on your system with Ollama (0:57)
  8. Display the information of a model using Ollama locally (1:16)
  9. How to stop a running model on Ollama (1:10)
  10. How to run an already installed model on Ollama locally (1:51)
  11. Create a custom GPT or customize a model with Ollama (8:41)
  12. Remove any model with Ollama locally (1:31)


47 Students

About This Class

Welcome to the Ollama Course!

Ollama is an open-source platform to download, install, manage, run, and deploy large language models (LLMs), and all of this can be done locally. LLMs are models designed to understand, generate, and interpret human language at a high level.

Features

  • Model Library: Offers a variety of pre-built models like Llama 3.2, Mistral, etc.

  • Customization: Allows you to customize and create your models

  • Easy: Provides a simple API for creating, running, and managing models

  • Cross-Platform: Available for macOS, Linux, and Windows

  • Modelfile: Packages everything you need to run an LLM into a single Modelfile, making it easy to manage and run models
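As a concrete illustration of the Modelfile idea, a minimal file might look like the following. This is a sketch only; the base model tag and system prompt here are illustrative, not taken from the course:

```
FROM llama3.2
PARAMETER temperature 1
SYSTEM You are a helpful teaching assistant who answers briefly.
```

The `FROM` line names the base model, `PARAMETER` tunes generation settings, and `SYSTEM` sets the assistant's persona.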

Popular LLMs, such as Llama by Meta, Mistral by Mistral AI, Gemma by Google DeepMind, Phi by Microsoft, and Qwen by Alibaba Cloud, can run locally using Ollama.
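The "simple API" mentioned in the features above is an HTTP API served by the local Ollama daemon, by default on port 11434. The sketch below assumes the daemon is running and that the `llama3.2` model is installed; the guard makes the script safe to run even when neither is true:

```shell
# Query the local Ollama HTTP API (default port 11434).
# The model name in the request body is an assumption from the course material.
REQUEST='{"model": "llama3.2", "prompt": "What is generative AI?", "stream": false}'
if curl -s --max-time 2 http://localhost:11434/api/tags >/dev/null 2>&1; then
  # Daemon is reachable: ask for a single non-streamed completion.
  curl -s http://localhost:11434/api/generate -d "$REQUEST"
else
  echo "Ollama daemon not reachable; request shown above for reference"
fi
```

The same request works from any language with an HTTP client, which is what makes Ollama easy to script against.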

In this course, you will learn about Ollama and how it eases a programmer's work of running LLMs. We discuss how to get started with Ollama, and how to install and run LLMs like Llama 3.2 and Mistral 7b. We also cover how to customize a model and create a teaching-assistant chatbot locally by writing a Modelfile.

Lessons covered

  1. Ollama - Introduction and Features

  2. Install Ollama on Windows 11 locally

  3. Install Llama 3.2 on Windows 11 locally

  4. Install Mistral 7b on Windows 11 locally

  5. List all the models running on Ollama locally

  6. List the models installed on your system with Ollama

  7. Show the information of a model using Ollama locally

  8. How to stop a running model on Ollama

  9. How to run an already installed model on Ollama locally

  10. Create a custom GPT or customize a model with Ollama

  11. Remove any model from Ollama locally

Note: We have covered only open-source technologies.
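The lessons above map onto a handful of Ollama CLI subcommands. Here is a quick-reference sketch; the model names are the ones used in the course, and the guard lets the script run even on a machine where Ollama is not installed:

```shell
# Quick reference for the Ollama CLI commands covered in this course.
MODEL="llama3.2"   # model tag used in the lessons (an assumption of this sketch)
if command -v ollama >/dev/null 2>&1; then
  ollama list                    # list all models installed locally
  ollama ps                      # list models currently loaded in memory
  ollama show "$MODEL" || true   # model details (may fail if not installed)
  ollama stop "$MODEL" || true   # unload a running model
  # ollama rm mistral            # delete a model from disk (destructive, so commented out)
else
  echo "ollama CLI not found; commands listed above for reference"
fi
```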

Let's start the journey!

Meet Your Teacher

Teacher Profile Image

Amit Diwan

Corporate Trainer

Teacher

Hello, I'm Amit,

I'm the founder of an edtech company and a trainer based in India. I have over 10 years of experience in creating courses for students, engineers, and professionals in varied technologies, including Python, AI, Power BI, Tableau, Java, SQL, MongoDB, etc.

We are also into B2B and sell our video and text courses to top EdTechs on today's trending technologies. Over 50k learners have enrolled in our courses across all of these edtechs, including SkillShare. I left a job offer from one of the leading product-based companies and three government jobs to follow my entrepreneurial dream.

I believe in keeping things simple, and the same is reflected in my courses. I love making concepts easier for my audience.

See full profile

Level: Beginner



Transcripts

1. About - Course Intro: In this video course, you will learn Ollama and its concepts. Ollama is an open-source platform to download, install, manage, run, and deploy large language models, and all of this can be done locally. Popular LLMs such as Llama by Meta, Mistral, Gemma by Google DeepMind, Phi by Microsoft, and Qwen by Alibaba Cloud can run locally using Ollama. LLM stands for large language model. These models are designed to understand, generate, and interpret human language at a high level. They are trained on vast datasets and can perform tasks like text generation, summarization, translation, and answering questions. Here, the LLM processes an input prompt and generates a response, just as when a prompt is typed into ChatGPT. In this course, we cover the following lessons with live running examples. We also show how to create a custom GPT, that is, a ChatGPT-style model in the form of an assistant. Let's start with the first lesson.

2. Ollama - Introduction and Features: In this lesson, we will learn what Ollama is and what its features are. Ollama is an open-source platform that lets a user download, install, run, and deploy large language models, all on your local system. So what are LLMs? LLM stands for large language model. These models are designed to understand, generate, and interpret human language, and they include billions of parameters. You must have heard of LLMs like Llama, Phi, Gemma, and Mistral; as the parameter count increases, the performance of the model also improves, allowing it to grasp complex language patterns and generate text. Currently we have Llama 3.2, Mistral 7B, and others. These models are trained on vast datasets such as articles from the internet, books, and other sources. When you ask a question at the prompt, the LLM processes the information and generates new content for you, which is why we call it generative AI. It can perform tasks like answering questions, translation, and summarization, and it can generate text for you: articles, blogs, or an email. You can run these LLMs using Ollama. These are the features of Ollama: it provides a pre-built model library that includes Llama by Meta, Mistral by Mistral AI, Phi by Microsoft, and many others. Ollama also lets you customize and create your own model; we will see in this course how to do that. It is easy to run and manage, and it includes simple APIs. It is cross-platform: you can work with Ollama locally on Windows, Mac, and Linux. It uses a Modelfile; we will create one later when we build our own model. The Modelfile packages everything you need to run an LLM into a single file, which we will use to manage and run our model. So why is Ollama so popular? It lets you run LLMs on your local system, on Windows 10, Windows 11, Mac, or Linux. After installing Ollama locally, you can easily download, run, deploy, and manage Llama, Gemma, Mistral, Phi, and Qwen: Qwen is by Alibaba Cloud, Phi by Microsoft, Mistral by Mistral AI, Gemma by Google, and Llama by Meta. In this video, we saw what Ollama is, why it is widely used, and what its features are. Thank you for watching the video.

3. Install Ollama on Windows 11 locally: In this video, we will learn how to install Ollama on Windows 11. Ollama is an open-source platform to download, install, deploy, and run LLMs. Open the web browser, type "Ollama" on Google, and press Enter. The official website appears; click on it. We want the version for Windows, so click Download, then the "Download for Windows" button. The download starts; it is about 664 MB, so let's wait. Once the EXE file is downloaded, right-click it and click Open to begin the installation, then click Install and wait for the setup to finish. We have now installed Ollama. To verify, go to Start, type CMD, click Open, then type "ollama" and press Enter. If the list of available commands is visible, Ollama was installed successfully. In this way, we can easily install Ollama.

4. Install Llama 3.2 on Ollama Windows 11 locally: In this video, we will learn how to install Llama 3.2 on Windows 11. First we need Ollama, which is an open-source platform to run LLMs; we already installed it in the previous videos, so now we will directly install Llama 3.2. To run a model, use the run command: type "ollama run" followed by the model name, that is, "ollama run llama3.2". You can verify this on the official website: go to Models, and you can see Llama 3.2 by Meta. Click on it and you will find the same command, along with all the details about Llama 3.2. Press Enter, and it will take some time to install. It also shows how many GB will be downloaded; here it is about 2 GB as it pulls, so let's wait. Once "success" is visible, the model is installed, and you can type a prompt directly. Let's say I ask, "What is generative AI?", and it gives me an answer. So you have installed Llama 3.2 successfully on your system using Ollama, and you can ask it anything. In this video, we installed Llama 3.2 on Ollama and tried a prompt. Thank you for watching the video.

5. Install Mistral 7b on Ollama Windows 11 locally: In this video, we will learn how to install Mistral on Windows 11 locally. First we need Ollama, which we already installed in the previous videos. To run the Mistral model, use the run command: "ollama run" followed by the model name. To find the exact model name, go to Models on the website; you can search for the model you want or scroll down until Mistral is visible. We want 7B, so click on it; here is the command. If you want another version, you can choose it here, but right now 7B is the latest. Copy the command or type "ollama run mistral" and press Enter. It will take some time to pull; it is about 4 GB, so it will take a good number of minutes. Once it is successful, you can ask anything directly. Let's say I ask, "Who are you?", and you can see the reply: "I'm a large language model trained by Mistral." Let me ask, "What is Python?", and let's see the answer from the model. That's it; we successfully installed Mistral. Thank you for watching the video.

6. List the running models on Ollama locally: In this video, we will learn how to list the running models on Ollama. Go to Start, type CMD, and click Open. Type "ollama" and you can see all the commands. To list the running models, type "ollama ps" and press Enter. The latest one, Mistral, is shown. In this way, you can easily list the running models. Thank you for watching the video.

7. List all the models installed on your system with Ollama: In this video, we will see how to list all the models we installed using Ollama on our local system. Go to Start, type CMD, and click Open. Type "ollama" and you can see all the commands. We want to list all installed models, not just the running ones, so type "ollama list" and press Enter. We have installed two models so far, Mistral and Llama 3.2, and both are visible because we installed them using Ollama. In this way, you can easily list the models you have installed. Thank you for watching the video.

8. Display the information of a model using Ollama locally: In this video, we will see how to show the information of a model we installed on our system using Ollama. At the command prompt, type "ollama" and all the commands are visible; to display a model's details, we use the show command. Right now we have two models, which we listed with "ollama list". Say I want the information for Llama 3.2: type "ollama show llama3.2" and see what appears. All the information is visible: the architecture, the parameters (3.2 billion), the context length, the quantization used for optimization, and the license. We can also check our second model: type "ollama show mistral" and press Enter. Here is its information: 7.2 billion parameters, the context length and embedding length, the quantization, and the license. So we saw how to show the information of a model with Ollama. Thank you for watching the video.

9. How to stop a running model on Ollama: In this video, we will learn how to stop a running model in Ollama. Here you can see we ran our model again with "ollama run llama3.2". We can verify this with the list-running-models command: type "ollama ps" and Llama 3.2 is shown because it is currently running. Now we need to stop it. Type "ollama" to see all the commands; "stop" stops a running model. Type "ollama stop" followed by the name of the model, that is, "ollama stop llama3.2", and press Enter. Now it is stopped; verify again with "ollama ps" and nothing is listed. We stopped the running model using the stop command. Thank you for watching the video.

10. How to run an already installed model on Ollama locally: In this video, we will learn how to run an already installed model on Ollama locally. Go to Start, type CMD, and click Open. First, list the models already installed: type "ollama list" and press Enter, and you can see we installed two of them. None of them is running, which we can verify with the list-running-models command: type "ollama ps", press Enter, and none are listed right now. So let's run one. Say I run Llama 3.2: use the run command, "ollama run llama3.2" (we need to mention the version exactly, 3.2), and press Enter. It starts again, and now you can type any prompt. In this way, you can run your LLM models again on Ollama using the run command. Thank you for watching the video.

11. Create a custom GPT or customize a model with OLLAMA: In this video, we will learn how to create a custom GPT, or in other words customize a model, with Ollama. On Google, type "Ollama", press Enter, and open the official website; we already downloaded Ollama. Now go to their official GitHub and scroll down to the "Customize a model" section. We need to create this Modelfile. We will write it in VS Code, then create the model and run it under our own name. First, let us install VS Code: in the web browser, type "VS Code" on Google and press Enter. The official website, code.visualstudio.com, appears; click on it, click Download, and versions for Windows, Linux, and Mac are shown. Click the Windows one and the installer downloads. Right-click it and click Open to begin the installation, accept the agreement, and click Next; it takes about 373 MB. Click Next again; it will also create a Start Menu folder. If you want a desktop icon you can tick that option, but I won't, so click Next, then Install. We have successfully installed VS Code. I'll uncheck the launch option and click Finish so I can open it from Start instead. Go to Start, type VS Code, and open it. Click Open Folder; my folder is on the D drive. I'll create a new folder, "Amit project", select it, and trust it. Right-click, create a new file, and name it "Modelfile". Now copy the "customize a model" example from the GitHub page and paste it in. I have made some changes: instead of the Mario assistant, I wrote "You are a cricket analyst. Answer as Amit Diwan's assistant only; you know how to play cricket, train other cricketers, and all the terms related to cricket." Save the file, then right-click the folder and copy its path. Go to Start, type CMD, and click Open. Check which models are running with "ollama ps"; Llama 3.2 is running. Now navigate to the folder: type "d:" to switch to the D drive, run DIR to see the "Amit project" folder, then "cd Amit project" to enter it. Now type the create command with your model name: "ollama create cricket-analyst-amit -f Modelfile" and press Enter. I took this from the GitHub example, which creates a model named mario, and just changed the model name. (At first I forgot to type "ollama"; "ollama create" is the command, which you can verify in the help: "create a model from a Modelfile".) Press Enter and you can see "success". Our model is created, so now run it: "ollama run cricket-analyst-amit" and press Enter. This is our custom GPT, which acts as our assistant. Type "Hi, how are you doing today?", and you can see it replies that it is a cricket analyst. Let me ask how to play a perfect cover drive, and the cricket assistant replies. So in this way, you can easily create your custom GPT; we created a teaching assistant in the form of a cricket analyst. Remember, our files were in the "Amit project" folder on the D drive: the Modelfile, opened in VS Code, where we added our system prompt, set the temperature to 1 for more creativity, and set Llama 3.2 as the base model. You can take the example from the official GitHub; this was our Modelfile, and these were the commands to create and run it.

12. Remove any model with Ollama locally: In this video, we will learn how to remove any model on Ollama locally. Go to Start, type CMD, and open it. Type "ollama" to find all the commands. First, list all the models on the system: type "ollama list" and press Enter. Here are the two models we installed, Mistral and Llama 3.2; the first is by Mistral AI and the second by Meta. To remove one, use the rm command: "ollama rm" followed by the model name, for example "ollama rm mistral". If you remove Llama, you also need to mention the version, so in this case the name is "llama3.2". So let me uninstall that one: type "ollama rm llama3.2" and press Enter, and it is deleted. Check "ollama list" again and only one model is visible; Mistral is shown because we just deleted Llama 3.2. So in this way, you can easily remove any model with Ollama. Thank you for watching the video.
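The custom-model workflow from lesson 11 can be condensed into a few commands. This is a sketch: the folder and model names mirror the video, the system prompt is paraphrased from it, and the final ollama steps require the daemon, so they are shown as comments:

```shell
# Recreate the lesson-11 setup: a project folder holding a Modelfile.
mkdir -p amit-project

# Write the Modelfile; base model and settings follow the lesson.
cat > amit-project/Modelfile <<'EOF'
FROM llama3.2
PARAMETER temperature 1
SYSTEM You are a cricket analyst. Answer as Amit Diwan's assistant only; you know how to play cricket, train other cricketers, and all the terms related to cricket.
EOF

# Build and chat with the custom model (requires the Ollama daemon):
#   cd amit-project
#   ollama create cricket-analyst-amit -f Modelfile
#   ollama run cricket-analyst-amit
```

Writing the prompt into the Modelfile rather than typing it at every chat is exactly what makes the result feel like a custom GPT: the persona travels with the model name.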