LangChain Crash Course | Amit Diwan | Skillshare

LangChain Crash Course

Amit Diwan, Corporate Trainer

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

    • 1.

      About Course

      0:40

    • 2.

      LangChain - Introduction, Features, and Use Cases

      4:20

    • 3.

      What is Chaining in LangChain

      1:42

    • 4.

      Components/ Modules of LangChain

      2:59

    • 5.

      Preprocessing Component of LangChain

      1:42

    • 6.

      Models Component of LangChain

      1:57

    • 7.

      Prompts Component of LangChain

      1:59

    • 8.

      Memory Component of LangChain

      1:38

    • 9.

      Chains Component of LangChain

      1:31

    • 10.

      Indexes Component of LangChain

      1:57

    • 11.

      Agents Component of LangChain

      1:49

    • 12.

      LangChain with RAG - Process

      2:56

    • 13.

      LangChain with RAG - Final Coding Example

      10:47


Community Generated

The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected.

Students: 10

Projects: --

About This Class

Welcome to the LangChain course. LangChain is a framework designed to build applications powered by large language models (LLMs). It provides tools and abstractions to make it easier to integrate LLMs into applications, enabling tasks like question answering, text generation, retrieval-augmented generation (RAG), chatbots, and more.

LangChain - Use Cases

Here are some of the use cases of LangChain:

  1. Question Answering: Build systems that answer questions by retrieving relevant information and generating answers using LLMs.
  2. Chatbots: Create conversational agents that can maintain context across interactions.
  3. Retrieval-Augmented Generation (RAG): Combine retrieval of relevant documents with text generation for more accurate and context-aware responses.
  4. Text Summarization: Generate summaries of long documents or articles.
  5. Code Generation: Build tools that generate code based on natural language descriptions.
  6. Personal Assistants: Create virtual assistants that can perform tasks like scheduling, email drafting, or information retrieval.

Course Lessons

LangChain – Introduction

  • LangChain - Introduction, Features, and Use Cases
  • What is Chaining in LangChain

LangChain – Components

  • Components/ Modules of LangChain
  • Preprocessing Component of LangChain
  • Models Component of LangChain
  • Prompts Component of LangChain
  • Memory Component of LangChain
  • Chains Component of LangChain
  • Indexes Component of LangChain
  • Agents Component of LangChain

LangChain with RAG

  • LangChain with RAG - Process
  • LangChain with RAG - Final Coding Example

What you'll learn

  • Learn LangChain from scratch
  • Understand the LangChain workflow
  • Summarize multiple PDF documents with LangChain and RAG
  • Understand chaining in LangChain
  • Get to know the LangChain components with examples
  • Load and parse the PDF documents
  • Split documents into chunks
  • Set up the embedding models
  • Learn to create a vector store from the document chunks
  • Set up a local LLM
  • Learn to create a QA chain

Who this course is for

  • Those who want to begin their AI journey
  • Beginner AI Enthusiasts
  • Those who want to learn LangChain with RAG
  • Those who want to understand chaining in LangChain
  • Those who want to summarize multiple PDF documents

Note: We have attached the Google Colab Notebook we ran in the class

Meet Your Teacher

Amit Diwan

Corporate Trainer

Hello, I'm Amit,

I'm the founder of an edtech company and a trainer based in India. I have over 10 years of experience in creating courses for students, engineers, and professionals in varied technologies, including Python, AI, Power BI, Tableau, Java, SQL, MongoDB, etc.

We are also into B2B and sell our video and text courses to top EdTechs on today's trending technologies. Over 50k learners have enrolled in our courses across all of these edtechs, including SkillShare. I left a job offer from one of the leading product-based companies and three government jobs to follow my entrepreneurial dream.

I believe in keeping things simple, and the same is reflected in my courses. I love making concepts easier for my audience.

Level: Beginner

Class Ratings

Expectations Met?
  • Exceeded!: 0%
  • Yes: 0%
  • Somewhat: 0%
  • Not really: 0%

Why Join Skillshare?

Take award-winning Skillshare Original Classes

Each class has short lessons, hands-on projects

Your membership supports Skillshare teachers

Learn From Anywhere

Take classes on the go with the Skillshare app. Stream or download to watch on the plane, the subway, or wherever you learn best.

Transcripts

1. About Course: In this video course, learn LangChain and its concepts. LangChain is a framework designed to build applications powered by large language models. It provides tools and abstractions to make it easier to integrate LLMs into applications, enabling tasks like question answering, text generation, RAG, chatbots, and more. In this course, we have covered the following lessons with live running examples. Let's start with the first lesson.

2. LangChain - Introduction, Features, and Use Cases: In this lesson, we will learn what LangChain is. We will also discuss its features as well as its use cases. Let us start. LangChain is a framework designed for developing applications powered by large language models. It simplifies the entire life cycle of LLM applications, from development to deployment and monitoring. It is useful for building context-aware and reasoning-based applications. You can easily integrate LLMs into applications, enabling tasks like text generation, RAG, question answering, and others. By chaining multiple models and processes, LangChain allows a user to build complex workflows. With that, you can easily manage the various components of an AI system using LangChain. Let us first understand how to interpret the name LangChain. It means "Lang" plus "chain": "Lang" stands for large language models, and "chain" is for combining, or chaining, these LLMs. So LangChain is built around LLMs like OpenAI's GPT, Hugging Face models, and others. With LangChain, you can easily chain together multiple steps or components, like retrieving data from a database or document store, processing this data to generate embeddings, generating responses using an LLM, and interacting with APIs and databases. Here are the features of LangChain. You can easily integrate with LLMs; it provides a unified interface to interact with different models. You can create chains of operations in which the output of one step is passed as the input to the next step.
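The chain-of-operations idea just described, where each step's output becomes the next step's input, can be sketched as plain function composition. This is a conceptual sketch only, not the actual LangChain API; the `retrieve` and `generate` functions below are invented stand-ins for a real retriever and LLM call.

```python
# Conceptual sketch: chaining as function composition (not the LangChain API).

def retrieve(query):
    # Stand-in retriever: return documents "relevant" to the query.
    corpus = {"langchain": ["LangChain is a framework for LLM apps."]}
    return corpus.get(query.lower(), [])

def generate(query, documents):
    # Stand-in LLM call: combine the query with the retrieved context.
    context = " ".join(documents)
    return f"Q: {query} | context: {context}"

def chain(query):
    # The output of the retrieve step feeds the generate step.
    return generate(query, retrieve(query))
```

The point of the sketch is only the wiring: each component is unaware of the others, and the chain decides how outputs flow into inputs.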
For example, a chain that retrieves relevant documents and generates an answer using an LLM. LangChain also supports memory. As the name suggests, memory is used to store and retrieve context across interactions. Through this, you can easily build chatbots or applications that require context from previous interactions. LangChain also provides tools for indexing as well as for retrieving documents. You may already know about RAG; RAG is retrieval-augmented generation. LangChain also helps in building RAG systems, like using a vector store to retrieve relevant documents for a query. LangChain also supports agents, which are systems using LLMs to decide what actions to take and in what order, like an agent to ease sales work, or an agent that can interact with external APIs or databases. LangChain also provides tools for prompts so that you can easily manage and optimize prompts. LangChain is modular, which allows developers to mix and match components so that custom workflows can be easily built. Let us now see the use cases of LangChain. If you want to build a system for question answering, you can easily achieve this with LangChain; it will retrieve relevant information and generate answers. You can easily create conversational agents that maintain context across interactions, like a chatbot. You can also implement RAG with LangChain, so that the generated text is more accurate and based on your own documents. You can also summarize text: easily summarize long documents or articles with LangChain. You can easily build tools so that code can be generated. With that, easily create personal virtual assistants for tasks like scheduling, retrieving information, drafting an email, and others. So in this lesson, we saw what LangChain is and its features. We also saw the use cases. Thank you for watching the video.

3. What is Chaining in LangChain: In this lesson, we will understand what chaining in LangChain is. We already saw how to interpret LangChain, as "Lang" plus "chain".
So let us see what a chain is. Chaining in LangChain is a process of combining multiple components or steps into a sequence so that you can fulfill a specific task. Chaining is important so that you can mix and match components to ease the work of creating custom workflows, and easily build systems that use context from previous steps, like chat history. With that, you can also break down difficult tasks into smaller, easier steps. Here is a quick example of chaining in LangChain. Let's say you are building a RAG, that is, retrieval-augmented generation, system. What is the first step? Retrieve: use a retriever to fetch relevant documents. Then generate: pass the relevant documents and a question, that is, a prompt, to an LLM to generate an answer. And what will be the output? The answer will be returned to the user. So the sequence of steps we saw is a chain that combines retrieval and generation into a single workflow. This is the concept of chaining: linking multiple components, like document retrievers, together to create a sequence of operations. So, guys, we saw what chaining in LangChain is. Thank you for watching the video.

4. Components/ Modules of LangChain: In this lesson, we will understand what the components of LangChain are. You can also consider them as the modules of LangChain. To build complex workflows involving LLMs, LangChain provides several components. Let us see them. The first is preprocessing. As the name suggests, it prepares your raw data, like your documents, so that you can use them in LangChain workflows. It includes tasks such as splitting text into chunks, cleaning the data, and generating embeddings. Then come your models. These are your LLMs, that is, large language models, or even the embedding models. You must have heard about GPT and the Hugging Face models; this component also includes your custom fine-tuned models. Then comes the prompts component.
These are your queries or instructions, or consider them as the input, which is given to the LLM to generate responses. These are like the prompts you type on ChatGPT. They can be static or dynamically generated based on context or user input. Then comes memory. As the name suggests, it stores and retrieves things like your chat history. It enables applications to maintain continuity and context awareness. Chains, as we discussed before, combine multiple components into a sequence of steps. The idea is to pass the output of one component as the input to the next, forming a structured workflow. Through this, you can easily build complex workflows like question answering, text generation, summarization, et cetera. Indexes are the tools for organizing and retrieving data efficiently. You can relate this to the everyday meaning of indexes: we use indexes so that we can retrieve data quickly. Here too, they enable fast retrieval of relevant information. Agents are systems that use LLMs to decide actions and interact with external tools or APIs. With these, you can perform tasks like calling APIs, querying a database, and others. So, guys, we saw what the components of LangChain are. In the upcoming lessons, we will discuss them one by one, and after that, we will see a live running example of LangChain.

5. Preprocessing Component of LangChain: In this lesson, we will understand the first component of LangChain, that is, preprocessing. Let us see. As the name suggests, the LangChain process begins with document loaders and text splitters. This can include PDF documents. The loading and preprocessing of data is what the preprocessing component covers. After this, the output is passed to other components like models, indexes, and chains. Generally, document loaders and text splitters are not listed among the LangChain components, but they are still an essential part of the LangChain ecosystem.
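The splitting-into-chunks step mentioned above can be sketched with a naive fixed-size splitter. This is a simplified plain-Python stand-in, not the real implementation: LangChain's RecursiveCharacterTextSplitter splits on natural separators such as paragraphs and sentences rather than fixed offsets.

```python
def split_text(text, chunk_size=1000, chunk_overlap=200):
    """Naive fixed-size splitter with overlapping chunks.

    Simplified stand-in for a real text splitter such as LangChain's
    RecursiveCharacterTextSplitter, which respects natural boundaries.
    """
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap  # how far each new chunk advances
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

The overlap keeps context that straddles a chunk boundary visible to both neighboring chunks, which helps retrieval quality later in the pipeline.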
This eases your work in building workflows. So why should we start with the document loader and text splitter step? This is the data ingestion step: the entry point for bringing your external data into the system. The role of text splitters is to ensure that your data is in the correct format so that it can be easily processed by the model, whether it is an LLM or an embedding model. When you split a document into chunks, it allows for parallel processing and efficient retrieval. With that, you can also achieve scalability: splitting the data allows you to handle large datasets. Guys, we saw what the preprocessing component of LangChain is.

6. Models Component of LangChain: In this lesson, we will understand what the models component of LangChain is. Let us see. Models, as the name suggests, refers to your LLMs, that is, large language models. It can also include your embedding models. These models are responsible for managing, understanding, generating, and transforming text based on the input received. Let us see the types: the models in LangChain can include LLMs, and they can also include embedding models. Some examples of LLMs are the GPT models and Meta's Llama models. They are the brains of LangChain applications, enabling text generation, text summarization, translation, problem solving, and others. They have the capability of generating human-like text. They can also answer questions, summarize content, and perform other operations. Then come your embedding models. These are used to convert your text into numerical representations, that is, embeddings. This is used for tasks like document retrieval and clustering, so that systems can find relevant information quickly. Now let us see why these models are important. They provide the intelligence and are the brain behind LangChain applications, so that tasks like creating content, summarizing text, and question answering can be easily achieved.
By combining both the LLMs and your embedding models, LangChain can be used to build powerful systems. So, guys, we saw what the models component in LangChain is.

7. Prompts Component of LangChain: In this lesson, we will understand what the prompts component of LangChain is. Let us see. Prompts are basically the input queries, instructions, or context you provide to the LLM. Whatever you type on ChatGPT or Copilot is what we call a prompt. A prompt acts as a bridge between the user and the LLM, and it allows the model to interpret and respond to a specific task. What is the role of prompts in LangChain? We all know what a prompt is; we have gotten a lot of answers by typing prompts on Copilot, Claude AI, ChatGPT, Gemini, and many other chatbots. But what is the role of prompts in LangChain? A prompt guides the LLM while answering a question or while summarizing a text. These prompts can also include additional context, like chat history or retrieved documents, so that relevance is enhanced and the responses of the LLM become more accurate. With that, you can also customize your prompts for specific tasks or domains. This makes them highly flexible. So why are prompts important in LangChain? They determine how the LLM interprets and responds to a task. This helps in achieving more accurate and relevant outputs. Well-designed prompts can significantly enhance the performance of LangChain applications and help in complex workflows like RAG, chatbots, and others. So, guys, we saw what prompts are and the role of the prompts component in LangChain.

8. Memory Component of LangChain: In this lesson, we will understand the memory component of LangChain. Let us see. As the name suggests, memory refers to the ability to store and retrieve context across interactions, so that the system remembers previous inputs, outputs, or other relevant information.
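The short-term memory idea, remembering only the most recent exchanges, can be sketched as a bounded conversation buffer. This is a toy illustration, not LangChain's actual memory classes; the class and method names below are invented for the sketch.

```python
from collections import deque

class ShortTermMemory:
    """Toy conversation buffer keeping only the last `max_turns` exchanges.

    Illustrative sketch only; real applications would use a LangChain
    memory component. All names here are invented.
    """
    def __init__(self, max_turns=3):
        # deque with maxlen discards the oldest turn automatically.
        self.turns = deque(maxlen=max_turns)

    def save(self, user, assistant):
        self.turns.append((user, assistant))

    def context(self):
        # Render the remembered turns as text to prepend to the next prompt.
        return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)
```

Long-term memory would differ mainly in persistence: the turns would be written to durable storage and survive across sessions instead of living in a bounded in-process buffer.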
There are two types of memory: short-term and long-term. As the name suggests, if the context is stored for the duration of a single interaction or session, it is called short-term memory, like remembering only the previous question of a user in a chatbot, so that the flow of the conversation is easily maintained. Then comes your long-term memory. It stores context or data across multiple sessions or interactions, like saving the history of what a user types in a chatbot so that personalized responses can be generated. This also helps in saving user preferences. Why is memory important? It allows multi-turn interactions, so that the system can reference previous inputs or outputs. This eases the work of providing more relevant and accurate responses. If you're building conversational agents or personal virtual assistants, then memory is really important. So, guys, we saw what the memory component of LangChain is. We also saw the types.

9. Chains Component of LangChain: In this lesson, we will understand the chains component of LangChain. Chains are sequences of operations or steps that combine multiple components. This is like passing the output of one component as the input to the next, so that a structured workflow is formed. Here are the two types of chains: simple and complex. When each step is executed one after the other, it is called a simple chain. This is a linear sequence; for example, a chain that retrieves documents and then generates a summary. Then come your complex chains. These are non-linear and can include branching, looping, or decision making. For example, a chain that retrieves documents, also evaluates their relevance, and then generates an answer. Why are chains important? They provide a structured way to arrange complex workflows. Through this feature, you can easily build complex applications. They also enable modularity and reusability, so that developers can mix and match components for different use cases.
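The complex-chain pattern, retrieve, evaluate relevance, then branch, can be sketched as a small conditional pipeline. This is a conceptual plain-Python sketch, not LangChain code; every function and score here is an invented stand-in.

```python
# Conceptual sketch of a branching ("complex") chain, all names invented.

def retrieve_scored(question):
    # Stand-in retriever returning a (document, relevance_score) pair.
    corpus = {"what is langchain": ("LangChain chains LLM steps.", 0.9)}
    return corpus.get(question.lower(), ("", 0.0))

def complex_chain(question, threshold=0.5):
    # Branching step: only generate an answer when the document looks relevant;
    # otherwise fall back to a refusal instead of hallucinating.
    doc, score = retrieve_scored(question)
    if score >= threshold:
        return f"Answer based on: {doc}"
    return "Sorry, no relevant documents were found."
```

A simple chain would skip the `if` and always generate; the branch is exactly what makes this chain "complex" in the lesson's terminology.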
So, guys, we saw what the chains component of LangChain is. Thank you for watching the video.

10. Indexes Component of LangChain: In this lesson, we will understand what the indexes component of LangChain is. Let us see. If you want to store data in a way that allows efficient and quick retrieval, you use indexes. In a similar way, indexes in LangChain are tools for organizing and storing data that enable efficient retrieval and search. They make it easier to find relevant information quickly and act as a structured repository for documents and embeddings. Here are the two types of indexes: vector and document. Vector indexes store numerical representations, that is, embeddings of text, to enable semantic search and similarity-based retrieval; an example is the library FAISS, for efficient similarity search over dense vectors. The second type is document indexes. These store raw documents or text chunks, like a database of PDFs or articles indexed by title or topic. Why are indexes important? For retrieval-based tasks, they are really important and enable systems to quickly find and use relevant information. They are also important for building efficient, scalable applications. If you're dealing with large datasets or complex queries, then the indexes component really helps. Guys, we saw what the indexes component of LangChain is. We also saw the types of indexes.

11. Agents Component of LangChain: In this lesson, we will understand the agents component of LangChain. Let us see. Agents are AI-powered systems that use LLMs to decide which actions to take. They act as intelligent decision makers that dynamically interact with tools, APIs, or data sources to achieve a goal. Here are the two types of agents: single-action and multi-action. A single-action agent performs one specific task or action based on the input, like an agent that retrieves weather data from an API.
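The vector-index idea can be sketched with toy bag-of-words "embeddings" and cosine similarity. This is a minimal stand-in for a real vector store such as FAISS with learned embedding models; the class and its tiny embedding scheme are invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": word counts. Real systems use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorIndex:
    """Minimal stand-in for a vector store like FAISS (invented sketch)."""
    def __init__(self):
        self.items = []  # (text, vector) pairs

    def add(self, text):
        self.items.append((text, embed(text)))

    def search(self, query, k=1):
        # Rank stored texts by similarity to the query vector.
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

The structure mirrors what the lesson describes: documents go in as vectors, and queries come back with the most similar stored texts, enabling similarity-based retrieval.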
The second one is multi-action. Multi-action agents can perform a sequence of actions, iteratively, to solve complex problems, like an agent that plans a trip. What is required to plan a trip? Such an agent can book flights, hotels, and even the transport. Why are agents important? They are important because they enable autonomous and intelligent behavior. This allows a system to handle complex multi-step tasks independently, without predefined workflows. If you're building applications that require reasoning, planning, or interaction with external systems, then agents are quite useful. So, guys, we saw what agents in LangChain are. We also saw the types. Thank you for watching the video.

12. LangChain with RAG - Process: In this lesson, we will see the process of LangChain with RAG. We will discuss, step by step, how LangChain with RAG is implemented. Let us begin. We just saw the LangChain with RAG workflow; we will use the same to implement LangChain with RAG. First were the document loaders. These are responsible for loading data from various external sources, like uploading a PDF or multiple PDFs, databases, and others. For that, in our code, we will use the PyPDFLoader to load PDF documents. Then come your text splitters. With these, we break down large documents into smaller chunks so that they can be processed by LLMs. For this, we have used the recursive character text splitter to split documents into chunks. Then come the models. These are the LLMs or embedding models. We have used the HuggingFaceEmbeddings class to set up the embedding model; in our case, we have used sentence-transformers. For the LLM, we have used the HuggingFacePipeline class so that we can set up the local LLM. In this case, we have used the following. Then come your prompts; these are the input queries or prompts used to interact with the LLM, as we already discussed.
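The prompt's role in this pipeline, a reusable instruction with slots for the retrieved context and the user's question, can be sketched with a plain format string. This is illustrative only: LangChain provides a PromptTemplate class for this, and the template text below is invented.

```python
# Illustrative sketch of a prompt template; LangChain's PromptTemplate
# plays this role in real applications. The template wording is invented.
QA_TEMPLATE = (
    "Use the following context to answer the question.\n"
    "Context: {context}\n"
    "Question: {question}\n"
    "Answer:"
)

def build_prompt(context, question):
    # Fill the slots, producing the final string that would be sent to the LLM.
    return QA_TEMPLATE.format(context=context, question=question)
```

In a RAG chain, the retriever supplies `context` and the user supplies `question`, so the same template serves every query.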
For that, we have created a custom function, answer_question, which takes a question as input and acts as the prompt for the QA chain. Then comes your memory. This is to store intermediate data or context. In this implementation, we have not used memory, but the vector_store and document_chunks can be considered part of memory. Chains: this component refers to the sequence of operations. The RetrievalQA chain is used in our code to combine the retriever and the LLM. Indexes: the FAISS vector store is used to index the document chunks. This enables efficient retrieval over the documents. Agents: this component refers to the decision-making entity that interacts with the system. In our code, the LocalRAGSystem class acts as a high-level agent, planning our workflow. So, guys, we saw the process. Now, in the last lesson, we will see the coding example and run a LangChain with RAG system on Google Colab.

13. LangChain with RAG - Final Coding Example: In this lesson, we will see the final coding example to implement LangChain with RAG. We will see the code and also understand the steps in the form of code snippets. In the end, we will also run the code. We will upload more than one PDF and summarize them using our LangChain with RAG system. Let us start. Here is our Google Colab; we created a new notebook, and here is our code. So let us understand the code. First, we will install the necessary packages. I hope you know that in Google Colab, we use the following command to install any library or package from the Python Package Index. Here we are installing the necessary packages like langchain and sentence-transformers. The code then imports various modules and classes from these packages. The imported modules enable functionalities such as text splitting, document loading, embeddings, and the language model. Now let us set up logging, here in step two.
This code is used to configure the logging module in Python. The first line sets the basic configuration for the logging module, and the following line creates a logger instance: getLogger returns a logger instance, and __name__ is a built-in Python variable that holds the name of the current module. The configuration sets the logging level to INFO, meaning that all log messages with a level of INFO or higher will be processed. Now we will create a class representing a local RAG system, and initialize the LocalRAGSystem object in __init__. Here, documents is a list to store the loaded documents, and vector_store stores the embedded documents. Now we'll upload PDFs from the local machine to Colab. The files.upload function is used for this. The uploaded file names are logged and returned. Any exceptions during upload are caught, logged, and re-raised. The method is part of the class and uses the logger for its messages. It returns a list of uploaded PDF paths. Then comes your load_documents function. This will load and parse the PDF documents from the given file paths. It iterates through each PDF file path, attempts to load the document using PyPDFLoader, and appends the loaded pages to the self.documents list. Any exceptions during loading are caught and logged, and the method continues with the next file. The number of loaded pages for each file and the total number of pages are logged; the method updates the self.documents list with the loaded document pages. The next step is to split the documents into chunks. This method splits the loaded documents into chunks using RecursiveCharacterTextSplitter. The splitter divides the documents into chunks, which are stored in self.document_chunks.
It takes optional parameters, chunk_size and chunk_overlap; the defaults are 1,000 and 200, respectively. The number of chunks created is logged. Next step: set up the embedding model. This code sets up the embedding model using Hugging Face transformers. It takes an optional model_name parameter, and the chosen model is used to create a HuggingFaceEmbeddings instance. The setup process is logged, including the model name. Any exceptions during setup are caught, logged, and re-raised. In the next step, we will create a vector store from the document chunks using FAISS. It uses the previously set up embedding model to generate vectors from the document chunks; the FAISS.from_documents method creates the vector store. The creation process is logged, and any exceptions are caught, logged, and re-raised. In the next step, we will set up a local LLM using Hugging Face. This code sets up a local large language model using Hugging Face transformers. It loads a pre-trained model and tokenizer using the specified model_id. A text-to-text generation pipeline is created with the loaded model and tokenizer. The pipeline is wrapped in a HuggingFacePipeline instance and stored in self.llm. The setup process is logged, and any exceptions are caught, logged, and re-raised. Now let us create a QA chain using the vector store and LLM. This code sets up a QA, that is, question answering, chain. It creates a RetrievalQA instance with the specified LLM and the vector store as the retriever. The retriever is configured to return the top k results. The QA chain setup process is logged with the value of k; any exceptions during setup are logged and re-raised. Now we will answer a question using the RAG system. The following code defines a method to answer a question using the RAG system.
It takes a question as input and uses the qa_chain to generate an answer; the question and answer are logged for tracking purposes, and the answer is returned by the method. Any exceptions during the answering process are caught, logged, and re-raised. Now let us run the complete setup process. This code defines a method, run_setup, to execute the complete setup process for the RAG system. It calls the various methods in sequence: upload PDFs, load documents, split documents, set up embeddings, create a vector store, set up a local LLM, and set up a QA chain. The method takes optional parameters to customize the setup process. The completion of the setup process is logged, and any exceptions during setup are caught, logged, and re-raised. Now, here is the example usage. This creates an instance of the LocalRAGSystem and runs its setup process. The setup configures the system with specific parameters, such as the chunk size and the language model. After the setup, the code asks two questions, the first one here and the second one here, and prints the answers. The questions demonstrate the system's ability to understand the main topic and summarize key points from the documents. The answers are successfully generated using the RAG system. So here is what I'll do: I'll just run the complete code. You can also select Runtime from here, select Change runtime type, select T4 GPU, and click Save. Now I'll run it. Let's wait. Now, let us upload the documents: click Choose Files. Let's say I'll upload amit sample.pdf and Python certification, and click Open. Now we have uploaded two documents. It will now extract the result. It is displaying the main topics, and it has summarized the key points from the documents. So in this way, guys, we can work with LangChain. We saw a LangChain with RAG example. Thank you for watching.
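The run_setup orchestration described above can be sketched as a class skeleton in which each step is stubbed out, so only the sequencing is visible. The method names follow the lesson's description but are invented here; this is not the course's actual implementation.

```python
class LocalRAGSystem:
    """Skeleton of the setup sequence from the lesson; every step is a stub
    that records its name, so the orchestration order is visible. Method
    names are assumptions based on the lesson's description."""
    def __init__(self):
        self.steps_done = []

    def upload_pdfs(self):          self.steps_done.append("upload")
    def load_documents(self):       self.steps_done.append("load")
    def split_documents(self, chunk_size=1000, chunk_overlap=200):
        self.steps_done.append("split")
    def setup_embeddings(self):     self.steps_done.append("embeddings")
    def create_vector_store(self):  self.steps_done.append("vector_store")
    def setup_llm(self):            self.steps_done.append("llm")
    def setup_qa_chain(self, k=3):  self.steps_done.append("qa_chain")

    def run_setup(self, chunk_size=1000, chunk_overlap=200):
        # The same order the lesson walks through, ending with a ready QA chain.
        self.upload_pdfs()
        self.load_documents()
        self.split_documents(chunk_size, chunk_overlap)
        self.setup_embeddings()
        self.create_vector_store()
        self.setup_llm()
        self.setup_qa_chain()
```

The ordering matters: each step consumes what the previous one produced (files, pages, chunks, vectors), so swapping any two stages would break the pipeline.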