The Complete AI Prompt Engineering Masterclass for Everyone | Shaik Saifulla | Skillshare
The Complete AI Prompt Engineering Masterclass for Everyone

Shaik Saifulla, AI Prompt Engineer & App Developer

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

    • 1.

      Introduction to Prompt Engineering Masterclass

      10:01

    • 2.

      1.1 What is Prompt Engineering?

      6:14

    • 3.

      1.2 Prompt Design vs Prompt Engineering

      12:16

    • 4.

      1.3 Basics of AI Large Language Models (LLMs)

      6:11

    • 5.

      1.4 How LLMs Process Prompts?

      10:34

    • 6.

      1.5 Applications of Prompt Engineering

      9:45

    • 7.

      2.1 Basic Components of Prompt

      9:48

    • 8.

      2.2 Types of Prompts

      8:27

    • 9.

      2.3.1 Basic Prompt Patterns: 1. Zero-shot Prompting

      4:31

    • 10.

      2.3.2 Few-shot Prompting

      6:48

    • 11.

      2.3.3 System Instruction Prompting

      6:10

    • 12.

      2.3.4 Role-playing Technique Prompting

      9:30

    • 13.

      3.1 Structuring Prompts for Optimal Output

      10:43

    • 14.

      3.2 Iterative Prompting

      12:46

    • 15.

      3.3.1 Context Management - Part 1

      5:16

    • 16.

      3.3.2 Context Management - Part 2

      10:03

    • 17.

      4.1 Prompt Optimization

      8:51

    • 18.

      4.2.1 Advanced Prompt Patterns (Part 1) - 1. Ask for Input Pattern

      17:55

    • 19.

      4.2.2 Persona Prompt Pattern

      16:15

    • 20.

      4.2.3.1 Question Refinement Prompt Pattern - Part 1

      13:57

    • 21.

      4.2.3.2 Question Refinement Prompt Pattern - Part 2

      12:40

    • 22.

      4.2.4.1 Cognitive Verifier Prompt Pattern - Part 1

      13:58

    • 23.

      4.2.4.2 Cognitive Verifier Prompt Pattern - Part 2

      16:39

    • 24.

      4.2.5 Outline Expansion Prompt Pattern

      23:50

    • 25.

      4.3.1 Advanced Prompt Patterns (Part 2) - 1. Tail Generation Prompt Pattern

      14:22

    • 26.

      4.3.2.1 Semantic Filter Prompt Pattern - Part 1

      8:56

    • 27.

      4.3.2.2 Semantic Filter Prompt Pattern - Part 2

      12:12

    • 28.

      4.3.3 Menu Actions Prompt Pattern

      13:28

    • 29.

      4.3.4 Fact Check List Prompt Pattern

      15:00

    • 30.

      4.3.5 Chain of Thought Prompt Pattern

      16:09

    • 31.

      5.1.1 Prompt Chaining - Part 1

      9:45

    • 32.

      5.1.2 Prompt Chaining - Part 2

      19:40

    • 33.

      5.2.1 Prompt Engineering Applications & Use Cases

      3:56

    • 34.

      5.2.2 Initial Prompt Setup - Helpful Assistant

      9:22

    • 35.

      5.2.3 Writing Effective Prompts for Different Use Cases - Part 1

      4:58

    • 36.

      5.2.4 Writing Effective Prompts for Different Use Cases - Part 2

      14:54

    • 37.

      5.2.5 How to Write Advanced Image Prompts using ChatGPT

      5:50

    • 38.

      5.2.6 How to Write Advanced Text Prompts using ChatGPT

      15:06

    • 39.

      5.3 AI Ethical Considerations

      10:54

    • 40.

      5.4.1 Understanding Different LLMs' Pros & Cons

      6:29

    • 41.

      5.4.2 Understanding ChatGPT Capabilities with Use Case 1

      7:15

    • 42.

      5.4.3 Capabilities of Gemini, Claude, Perplexity & Copilot with Use Case 1

      8:37

    • 43.

      5.4.4 Understanding ChatGPT Capabilities with Use Case 2

      7:05

    • 44.

      5.4.5 Capabilities of Gemini, Claude, Perplexity & Copilot with Use Case 2

      15:02

    • 45.

      5.4.6 Capabilities of DeepSeek, Grok AI, Qwen Chat and Mistral AI with Use Cases - Part 1

      14:22

    • 46.

      5.4.7 Capabilities of DeepSeek, Grok AI, Qwen Chat and Mistral AI with Use Cases - Part 2

      19:12

    • 47.

      5.5.1 How to Use Different LLMs to Write Effective Prompts?

      8:33

    • 48.

      5.5.2 How to Use ChatGPT for Writing Advanced Prompts - Part 1

      12:34

    • 49.

      5.5.3 How to Use ChatGPT for Writing Advanced Prompts - Part 2

      11:57

    • 50.

      5.5.4 How to Use Gemini, Claude, Perplexity & Copilot to Write Effective Prompts

      17:11

    • 51.

      5.5.5 How to Use DeepSeek, Grok AI, Qwen Chat and Mistral AI for Effective Prompts

      17:19

    • 52.

      5.6.1 Prompt Engineering Tools - OpenAI Playground Parameters Part 1

      15:20

    • 53.

      5.6.2 OpenAI Playground Parameters Part 2

      5:36

    • 54.

      5.6.3 OpenAI Playground Parameters Part 3

      6:54

    • 55.

      5.6.4 OpenAI Playground Parameters Part 4

      5:16

    • 56.

      6.1 The Future of Prompt Engineering

      16:21

    • 57.

      6.2.1 Prompt Engineering Opportunities

      6:53

    • 58.

      6.2.2 Career Opportunities in Prompt Engineering

      9:37

    • 59.

      6.2.3 How to Find Jobs & Freelancing Sites for Prompt Engineering

      12:24

    • 60.

      6.2.4 How to Prepare for Future Opportunities as a Prompt Engineer

      4:10

    • 61.

      6.2.5 Basics of Fine-Tuning and RAG

      8:49

    • 62.

      6.2.6 What is Retrieval Augmented Generation (RAG)

      7:08

    • 63.

      6.2.7 Fine Tuning vs RAG

      12:35

    • 64.

      6.3.1 Overview of GenAI

      12:59

    • 65.

      6.3.2 Role of Prompt Engineer in GenAI

      10:04

    • 66.

      6.3.3 Applications of GenAI Prompt Engineering

      10:36

    • 67.

      6.3.4 Impact of Prompt Engineers on GenAI Success

      13:43

    • 68.

      Final Thoughts

      3:05


Community Generated

The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected.

106

Students


About This Class

Master AI chatbots like ChatGPT, Gemini, Claude, Perplexity, DeepSeek, Grok, Copilot, Qwen Chat, and more.

Unlock the power of Generative AI with Prompt Engineering, the essential skill for mastering AI tools like ChatGPT, Gemini, Claude, Perplexity and Microsoft Copilot. In this comprehensive course, you’ll learn how to craft effective prompts that yield clear, accurate, and creative outputs from cutting-edge AI models. Starting from the basics and progressing to advanced techniques, this course equips you with the tools to optimize AI for content creation, automation, business innovation, and beyond.

What You Will Learn

By enrolling in this course, you will gain:

  • A deep understanding of prompt engineering basics, including zero-shot, few-shot, and chain-of-thought prompting.

  • Advanced skills in leveraging prompt patterns, such as persona creation, semantic filtering, and cognitive verifier techniques.

  • Insights into the strengths and limitations of leading AI models like GPT-4, Gemini, Claude, Perplexity, Microsoft Copilot, DeepSeek, Grok AI, Qwen Chat, and Mistral AI, and how to tailor prompts for each.

  • Hands-on experience using AI tools like the OpenAI Playground and the prompt chaining method to refine and optimize prompts.

  • Techniques to analyze, compare, and fine-tune outputs from multiple models for clarity, precision, and engagement.

  • The ability to stay ahead of trends in the evolving Generative AI landscape, from ethical considerations to industry-specific applications.

  • How to use ChatGPT, Gemini, Claude, Perplexity, Copilot, DeepSeek, Grok, Mistral, and Qwen Chat to write text and image prompts for our use cases.

  • How to find freelancing projects and jobs in the prompt engineering field.

  • Access to the full course document and prompts.
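As a taste of the basic patterns listed above, here is a rough sketch of the difference between a zero-shot and a few-shot prompt, built as plain strings. This is my own illustration, not course material; the classification task and example texts are made up.

```python
# Sketch: zero-shot vs few-shot prompts as plain strings.
# The task and the worked examples below are invented for illustration.

def zero_shot_prompt(task, text):
    """A zero-shot prompt states the task with no examples."""
    return f"{task}\n\nText: {text}\nAnswer:"

def few_shot_prompt(task, examples, text):
    """A few-shot prompt prepends worked examples so the model can
    infer the expected format and style before answering."""
    shots = "\n".join(f"Text: {t}\nAnswer: {a}" for t, a in examples)
    return f"{task}\n\n{shots}\n\nText: {text}\nAnswer:"

task = "Classify the sentiment of the text as Positive or Negative."
examples = [("I loved this class!", "Positive"),
            ("The audio was unbearable.", "Negative")]

print(zero_shot_prompt(task, "Clear and well paced."))
print(few_shot_prompt(task, examples, "Clear and well paced."))
```

The only difference between the two is the block of worked examples; in practice the few-shot version tends to produce more consistently formatted answers.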

Why You Should Take This Course

In today’s fast-paced digital world, generative AI is revolutionizing industries, from marketing and content creation to research and product development. But to truly harness its power, you need to communicate effectively with these AI models. This skill can help you land a high-paying job in the AI era, or earn a promotion at your company. By the end of this course you will unlock your potential to build things with AI, even without a background in technology.

No prior experience with AI is required. Whether you’re a tech enthusiast or completely new to the field, this course will guide you from beginner to advanced levels with clarity and support.

Enroll Now

Don’t miss the opportunity to master the skill that’s shaping the future. Join us and become an expert in Prompt Engineering—your gateway to unlocking the full potential of Generative AI!

 

Meet Your Teacher


Shaik Saifulla

AI Prompt Engineer & App Developer

Teacher

Hello, I'm Shaik.


Level: Beginner

Class Ratings

Expectations Met?
  • Exceeded! 0%
  • Yes 0%
  • Somewhat 0%
  • Not really 0%

Why Join Skillshare?

Take award-winning Skillshare Original Classes

Each class has short lessons, hands-on projects

Your membership supports Skillshare teachers

Learn From Anywhere

Take classes on the go with the Skillshare app. Stream or download to watch on the plane, the subway, or wherever you learn best.

Transcripts

1. Introduction to Prompt Engineering Masterclass: Hi, and welcome to the Prompt Engineering Masterclass. I'm Shaik Saifulla, a freelance prompt engineer with a year of experience in the field. I have also worked as a prompt engineer for an AI company's client, and I am an app developer. In this course we are going to learn what prompt engineering actually is. We are in a new era of AI, with many models available, such as ChatGPT, Grok AI, Claude, Gemini, and DeepSeek, and more arriving all the time. To get the most out of AI, meaning the best possible output, we need the skill of prompt engineering. Every industry, from education to marketing to business, is looking to transform itself with AI, because these models are trained on large amounts of data and can save enormous time in automation and content creation. That is why we need to know how to use AI models effectively, in both daily life and professional life.

In this course we will explore nine different AI models: ChatGPT, Gemini, Claude, Perplexity, Microsoft Copilot, DeepSeek, Grok, Qwen Chat, and Mistral AI. We will go from basic to advanced: the basic components of a prompt, how to write prompts, the formula for writing the best prompt, and more than ten different prompt patterns, including advanced patterns you can use to automate work directly in chat. I'm very excited to share my learnings and experience with you as a prompt engineer. We will also explore how to use these nine AI models to write the best prompts for us: not only for tasks like writing content and email copy, but for generating the prompts themselves, which is very interesting. We will explore ChatGPT in depth with the OpenAI Playground, and we will look at the opportunities you have as a prompt engineer after this course: how to find jobs, freelancing projects, and gigs, and the role of the prompt engineer in Generative AI. The main purpose of this course is to unlock the potential of AI models so they can help you build something, even without a technical background.

This course is built on the skills companies actually require of prompt engineers. Not the technical side, but using the AI models: crafting the best prompts for different LLMs, evaluating and testing the output, and checking which LLM best solves a particular task. I explain all of this step by step: how to write the best prompts for different use cases and different LLMs, how to test the models, and the capabilities and functionalities of nine different AI models, so you can choose the right LLM for your specific task. Trust me, after this course you will have hands-on experience.

The course is divided into six modules. For each module you will get resources and an assessment, and after completing all the recorded videos you will get access to the full course document, in which I have written out everything I explain in these videos step by step, from basic to advanced, with explanations, examples, and insights. You can use it for reference after watching the videos. I have explained everything slowly, so that even a beginner can understand each point; if you already know prompt engineering and are comfortable, feel free to increase the playback speed to 2x.

Follow every video and don't skip anything. After completing all the recorded videos, you will also get the final project. Both this course and its final project are designed around real company and job requirements for prompt engineers. I'm confident that if you follow all the lectures and practice well, you will be ready to apply for prompt engineering jobs, without the technical part, purely through writing prompts. After the recorded videos, please go through the final project and practice it for different applications, following all the steps I have given, because those steps are based on company requirements. If you practice them well and complete the final project, you will understand how to write the best prompts and how to evaluate, compare, and optimize them.

Good luck. Go through every lecture, practice well, complete all the assignments, and take help from the resources and the full course document. Let's start with module number one: Introduction to Prompt Engineering.

2. 1.1 What is Prompt Engineering?: What is prompt engineering, why is it important, and what are its applications? We will cover all of that in this module. If you are a beginner and have no idea what prompt engineering is, there is no problem: we will go from the very beginning, covering the definitions and all the basic terms and foundations. Module one lays the groundwork: I will explain what prompt engineering is, why it is important, and how it works with AI language models like ChatGPT and Claude, which are called LLMs. We will also explore its applications and discuss what makes a prompt effective.

When I was researching this topic online, I heard many YouTube gurus and influencers saying that simply writing a prompt is prompt engineering, but that is not the case. I have analyzed many company job descriptions for AI prompt engineering roles and the skills those companies want a candidate to have.
After analyzing these job descriptions, I realized that real prompt engineering is different from what the YouTube gurus describe; it is not simply writing a prompt. Don't worry: this course is built around those job descriptions and the skills companies want a prompt engineer to have, so if you complete the whole course and practice the assignments, you will be ready for the job. We will cover all of this in the upcoming modules.

So let's focus first on what prompt engineering actually is. A short definition: crafting precise instructions for AI language models. A prompt is essentially a question or instruction, but what does the "engineering" part mean? We have different kinds of engineering, such as civil, electrical, and mechanical; here, the engineering lies in how the prompt is written. The detailed definition: prompt engineering is the art and science of crafting instructions or queries (queries meaning prompts) to interact effectively with AI language models like ChatGPT, Claude, Gemini, and others. For example, think of it as a conversation with an AI: the better you express what you want, the better the AI's response will be. If you go to ChatGPT and write a prompt, the more clearly you express your idea and the content you want, the better the response, which is why we have to know how to write prompts effectively.

Before going deeper, let's look at the key purpose. Why did prompt engineering arise when AI models are already smart enough? Simply, the key purpose is to improve the quality and relevance of AI responses. Large language models are trained on huge amounts of data, not on any one writer's specific material, so they can respond by combining everything they have seen, almost at random. If you know prompt writing techniques, patterns, and how to ask the AI questions effectively, the AI can provide a much better response. In improving the quality of the generated output, prompt engineering plays a major role. That is why prompt engineering exists.

3. 1.2 Prompt Design vs Prompt Engineering: Most people say that writing a simple prompt is prompt engineering, but there is quite a difference between prompt design and prompt engineering. Let's dive into that. Prompt design and prompt engineering might seem similar, but they serve different purposes. Prompt design involves writing basic instructions or questions for a language model. It is about creating a prompt that gets the AI to respond, but not necessarily one tailored to a specific application. Let's look at prompt design first.
It is a simple question that we ask the AI. The example here: "Write a poem about nature." There is no reasoning in it, no detailed instructions, and no goal beyond the request itself; it is a straightforward, direct question. That is simple prompt design. Prompt engineering, on the other hand, is a more advanced approach, where the prompt is optimized for a specific application or outcome. It involves crafting detailed instructions that align with the unique capabilities and limitations of the AI. The example here: "Compose a rhyming poem about nature in the style of William Wordsworth." This is a direct request with added reasoning that challenges the model: we don't just want a poem about nature, we want it in the style of William Wordsworth. Do you see the difference between these two prompts? Prompt design is a simple, straightforward question; prompt engineering adds reasoning and extra detailed instructions for a specific application. The engineered prompt not only gives the AI clear direction but also leverages its ability to emulate literary styles, so it produces a better response than the plain design. The difference is small but real: a question asked directly, without detailed instructions, is prompt design.
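The design-versus-engineering contrast can also be sketched in code. This is my own illustration, not part of the course; the helper function and its parameters are hypothetical, and only the two poem prompts come from the lesson.

```python
# Sketch: "engineering" a plain request by attaching a style and explicit
# constraints. The function and parameter names are invented for illustration.

def engineer_prompt(base_request, style=None, constraints=None):
    """Turn a plain 'prompt design' request into an engineered prompt
    by attaching a target style and explicit requirements."""
    prompt = base_request
    if style:
        prompt += f" in the style of {style}"
    if constraints:
        prompt += ". Requirements: " + "; ".join(constraints) + "."
    return prompt

designed = "Write a poem about nature"                      # prompt design
engineered = engineer_prompt("Compose a rhyming poem about nature",
                             style="William Wordsworth")    # prompt engineering
print(designed)
print(engineered)  # → Compose a rhyming poem about nature in the style of William Wordsworth
```

The point of the sketch is that the engineered version is the same request plus deliberate, application-specific detail, which is exactly what separates it from plain prompt design.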
But prompt engineering means giving detailed instructions and extra information in the prompt itself, for a specific application. Don't be confused: it becomes easy with practice, and you will see it clearly in the upcoming classes.

Now, why is prompt engineering important? (Don't worry, all the slides and documents I explain from will be provided to you after the course.) LLMs, large language models like ChatGPT, Claude, and Gemini, are powerful because they use natural language processing techniques and are trained on large amounts of data. They can be very helpful for people and businesses, making workflows easier and faster in every industry. So why is prompt engineering important? Because writing good prompts is important. If you write just any prompt, it does not count as prompt engineering; writing good prompts, which help build a specific application or get specific data from the AI, is crucial. Let's see what good prompts lead to and what bad prompts lead to. If your prompt is poor, the AI can generate confused or irrelevant responses; if you don't provide detail or background information for your specific application, it can generate irrelevant or inaccurate data. If you write a good prompt, the AI can generate an accurate, meaningful response, because prompt engineering targets a specific application. I'm not talking about prompt design here; prompt engineering is for specific applications.

For example, suppose you are creating content in the fitness and health niche and want content that is accurate for that niche. If your prompt has no background information about what you specifically want, it can lead to irrelevant responses or inaccuracies. Good prompts include detailed instructions, any background information the AI could not know on its own, and a clear idea of the output you want. All of that belongs to the prompt writing skill within prompt engineering. Good prompts lead to relevant, accurate AI responses, and prompt engineering is mainly used for complex tasks.

There may still be a question in your mind: why is prompting important? Many companies have already started using LLMs in their workflows, and some are training AI models on their own data to automate work or help employees be more productive. AI is taking over some tasks, but it is also generating new jobs, and prompt engineering is a good career opportunity among them. In the coming years, LLMs will be used across every industry, and wherever LLMs are used, a prompt engineer plays a major role in controlling the AI and generating useful output from it.

One caveat: prompt engineering alone is only beneficial when you also have a specific domain skill. For example, if you know how to code in Python, you can use prompt engineering to write basic code and make your work easy, fast, and reliable. If you don't know Python and just ask ChatGPT to write code, it will generate something, but you won't know where the code is wrong or inaccurate. You need some knowledge of the underlying skill for prompt engineering to truly benefit you; otherwise it can lead to inaccuracies. This is why learning prompt engineering is so important now and for the coming years: we are already in the AI era, and if you know how to use AI, you can do all of this.

In this lesson we learned what prompt engineering is, the difference between prompt design and prompt engineering, and why prompt engineering is important. In the next lesson we will learn the basics of large language models, such as ChatGPT, and how LLMs process prompts.

4. 1.3 Basics of AI Large Language Models (LLMs): Welcome back to the second lesson of module one. In this lesson we are going to learn some basics of AI language models, plus some extra information: how LLMs process prompt data, examples of good and bad prompts, applications of prompt engineering, common reasons why prompts fail, and some solutions for them. Let's start from scratch with the basics of language models. LLM means large language model. Some examples: Gemini, developed by Google; Leonardo AI, an image generation tool that is also considered an LLM here; ChatGPT by OpenAI; Perplexity AI; Claude; and Midjourney. There are more, such as Microsoft Copilot and other AI tools; you can easily find them online. So what are LLMs? The definition: AI systems trained on large datasets to understand and generate human-like text. The best example of an LLM is ChatGPT: if you have used it, you know it can generate responses as if a human were texting with you. These are large language models.
To understand how prompt engineering works, we first need to understand language models — how LLMs are trained and how they process data. We won't go deep into the technical side, because prompt engineering is simply the art of writing better prompts; that is our main topic. Language models like GPT-4, Claude, and Gemini are AI systems trained on massive amounts of text data. They learn patterns in language, which enables them to generate human-like text in response to your prompts. A prompt is the question or query you ask the LLM; the response is the output it generates — you already know that. Now let's see how they work. You can see a simple one-line diagram here of how a basic LLM operates. First, when you write any prompt in ChatGPT or another LLM, it analyzes the input — your prompt, your question, your query. After that, it recognizes patterns, because every LLM is trained on data full of patterns. Finally, it generates an output. It is a simplified three-step diagram I drew for your better understanding; there is much more technical detail behind each step, but as a prompt engineer this is the level you need to know. Now the second thing: how do LLMs process prompts? When you provide a prompt, the language model analyzes it word by word, looking for patterns, context, and intent.
How does it process a prompt? Looking at the diagram: first it analyzes the input word by word — you write sentences, the sentences are made of words, and it examines each word. After that, it recognizes patterns, context, and intent: what is the user's actual intent? It works that out, and then it generates a response based on what it learned during training. Listen carefully: it generates output based on what it has learned during training. The quality of the output depends on how clearly the input — your prompt — conveys your intent. I hope you now understand well how LLMs work. 5. 1.4 How LLM's Process Prompts?: Let's look again at how LLMs process prompts — a prompt being the question or query you ask. When you provide a prompt, the language model analyzes it word by word, looking for patterns, context, and intent. It determines what the user actually wants, and then generates a response based on what it learned during training. Again: the quality of the output depends on how clearly your prompt conveys your intent. It's that simple.
First, it analyzes your input word by word; then it recognizes the patterns; and lastly, it generates the output based on what it learned during its training. From this we can conclude that the AI only generates output based on what it was trained on, and that the quality of the output depends on the quality of the prompt you write and give to the AI model. You can see a helpful analogy here: AI as a chef, prompts as recipes. Remember, prompt engineering is about a specific application — so in the analogy, the AI is the chef (one specific application), and your prompts are the recipes. You are asking the chef for a specific dish, and your prompts are like the recipe instructions. So what is a good prompt? One with specific data and detailed instructions. And a bad prompt? One that is ambiguous and vague — vague meaning unclear or irrelevant information that can lead to inaccuracies in the responses. Before going into the examples, I will explain this a bit deeper, because it clears up a fundamental point. In the analogy, think of the AI working as a chef. The chef knows thousands of recipes — and those thousands of recipes are the patterns the model learned in training.
Understood? The thousands of recipes are the patterns, but the chef still needs clear instructions to cook the dish you want — and those instructions are the prompt. You have to think of it like this: the AI as chef already knows thousands of recipes, but you want one specific dish, so you write the prompt for that specific dish. That is prompt engineering. The patterns are what the chef — the AI — already knows; the prompt is you asking for the specific recipe you want. It's that simple. This analogy also shows the difference between prompt design and prompt engineering. If you ask a straightforward question like "Make me a meal," that falls under the prompt design category. When you focus on prompt engineering, you add reasoning and detail: "Make me a vegetarian lasagna with extra cheese, cooked for 30 minutes." Here I am writing the prompt in detail — how much time, and exactly what I want from the AI chef. Giving detailed instructions like that comes under prompt engineering; the plain, straightforward question is prompt design. It's as simple as that. Now let's see some examples of good and bad prompts. A bad prompt example: "Explain climate change." It is simple and straightforward — you can think of it as prompt design.
At the level of prompt design, "Explain climate change" is a good prompt. But when you come under prompt engineering — where you think about reasoning and write instructions for a specific purpose — that same prompt counts as a bad prompt. Please keep that in mind. So the bad prompt is "Explain climate change": straightforward, with no reasoning in it. Now look at the good prompt, with detailed instructions: "Explain the causes and effects of climate change in simple terms, suitable for a 10-year-old." That's a much better question, right? Some people don't know the technical terms. If you ask the bad version directly, the model may answer with specialized vocabulary you have never heard in your life. If you give the good version instead, the AI will analyze it and generate an explanation pitched at a 10-year-old — which means you can easily understand the causes and effects of climate change, too. That is the difference between a bad and a good prompt. Why does it matter? Because clear prompts produce better, more effective outputs. With the bad prompt you might still understand some of it, but LLMs are trained on huge amounts of data and will happily use advanced words; if you don't know their meaning, how can you understand the output? If you write detailed instructions — how you want the output, in which style, at what length — the model generates output you can actually understand. It all comes down to writing good prompts instead of bad ones.
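The bad-versus-good contrast above can be captured in plain strings. As a hedged sketch (the `enrich_prompt` helper and its format are my own, not from the lesson), a vague task becomes a "good" prompt once you bolt on a style and an audience:

```python
# The lesson's bad vs. good prompt, as plain strings.
bad_prompt = "Explain climate change."
good_prompt = ("Explain the causes and effects of climate change "
               "in simple terms, suitable for a 10-year-old.")

def enrich_prompt(task: str, audience: str = "", style: str = "") -> str:
    """Hypothetical helper: add style and audience constraints to a bare task."""
    parts = [task.rstrip(".")]
    if style:
        parts.append(f"in {style}")
    if audience:
        parts.append(f"suitable for {audience}")
    return ", ".join(parts) + "."

print(enrich_prompt("Explain climate change",
                    audience="a 10-year-old",
                    style="simple terms"))
```

The design point is simply that audience and style are separate, reusable fields — the same bare task can be re-targeted at any reader.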
These examples of good versus bad prompts will come up again in other topics. 6. 1.5 Applications of Prompt Engineering: So let's see some applications of prompt engineering. As I said earlier, it can be used everywhere AI LLMs are used. Let's look at some industries where, right now and in the coming years, LLMs are used and prompt engineering is crucial, playing a vital role in those companies' AI use cases. You can see the examples I have mentioned here: education, healthcare, content creation, programming, and automation. From an education point of view, there are adaptive learning tools, and we can generate content structures — outlines for writing documents and course material for students. In healthcare, we can generate patient communications and streamline workflows. In content creation, prompt engineering plays a major role, because it can write blogs, marketing copy, and emails as fast as possible — it is excellent for content creation. It can help in programming as well: current versions of ChatGPT can solve most common coding problems, help with debugging, and generate good code snippets. It can be very effective and save a lot of time on writing basic code. These are just a few industries I've mentioned, but the list is not limited to them. Prompt engineering is very important for every industry where LLMs are used. I hope you understand all of this — prompt engineering will be one of the most valuable skills to learn, now and in the future.
There are other industries too; simply put, wherever AI tools are used, prompt engineering takes place. Some companies have already started hiring prompt engineers. This course is based on those companies' prompt engineering job descriptions: I created it after analyzing what companies actually ask for — what prompt engineering really is, and which skills a candidate needs to become a prompt engineer there. The whole course is built around that, so please work through all of it; it will help you become a good prompt engineer anywhere. Next, let's look at the second topic: why do prompts fail? As I said, there are good and bad prompts. A prompt fails when it lacks background information, context, detail, or reasoning — like the bad prompt we saw. "Fail" means the output is not efficient: it has inaccuracies and mistakes. That's why prompts fail — nothing more mysterious than that. Here are the common issues. First, ambiguity: missing clarity or intent. If you don't provide a clear intent to the AI, the output will be inaccurate and unclear. Second, lack of context: no background provided. When you write a prompt for a specific application, you need to provide extra information that supports the main intent. Context means the background information — the surrounding sentences — you provide to the AI along with your question.
If there isn't enough context for the AI to analyze, it will generate an irrelevant response for your specific application — and that's why the prompt fails. Third, over-complexity. What does over-complexity mean? Simply overloading the prompt with unnecessary details. When you write a prompt for a specific application, keep in mind that you should give the AI only the required data. If you give it unnecessary data, it will try to combine everything and generate output that isn't relevant to your specific application — so the prompt fails. What is the best solution? Refining prompts to be clear, specific, and concise. We will talk about refining in detail in the upcoming modules, but in short: you write a first prompt, it generates a basic output, you analyze that output, and then you write a second prompt with the adjustments you want to the previous output. That is refining prompts — clear, specific, concise. That's it. It is not complex, and it becomes easy once you start writing prompts yourself, so don't worry. We have practical sessions in upcoming modules, and there is so much to learn about prompt patterns — I will share all my learnings in this course. We will cover the refining of prompts, the different patterns, the techniques, and even how to use LLMs to generate prompts themselves — all the basics and the advanced material in the upcoming modules. So don't leave this course; it can change your life. That's it for this module, guys.
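The refining cycle described in this module — prompt, inspect the output, adjust, prompt again — can be sketched as a small loop. This is only an illustration of the control flow: `ask_model` is a hypothetical stand-in for a real LLM call, and the function names are my own.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call -- swap in your provider here."""
    return f"[model answer to: {prompt}]"

def refine(task: str, adjustments: list) -> str:
    """Send a first prompt, then fold each adjustment into a follow-up prompt."""
    answer = ask_model(task)
    for adjustment in adjustments:
        # Each refinement references the task plus the change we want.
        answer = ask_model(f"{task} Adjust the previous answer: {adjustment}")
    return answer

print(refine(
    "Summarize this article in 5 bullet points.",
    ["Make the bullets shorter.", "Add one statistic per bullet."],
))
```

In a real session the "analyze the output" step is you reading each answer and deciding what the next adjustment should be; the loop just makes the shape of the process explicit.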
So we have completed the first module, in which we learned the basic foundations. We covered the basics of language models: what LLMs are and how they work. LLMs are simply systems trained on large datasets to understand and generate human-like text. We saw examples — ChatGPT, Gemini, Claude, Microsoft Copilot, and other AI tools you can find with an internet search — and how they work: first they analyze the input, your prompt, word by word; then they recognize patterns; then they generate output based on what they learned during training. Simple. After that we looked at the specific analogy of AI as a chef and prompts as recipes, and what prompt design versus prompt engineering means for that kind of specific scenario — how we write prompts in each case. We saw examples of good versus bad prompts and what reasoning adds: the bad prompt lacks background information and reasoning, while the good prompt includes reasoning terms and background information that let us customize the output ourselves — and we saw why it matters. We also explored applications of prompt engineering: the industries using it now and in the future, such as education, healthcare, content creation, programming, and other fields. Lastly, we discussed why prompts fail — the common issues of missing clarity or intent, lack of context, and over-complexity — and we saw the solution for them as well. That's it for this module, guys. We will dive into the next module with some intermediate sections of prompt engineering.
So let's move to the next module. 7. 2.1 Basic Components of Prompt: Guys, welcome to module number two of this prompt engineering masterclass. In this module we are going to cover the foundations of writing effective prompts: the key components to keep in mind while writing a prompt, and why those key components play a major role in writing effective prompts. Let's understand these basics. A prompt has three components: number one is clarity, number two is context, and number three is specificity. These three components are very important to keep in mind while writing any prompt. Let's start with the first: clarity. Clarity means writing simple, direct sentences that carry your clear intention. The AI model generates its best output when your intent is easy to understand. You can see the example here — a straightforward, direct sentence: "Tell me something interesting about space." What will the AI think you need? You are asking a broad question: there is no specificity and no clear intent, so it will simply throw out some interesting points about space. If instead you give clear instructions — "What are some recent discoveries about black holes?" — you have a clear intent: you want recent discoveries, and black holes is a specific topic within space. It is a clear prompt, so the AI understands exactly what you need and gives the best output for it, compared to "Tell me something interesting about space."
The vague version just throws out random facts about space — no specificity, no clarity. So while writing a prompt, keep in mind that clarity plays a major role: instruct the model as though you are asking about a specific topic you have clearly in mind. Now the second component: context. Context means providing enough background information to support your main intent. It can be done by setting the stage — describing the scenario, or defining the role you want the AI to act in. Simply put, you give the AI model enough background to understand the task and your actual intent. Look at the example here: "You are a science teacher explaining gravity to a 10-year-old student." If you remove the first part and just write "Explain gravity to a 10-year-old student," it will simply explain gravity at a level a 10-year-old might understand — but there is no background information to make the output specific and accurate. Because the AI is trained on huge amounts of data, it can drift into material that isn't really part of a gravity lesson. When you provide background information by defining the role — "You are a science teacher" — the AI thinks like a science teacher: "I am a science teacher, and I have to explain gravity to a 10-year-old." With that framing, the AI generates a better output than it does for the plain "Explain gravity to a 10-year-old student."
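The role-plus-task structure described here can be sketched as a tiny builder. This is a minimal sketch of my own — the function name, format, and the idea of an optional constraints list are assumptions, not something defined in the lesson:

```python
def build_prompt(role: str, task: str, constraints: list) -> str:
    """Assemble a prompt from a role (context), a direct task (clarity),
    and optional constraints (specificity)."""
    lines = [f"You are {role}.", task]
    if constraints:
        lines.append("Constraints: " + "; ".join(constraints) + ".")
    return "\n".join(lines)

print(build_prompt(
    role="a science teacher",
    task="Explain gravity to a 10-year-old student.",
    constraints=["use a real-world example", "keep it under 150 words"],
))
```

Keeping role, task, and constraints as separate arguments makes it easy to swap the role ("a pirate", "a physics professor") while reusing the same task.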
You can analyze the two outputs yourself: run the plain "Explain gravity to a 10-year-old student" and then the full "You are a science teacher…" prompt in any language model, such as ChatGPT, and compare — you will see the difference. So context plays a major role right after clarity; keep that in mind. Next: specificity. We have already learned that prompt engineering means writing instructions for a specific application, right? Specificity means being precise — state exactly what you want to get from the AI model. Be precise about what you are asking: the more detailed you are, the more relevant the response will be. The AI model should understand your main intent and exactly what you want, so give it as much relevant detail about your problem as you can. You can't expect the most from an AI model with a bare one-line request. Look at the example: instead of saying "Write a story" — a simple request with no reasoning and not enough detail for the AI to work with — think about what the model has to guess. If you ask "Write a story," the AI thinks: OK, but in which style, which tone, on which topic? It can't tell, so it writes a random story with random words that may not be relevant or good. Compare that with giving enough detail about what you want: "Write a 300-word science fiction story set on Mars, where the protagonist discovers water."
In this prompt you are giving full detail about what you want: a 300-word science fiction story, with the topic described. That is enough for the AI. It thinks: OK, I need to generate a science fiction story set on Mars where the protagonist discovers water — and it generates exactly that specific story for your prompt. That's why specificity plays a major role in writing prompts. Now, why do these components matter? As we discussed the three components, here is the summary. When your prompt is clear, the model avoids confusion: whatever language you ask in, it understands your intent and generates its best output for a prompt that has clarity both in your mind and on the page. Context helps the model understand your intent: it grasps your purpose and the task, and generates the best output according to your instructions. And specificity reduces irrelevant or off-topic responses: by providing detail about exactly what you want, you cut out the irrelevant parts. That's why these components matter so much for writing effective prompts. That's it for this lesson; let's move to the next lesson of this module, where we will look at the types of prompts. 8. 2.2 Types of Prompts: Welcome back, guys, to the next lesson of module number two, in which we are going to learn the different types of prompts. There are three types of prompts right now.
More prompt types may well emerge in the future as this emerging field of prompt engineering evolves and new techniques and patterns are invented. But with today's technology, we have three types: instructional prompts, open-ended versus closed-ended prompts, and multi-turn conversational prompts. These are the basic, foundational types — the first two are simple basics, while under the third, multi-turn conversational prompts, there are many advanced prompt patterns we will discuss in upcoming module classes. Let's start with the first: instructional prompts. These are simply questions, queries, or instructions you give the AI model to generate a specific answer. You can see the example here: "List five healthy snacks for children and explain why they are healthy." It is a simple question you ask the AI model to get an answer. Asking the model a direct question, query, or instruction is an instructional prompt — the name alone tells you that we give the model an instruction to get an output. When do instructional prompts work best? When you need structured, factual, or step-by-step answers. Just as teachers in colleges and schools instruct students through experiments, you can ask the AI to generate a step-by-step procedure for completing a photosynthesis experiment, for example. That's where this type works better than the other prompt methods. The next type is open-ended versus closed-ended prompts. Open-ended prompts encourage creativity and longer responses.
An open-ended prompt challenges the model to think and to generate output with more information. You can see the example here: "What do you think are the benefits of renewable energy?" With an open-ended prompt, you write it so the AI has to think — it reasons creatively and gives a longer, richer response. Coming back to closed-ended prompts, they simply ask for a specific answer: "What is the capital city of India?" — the answer is Delhi. It is a simple question with a specific answer. There is no thinking, no creativity, no long response or step-by-step structure. A plain question seeking a specific answer is a closed-ended prompt, while an open-ended prompt encourages creativity in the AI and generates a long response. For better understanding, try it yourself: write an open-ended prompt in any language model you like and look at the output; then write a closed-ended prompt you want a specific answer to — "What is the capital of France?" or of India — and you will get the short, specific answer with no creativity or length to it. Check it out and you will easily understand the difference between these prompt types. Now the third type, which is very important in prompt engineering: multi-turn conversational prompts. The two types we saw earlier have no back-and-forth in them — you simply write a question or instruction and get an answer. Multi-turn conversational prompting, by contrast, brings in refining, output analysis, and much more.
We will explore the more advanced prompt patterns that fall under multi-turn conversational prompting in upcoming module classes — don't worry, we will cover all of that. For now, let's learn the basic foundation of conversational prompting. Sometimes you need to have a conversation with the AI. For example, your first prompt is "Tell me about renewable energy," and the model generates some information about renewable energy. After that, you ask a follow-up question related to the previous output. In this case, after "Tell me about renewable energy," you can see the follow-up prompt here: "Can you explain the environmental benefits of wind energy in more detail?" And you can keep going — a third follow-up question, a fourth, a fifth, as many as you want. This is called multi-turn conversational prompting: you talk with the AI in a conversation format, just as we chat with our friends, family members, or colleagues — we send a message, they reply, and we follow up. This builds a dialogue, which is useful in chatbots and multi-step tasks. Chatbots like ChatGPT and other AI language models work on exactly this multi-turn conversational pattern of follow-up queries. Those are the basics of multi-turn conversational prompts; we will explore more in upcoming classes. That's it for this lesson, guys.
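The multi-turn conversation described above is usually represented as a growing list of `{"role": ..., "content": ...}` messages — the shape most chat APIs use. This sketch only manages the history; the assistant replies are placeholders, since no real model is called (the `say` helper is my own):

```python
# Conversation history: each turn is a dict with a role and content.
history = []

def say(content: str) -> None:
    """Append a user turn, then a placeholder assistant turn.
    A real system would send the whole `history` to the model here."""
    history.append({"role": "user", "content": content})
    history.append({"role": "assistant", "content": f"[reply about: {content}]"})

say("Tell me about renewable energy.")
say("Can you explain the environmental benefits of wind energy in more detail?")

# The follow-up only makes sense because the model would see the full
# history, not just the latest message.
for message in history:
    print(message["role"], "->", message["content"])
```

This is why follow-up questions like "explain that in more detail" work in chatbots: the earlier turns travel with every new request.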
Let's move to the next lesson of module number two: basic prompt patterns, in which we will use ChatGPT to understand the different basic patterns — with practical demonstrations of the prompts, how they work, and how to write them. Let's dive in. 9. 2.3.1 Basic Prompt Patterns : 1. Zero-shot Prompting: Guys, welcome to the third lesson of module number two, where we will look at the basic prompt patterns we have right now. These are patterns every prompt engineer uses in their daily conversations with AI to get the best output and to steer the models. We will see them in detail in this lesson. There are four basic prompt patterns: zero-shot prompting, few-shot prompting, system instruction prompting, and role-playing prompting. Let's start with the first: zero-shot prompting. It means asking the model to perform a specific task without providing any examples — just writing the prompt, with no examples and no extra background information of the kind we discussed earlier under context (context being the background information that helps the AI understand our main intent). In this pattern, we give neither an example nor any background information for the task. You can see the prompt example here, which makes it easy to understand: "Summarize the main idea of the following text:" — you can insert any text or paragraph there, and the AI will generate the summary. Now let's jump into the ChatGPT AI language model and see how zero-shot prompting works in practice.
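Before the live demo, note that the zero-shot pattern is just a bare instruction plus the input — no examples attached. A minimal sketch (the function name is my own; the instruction text is the one from this lesson):

```python
def zero_shot_summarize(text: str) -> str:
    """Build a zero-shot prompt: one instruction, the input text, no examples."""
    return f"Summarize the main idea of the following text:\n\n{text}"

print(zero_shot_summarize(
    "Photosynthesis is the process by which plants convert sunlight, "
    "water, and carbon dioxide into chemical energy."
))
```

Any task instruction works in the same slot — "Fix the grammatical mistakes in the following text:", and so on.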
If you are not already using ChatGPT, it is easy to sign up and get an account. Our main focus here is zero-shot prompting. I write "Summarize the main idea of the following text", paste a paragraph I copied from the Internet, and look at the output the model generates. You can see it summarizes the text I provided into three short points, stating plainly what the paragraph is saying. That is simple zero-shot prompting: you ask a question or write a prompt to perform one particular task. We can try another one, such as summarizing a book by name. You can see the example here: "Summarize the Rich Dad Poor Dad book." The AI summarizes the most important points of Rich Dad Poor Dad by Robert Kiyosaki; it has completed a specific task. Zero-shot prompting simply means writing a prompt to perform a specific task, like this summarization, or "remove any grammatical mistakes from this paragraph", or "remove adjectives from this paragraph". It is a single task that you ask the AI to do, with no examples. Let's see our second prompt pattern, few-shot prompting. 10. 2.3.2 Few-shot Prompting: Few-shot prompting is the opposite of zero-shot prompting, so if you understand one, you will easily understand the other. Few-shot prompting means you provide a few examples in the prompt to help the model understand the task.
You provide examples of how the output should look, in the prompt itself: "I need the output in this format." This helps the model generate the output you need. You can see the prompt example here, a review-summary example. I will jump into ChatGPT to explain few-shot prompting in more detail. In few-shot prompting, the main purpose is to show the AI how to perform the task. For example, I use a conversation between two people, Sarah and Sam. Sarah says, "Hi, how are you?" Sam replies, "I am fine. What about you?" I have written a conversation between two persons to show how the model should act. I will provide one more exchange: Sarah says, "I am good. What are you doing?" and Sam responds, "I am looking after the plants at my home." Now here is what happens: I write "Sarah: Can I drop by later?" and then just "Sam:" without writing Sam's reply, and instruct the AI to complete Sam's response. It generates a response for Sam. Why? Because I provided examples of how the answers should look, and it learned from them. That is few-shot prompting: we give the AI a few examples of how the output should look. This conversation is just one kind of example; you can give any type of example like this.
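The Sarah/Sam demo above can be sketched as a small prompt builder: completed exchanges first, then a new line with Sam's reply left blank for the model to fill in. The function and names are illustrative only.

```python
def few_shot_prompt(examples, new_line):
    """Build a few-shot prompt: completed Sarah/Sam exchanges first,
    then a new Sarah line with Sam's reply left for the model."""
    lines = []
    for sarah, sam in examples:
        lines.append(f"Sarah: {sarah}")
        lines.append(f"Sam: {sam}")
    lines.append(f"Sarah: {new_line}")
    lines.append("Sam:")  # the model completes this turn
    return "\n".join(lines)

examples = [
    ("Hi, how are you?", "I am fine. What about you?"),
    ("I am good. What are you doing?",
     "I am looking after the plants at my home."),
]
prompt = few_shot_prompt(examples, "Can I drop by later?")
print(prompt)
```

Because the worked examples establish the format, the model's job is reduced to continuing the pattern.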
For instance, you can specify that the output should follow a particular format. In the prompt itself you write a few questions and answers yourself so the model can learn from your examples and produce the same kind of output. You can see it there: I wrote a few examples of how the output should look, then wrote the next question and asked it to complete Sam's response, and it generated something like, "Sure, Sarah. If it's not too much trouble, I would appreciate that." This is all there is to few-shot prompting; it is easy. We can now compare the two patterns: zero-shot prompting means we don't provide any examples, we just write a prompt to perform a task; few-shot prompting means we provide a few examples to help the model understand our task and generate the output the way we want. It is as simple as that. Let's see the third prompt pattern, system instruction. 11. 2.3.3 System Instruction Prompting: To understand this better, there is also a playground offered alongside ChatGPT in which we can write system instructions and then carry on a conversation; we will see how to write system instructions in depth in the upcoming advanced prompt patterns, but let's cover the basics now. What is it? It is setting the role or tone for the model to follow. You can see the use case: when you want the model to behave in a specific way, such as an expert, a teacher, or a translator. You can see the example here: "You are a professional chef." Here we have given some context, meaning we have provided background information.
The background is "You are a professional chef", and that sentence is the system instruction. Then you can see the prompt: "Explain how to make a simple pasta dish to someone with no cooking experience." That is the task instruction. Why is it called "system"? Think of a computer: a computer is a system, we give it input through the keyboard, and it carries out tasks according to our instructions. Likewise, a system instruction sets the rules the model works within. It is easier to understand in ChatGPT, so let's go there and see how system instructions work. I tell the AI model: "You are now an expert at writing content on health only." This is the system prompt, the system instruction. The AI model will now behave as if it is only an expert at writing health-related content and nothing else. Then I write a prompt: "Now, please write about nutrition." Let's see what the AI generates. You can see here: "The importance of nutrition for a healthy life..." and so on; it generates content related to nutrition. We have defined how the system should work and in which domain only. Now, to check whether it really behaves like a constrained system, let's ask it to write content on a topic that is not related to health. We can take another topic, say IT, and check the output. You can see here: "Currently, I am focused on creating content related to health and nutrition."
"Let me know if you'd like assistance in that domain." You can see how the system prompt works: we give the model a system constraint to do one specific kind of task only, and then we write prompts that it follows within those instructions. I wrote the system instruction "You are an expert at writing content on health only", then asked a question: "Now, please write content about nutrition." It generated nutrition content because that is a health-related topic. When I asked the AI to write IT content, it simply refused, because it was following the specific system instruction I gave: generate health content only, nothing else. If we ask for something unrelated to health, it refuses. This is an example of system instructions, which are very important when we want an AI model to do one particular, specific task. I hope you understand system instructions; by practicing yourself you will get a much better feel for this kind of prompting. Let's see the role-playing technique. 12. 2.3.4 Role-playing Technique Prompting: Let's see the role-playing technique. It is quite similar to a system instruction, because in this technique you instruct the model to act in a specific role, such as a historical figure, a teacher, or a professional. Earlier we saw "You are an expert at writing content on health only"; that is also role playing. Use cases: creative or instructional tasks where a persona improves engagement and understanding. Yes, in role playing the persona is most important.
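To tie the system-instruction pattern back to how chat APIs typically expose it: the instruction usually travels as a separate "system" message ahead of the user prompts, rather than being mixed into the question itself. This is a sketch of the message structure only, with no model actually called; the wording mirrors the health-only example from the lesson.

```python
# System instruction kept separate from the user prompts
# (assumed chat-message format; no API call is made here).
messages = [
    {"role": "system",
     "content": "You are an expert at writing content on health only. "
                "Politely refuse any topic outside health."},
    {"role": "user",
     "content": "Please write content about nutrition."},
]

# The same system message stays in force for later turns, e.g. an
# off-topic request the model should refuse:
messages.append({"role": "user",
                 "content": "Please write content about IT."})
print([m["role"] for m in messages])
```

Because the system message persists across turns, the off-topic IT request gets refused without you repeating the constraint.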
Persona means personalization: steering the AI model toward a specific task by assigning it a specific role. Let's see the prompt example here: "Pretend you are Albert Einstein explaining the theory of relativity to a child." Let's jump into ChatGPT to understand more about this role-playing technique. First I write "Forget the above"; this is important when you are doing several different things in one ChatGPT conversation, because it has a memory-update function. Then: "Now, you are an experienced science teacher with expertise in photosynthesis." I have assigned a specific role for the AI model to act in. This is called role playing: telling the AI to think in a specific role, here an experienced science teacher, so we can get the best output from it. I have also pointed it at a specific topic, photosynthesis. Now I write the query I want the output for: "Explain photosynthesis to me, in an easy-to-understand way." Let's see what the AI generates. You can see the memory-update option here; it is a great feature of ChatGPT compared to other AI models, and one reason I love using it. You can see "Photosynthesis made easy": it explains photosynthesis simply. From now on in this chat, the AI will think like an experienced science teacher. To break out of this pattern, we write "Forget the above"; it drops the previous role and goes back to interacting casually.
The role-playing technique reduces irrelevant responses and gives better, more relevant responses than writing the prompt without a role. If I simply write "Explain photosynthesis", the model can throw out a generic explanation without going deeper. If I tell the AI to think like an experienced science teacher and then ask about photosynthesis, it answers the way a subject expert thinks and explains, and generates an explanation with that kind of expertise. For example, if I write "Forget the above" and then just "Explain photosynthesis", the output is not as good as the previous one. You can see it here: "Photosynthesis is a process by which green plants, algae and bacteria convert sunlight into..." and so on; it is only a summarized version. Compare that with the role-played answer: it has good structure, with "key ingredients", "the kitchen", "the recipe", the formula, and what is important. Without the role, it just throws out a plain explanation of what photosynthesis is. This is how role-playing techniques play a major role in getting specific information from the AI with deeper knowledge. You can easily see this by practicing with your own prompts: write a simple question, ask the AI, and analyze the response; then write the prompt with a role-playing technique like "You are an experienced science teacher", give some background information, and compare the outputs.
There is a much better output when using the role-playing technique. You can see the difference between the two: without role playing, I just wrote the query "Explain photosynthesis", and the output is not as good as the one where I used the role-playing prompt, "You are an experienced science teacher with expertise in photosynthesis", which steers the AI model in a specific direction to get the most out of it. You can easily internalize these role-playing techniques by practicing them yourself in ChatGPT. In this third class of the module, we covered several prompt patterns. We discussed zero-shot prompting, in which we just ask the model to perform a specific task, using ChatGPT to summarize a paragraph. After that we saw few-shot prompting, in which we provided a few examples to get the output in the format we want. We then saw system instructions, in which the model works only within our instructions; if we ask it to perform a task outside those instructions, it refuses. And the last one is the role-playing technique, in which by providing background information and assigning a specific role we get a better response than by asking a plain question. Don't worry, guys, I will put this ChatGPT chat link in a document you can get with this module's assignment.
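As a closing sketch for this module (my own illustration, with a hypothetical helper name), the role-playing pattern boils down to prefixing the query with a persona:

```python
def role_play_prompt(role, expertise, query):
    """Prefix a query with a persona so the model answers in that role."""
    return (f"You are an experienced {role} with expertise in "
            f"{expertise}. {query}")

with_role = role_play_prompt(
    "science teacher", "photosynthesis",
    "Explain photosynthesis in a way a beginner can understand.")
without_role = "Explain photosynthesis."  # plain zero-shot version
print(with_role)
```

Comparing `with_role` and `without_role` against the same model is exactly the experiment run in this lesson.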
So that's it for this module, guys; we will explore more advanced prompt patterns in the next modules of our prompt engineering course. Let's dive into module number three. 13. 3.1 Structuring Prompts for Optimal Output: Welcome back, guys, to our prompt engineering masterclass. We are diving into module number three, in which we are going to learn how to structure prompts for optimal output. We will discuss a simple structure to follow while writing prompts, explore some examples of how to write the best prompt using that structure, and then jump into ChatGPT to see it in practice. First, let's discuss the simple structure. Imagine you are giving someone directions to a place: if your instructions are unclear, that person cannot find the place they want. Similarly, if your instructions to the AI are not clear, it will generate an irrelevant response. We will understand this deeply by writing prompts. The structure contains three main parts: one, role setup; two, task definition; three, context. We have already seen prompt patterns like role playing and system instructions, and some bad and good prompts; on top of those, we use this structure to write more advanced prompts. First, role setup: we assign a role to the AI so it thinks from that background, like "You are a helpful assistant", "You are an experienced teacher", or "You are a scientist."
For example, "You are a life coach with ten years of experience in mental health." We assign a specific role so the AI thinks from that background, which leads to a better response, just as a person with mastery of a specific subject gives better answers. After assigning the role, we define our task: exactly what I need from the AI. The third part is context: any background information, additional details, or examples that can guide the response. We saw this earlier in few-shot prompting, where we provided examples so the AI generates the output we want. Context here works the same way: we provide additional information about the topic we want the output on. This is easiest to understand by reading the prompts. I have taken an example of a poorly structured prompt: simply "Tell me about AI." There is no other information and no role setup; it is just a bare question. Think about how the AI will respond: it will generate a random summary of AI across all areas, like AI in healthcare, education, transportation, and every other application. Now look at the well-structured prompt. I followed the structure: role setup, task definition, and context. First, I assigned a specific role: "You are an AI expert." That is the role setup. After that, I defined the task, what I actually need from the AI model.
The task is: "Explain what artificial intelligence is, focusing on its applications, and provide a concise example for each sector." And where is the context, the additional information? You can see it here: healthcare, education, and transportation. I need the output for these three application areas only, not others. That means I have provided specific additional information that guides the AI to generate output for these three applications only; that is the context. This prompt is easy for the AI to understand, and it will generate exactly what we need. That is the difference between a poorly structured and a well-structured prompt: the second prompt is specific, gives a clear task, and sets a role for the model, resulting in better output. For more understanding, let's jump into ChatGPT and compare the outputs of the two prompts; you can use any other model to do the same. I write the poorly structured prompt, "Tell me about AI", and see what it generates: "Artificial intelligence refers to a simulation of human intelligence..." It produces assorted AI concepts, like supervised learning, that I don't need; it is a random summary. Next I paste the well-structured prompt that I already copied. Let's see the output.
The output is different from the previous prompt. Why? I defined a role, so the AI generates better output from that background, as an AI expert. Prompt engineering means specification: writing prompts for specific use cases. After that comes the task definition, and I told it to explain artificial intelligence in three application areas only: healthcare, education, and transportation. I guided the AI to generate output for those applications only, and it generated exactly that. You can also go narrower: "Explain what artificial intelligence is, focusing on its application in healthcare only." You can see the different output: it explains only healthcare. When I asked for AI in healthcare, education, and transportation together, there were three specifications, so the AI generated a summary of each application's subtopics. But when I went deeper into one specific application, healthcare only, it generated a much deeper treatment of healthcare AI. That is why prompt engineering is so effective for interacting with AI: it is all about writing specific prompts to get the most relevant output for our requirements. I hope you understand this structure, the role of the role setup, and this lesson.
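The three-part structure from this lesson (role setup, task definition, context) can be sketched as a small template function; the function name is mine, and the assembled string matches the AI-expert example above.

```python
def structured_prompt(role_setup, task, context_sectors):
    """Assemble a prompt from the lesson's three parts:
    role setup + task definition + context."""
    sectors = ", ".join(context_sectors)
    return (f"{role_setup} {task} "
            f"Focus only on these sectors: {sectors}.")

prompt = structured_prompt(
    "You are an AI expert.",
    "Explain what artificial intelligence is, focusing on its "
    "applications, and provide a concise example for each sector.",
    ["healthcare", "education", "transportation"],
)
print(prompt)
```

Swapping the sector list for `["healthcare"]` reproduces the narrower, deeper prompt tried at the end of the demo.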
To recap, this is a simple structure for writing an effective prompt: three steps, role setup, task definition, and context. After that, we saw examples of how specificity works and how the AI generates output based on our instructions. In the next lesson, we are going to learn iterative prompting, which is the most important method to get the best output from AI. Let's dive into that. 14. 3.2 Iterative Prompting: Okay, guys, welcome back to this lesson, in which we are going to learn a most important technique: iterative prompting. It is quite similar to what we discussed earlier; it comes under multi-turn conversation with the AI, in which we write a prompt, the AI generates a response, and then we write a follow-up prompt to adjust the output. This is called iterative prompting, and we will discuss it in detail in this lesson. You will learn how to refine prompts to improve AI responses, with techniques and examples to understand what iterative prompting is. Why is it important? Language models are trained on large amounts of data, so they are smart, but they sometimes need guidance to generate output the way we want it. It is much like editing a draft of a document: if you use Google Docs, you adjust the paragraphs or content by analyzing and proofreading it. Likewise, we adjust the AI's output with our guidance. Iterative prompting is the process of adjusting your prompts based on the output you receive.
First you write a prompt, an instruction, question, or query, and the AI generates a response. You analyze the output; if you want to adjust it or need extra information, you write a follow-up prompt to get a more detailed output building on the previous one. We will see the practical implementation in ChatGPT, so don't worry. This technique is essential for refining and narrowing down responses to meet your needs; it is the most effective way to get the most out of ChatGPT or any language model. There are some steps to follow to iterate effectively. First, write a prompt and get an output, then analyze the output: check whether the response aligns with your intent or needs. If it does, great; if not, what next? Second, identify the gaps: look for areas where the output is unclear or contains inaccurate data. Third, revise the prompt: write a follow-up prompt, better than the previous one, that fixes the problems in the previous output. This is easy to understand through practical implementation and practice, which we will get to. Let's first simply run through the steps. First, I write a simple prompt.
It generates some output; I analyze whether the output has incorrect or unclear data; I identify the gaps, inaccuracies or anything missing; and then I revise the prompt, either asking a follow-up question or rewriting the previous prompt, to get the best output. That's all iterative prompting is; it is like chatting with colleagues and friends. Let's see an example. The initial prompt is "Describe renewable energy." For the example I have taken just a two-line output: "Renewable energy comes from natural sources. Examples include solar and wind energy." This is quite a simple answer; the real output can be much longer. Now the revised prompt: "Explain renewable energy, its benefits, and three specific examples (solar, wind, and hydropower). Use simple language for a beginner audience." This is a well-structured revised prompt. Why? Because I analyzed the first output: it is fine, but I didn't fully understand it, so to make the topic easy for me I have to guide the model with my requirements. According to my needs, I revise the prompt to get the best output from the AI. You can see that the revised prompt sets clear expectations, leading to a more detailed and tailored response. Let's jump into ChatGPT and see how it works. I open a new chat and write the simple question from our slides: "Describe renewable energy." You can see some output about renewable energy.
You can see the output here: advantages, challenges, applications. It says renewable energy "refers to energy derived from natural sources that are continuously replenished and virtually inexhaustible." If I am a beginner to renewable energy, what do "replenished" and "inexhaustible" mean? I don't understand them, so I would have to go to Google and look up their meanings. To avoid that, I write the revised prompt I copied: "Explain renewable energy, its benefits, and three specific examples (solar, wind, hydropower). Use simple language." That last part is most important when you are learning something from language models, because the AI was trained on a large amount of advanced English, full of complicated words we may never have heard and cannot easily understand. If you add "use simple language", it generates the response in simple words we can easily follow. Let's see: "What is renewable energy?" You can see very clean and simple language that makes the topic easy to understand: "Renewable energy is energy that comes from natural sources like the sun and wind. These sources are always available and don't run out, unlike coal or oil. They are clean and help protect..." and so on, with examples in the form we wanted. It is a concise and very effective output. Compare the two outputs: the first uses complicated words a beginner cannot easily understand; the second can be, because it explains everything in simple language.
That is why writing out your requirements, providing detail about what you need from the AI, is powerful: it generates according to your need. I wrote a simple prompt, analyzed the output, and found it good but hard to understand. Then I identified the gap: I didn't understand the words "replenished" and "inexhaustible" because I am a beginner. So I told the AI: explain renewable energy, its benefits, any three specific examples, and use simple language, because I need to learn this topic in simple language. I got this idea only after analyzing the first output; that is what prompt engineering is. The first and most important thing in iterative prompting: write a first instruction, analyze the output, and change your next prompt according to your needs with as much detail as possible; the AI will then generate something more effective than before. That is why iterative prompting is the most effective method for getting the best output. I hope you understand; practice it more and more. And iterative prompting does not stop after one revision: you can keep going with follow-up questions. I can analyze this output, identify any remaining gaps, and revise the prompt again, for example, "I need this specifically in Spanish, French, Hindi, or another regional language", so I can understand the output. First you write the prompt; after that, you check the first output from the AI; then you identify the gaps and revise.
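The loop just described (write, analyze, identify gaps, revise) can be sketched in code. `ask_model` here is a hypothetical stand-in for a real LLM call, and the "good enough" check and revision rule are deliberately trivial; only the loop shape matters.

```python
def ask_model(prompt):
    """Placeholder for a real LLM call (e.g. to ChatGPT)."""
    return f"(model output for: {prompt})"

def iterate(prompt, good_enough, revise, max_rounds=5):
    """Iterative prompting: keep revising until the output fits."""
    output = ask_model(prompt)
    for _ in range(max_rounds):
        if good_enough(output):    # step 1: analyze the output
            break
        prompt = revise(prompt)    # steps 2-3: fix gaps, revise prompt
        output = ask_model(prompt)
    return output

result = iterate(
    "Describe renewable energy.",
    good_enough=lambda out: "simple language" in out,
    revise=lambda p: p + " Use simple language for a beginner audience.",
)
print(result)
```

With a real model, `good_enough` would be your own judgment after reading each response, and `revise` would add whatever detail the gap analysis calls for.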
After that, identify the gaps and provide as much additional detail as you can in the second, revised prompt; it will generate a better output than the previous one. You can keep going until the AI output meets your requirements: a fifth prompt, a tenth, a twentieth, a thirtieth; there is no limit, because you want that exact output, and that is what we use iterative prompting for. That's it for this lesson, guys. Iterative prompting is very easy if you practice well with plenty of examples. I hope you understand it. We will now move to the next lesson of module number three, in which we learn context management: how to provide the right background information by balancing brevity and detail in our prompts. Let's dive in. 15. 3.3.1 Context Management - Part 1: Welcome back to the next lesson of module number three, in which we are going to learn what context management is. As we discussed earlier, providing context means providing additional information in a prompt to guide the AI in how the output should be generated. The right amount of information also plays a major role: what additional information or background data should I give the AI to get the best output? In this lesson we will see the role of context in prompts, which is very important, plus some tips and an example. Let's see. Context management means providing background or additional information to the AI in the prompt, to guide it toward generating the output we want; this additional information helps the AI understand our main intent. So, what is the role of context in prompts?
Remember this: if you provide too little context, that can lead to an irrelevant or unclear response from the AI. On the other hand, if you provide too much context, you can overload the model and reduce output quality. Either way, there is a chance of getting a very poor response. So how do we provide the right amount of additional information to get the best output? We discuss that in detail in this module. The key is to include just enough information to guide the AI without overloading it. It's that simple. Some people write additional information that is not required to generate the output; we have to delete that and write only what we actually need. That leads to a better output from the AI. Let's see some tips for managing context. First, be specific: include details that help the model understand your needs, and nothing more. You do not need to add information that is not required for that topic or output. Second, use examples: if your task is complex, include some sample outputs to set expectations and guide the model to generate output in that form. We already discussed this as few-shot prompting, where we provide examples of how the output should look. That is exactly what context means here: additional information, examples, or other data that supports our intent and helps the AI generate the output we want. The third tip is avoid redundancy.
Avoiding redundancy means keeping the prompt concise and to the point. Keep these three tips in mind while writing your prompts. 16. 3.3.2 Context Management - Part 2: You can see the best example here. These two prompts are both well structured, but one is overloaded. Look at the first: "You are an expert in climate science" sets up the role, and then it asks for a detailed essay about the causes, effects, and potential solutions, listing every subtopic. It has more detail than the second prompt, but it is overloaded, while the second is optimized. We can confirm this by looking at the outputs in ChatGPT: the optimized prompt keeps the task focused while still being informative. First, let's understand the two prompts. Why is the first one overloaded, when more detail usually guides the AI to a better output? Here is the surprising part: when you write "You are an expert in climate science," you do not need to list all those subtopics, because an expert in climate science already knows them. Your task is simply to get a detailed essay about the causes, effects, and solutions, and all of those already fall under climate change. Instead of giving all that extra information, you can just write: "Write a 500-word essay about the main causes of climate change and three potential solutions. Use examples and data to support your points." You have told the AI to use examples and data to support its points.
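The overloaded-versus-optimized comparison above can be made concrete by writing both prompts out as strings and checking their length; the wording below paraphrases the slide rather than quoting it exactly:

```python
# Overloaded: lists subtopics the assigned expert role already knows.
overloaded = (
    "You are an expert in climate science. Write a detailed essay about "
    "the causes, effects, and potential solutions of climate change, "
    "touching on global warming, carbon emissions, deforestation, "
    "industrial pollution, and renewable energy sources."
)

# Optimized: role + intent + one support instruction, nothing redundant.
optimized = (
    "You are an expert in climate science. Write a 500-word essay about "
    "the main causes of climate change and three potential solutions. "
    "Use examples and data to support your points."
)

print(len(overloaded.split()), "words vs", len(optimized.split()), "words")
```

The optimized prompt is shorter precisely because it drops the subtopic list that the expert role already implies.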
What does "use examples and data to support your points" cover? Carbon emissions, deforestation, industrial pollution, renewable energy sources — all of these fall under it, and they are already known by the AI acting as a climate science expert. You do not need to write out those subtopics, because even without that additional information the AI already knows the causes and effects of climate change. The first prompt is overloaded because we gave it additional information that is not required: the AI, now an expert in climate science, already knows it. I hope you understand this prompt. The second prompt, by comparison, is well optimized. Why? You assign the role — climate science expert — and the expert already knows all these topics. Then comes your intent: write a 500-word essay about the main causes of climate change and three potential solutions. It is concise and direct to the point, plus "use examples and data to support your points." That's it. In the first prompt we gave additional information we did not need to write, because the solutions to climate change are already known to a climate science expert; even without those listed points, the AI will generate solutions based on the same topics. This is best understood through a practical implementation, so let's go to ChatGPT. I will paste the overloaded prompt first, then the optimized prompt. I'll open a new chat and paste the overloaded prompt, copied directly from my slides. Here is the output: climate change — causes, effects, and potential solutions.
The causes of climate change are explained point by point: deforestation, industrial pollution; then the effects: extreme weather events, biodiversity loss. It's good, but very detailed — if you give an overloaded prompt, the output will also be overloaded. Simple, right? The AI generates according to our prompt, so the overloaded prompt we wrote produced a long, over-detailed response. Let's see what happens with the optimized prompt. I'll copy it, go to ChatGPT, and paste it. Here is its output: the main causes are explained — deforestation, greenhouse gas emissions, industrial and agricultural activities — then three potential solutions: transition to renewable energy, reforestation and conservation efforts, policy reform and global agreements, plus a conclusion. It is a better output than the previous one. Why? It is not about the output; it is all about the prompting. Compare the two: the overloaded prompt's output covers global warming, carbon emissions, deforestation, industrial pollution; the optimized prompt's output covers the same main causes — greenhouse gas emissions, deforestation, industrial and agricultural activities — even though I never provided that additional information in the optimized prompt. The AI knows the topic because it is acting as a climate science expert, so it automatically generates the main causes of climate change, like greenhouse gas emissions and deforestation, without us listing them.
That is simple. You can see the solutions the overloaded prompt generated — adoption of renewable energy, reforestation, industrial innovation, policy changes — were listed in that prompt explicitly. In the optimized prompt I did not provide them, but the AI still generated them: transition to renewable energy and so on, quite similar to the overloaded prompt's output. You can check this easily. That is why context management is so important. There is one caution, though: sometimes the AI will generate only the listed topics rather than focusing on the main causes of climate change. For example, if I remove carbon emissions and industrial pollution from the overloaded prompt's list, the AI will simply drop them from the output, because you instructed it to cover only the specific topics listed. It will still write an essay, but only about global warming, deforestation, and renewable energy, and it will never explain carbon emissions or industrial pollution, because you deleted them from the additional information. So providing too much in an overloaded prompt can lead to an irrelevant or poor response, depending on your requirement. When our main intent is simply an essay about climate change, the optimized prompt gets us there — you have already seen from ChatGPT's responses that the outputs are quite similar. 17.
4.1 Prompt Optimization: Welcome back to the Master Prompt Engineering course, module number four, in which we are going to see some advanced prompt patterns. Before discussing them, we will look at some prompt optimization tips and techniques. We already discussed best practices for writing prompts earlier; don't be confused, these are similar ideas, so there is nothing entirely new here. So what exactly is prompt optimization? You can see: optimization is the art of fine-tuning your prompts to ensure clarity, reduce ambiguity, and improve engagement. Don't panic at the phrase "fine-tuning"; it simply means refining your prompt. These three goals are very important to keep in mind while writing prompts: the best prompt reduces ambiguity and irrelevant responses. It's about asking the right question in the right way to get the best response. That is prompt optimization: simply asking the right question in the right way, which improves engagement, reduces ambiguity, and leads to a better AI response. Let's see the key points in detail. First is clarity, which we have already discussed: use simple and precise language, and avoid confusing or unclear words and sentences that prevent the AI from understanding your intent and generating a relevant response. Recall the example: "Tell me about history." There is nothing clear or specific in that. History is a broad subject, so the AI will think, "Okay, I have to explain history," and will just generate random information related to history.
Instead, use clarity: "Can you provide a summary of World War II's causes and outcomes?" That is a specific topic within the field of history, so now the AI thinks clearly: this question has clarity; I have to provide a summary of World War II's causes and outcomes. Because it is specific, the AI can generate the best output for it. That is why we have to make our prompts as clear and specific as possible. The second point is the role of formatting. If you have used Google Docs or any document editor, you already know formatting — headers, bullet points, small headings — saves time when finding points or proofreading a document. The same applies to prompts: using formatting in them is a best practice. It is not strictly necessary, but if you want to become a professional prompt engineer, your writing skills should be very effective; the more effective your writing, the better the output and outcome you can get from the AI. Use bullet points, numbered lists, or headers in your prompts to get a structured response. For example: "List the following in order: advantages of solar energy, disadvantages, and future potential." This guides the AI to produce the output in that format only. Simple. The third point is engagement techniques. What are engagement techniques? If the response generated by a language model is not engaging, you won't engage with it yourself, and neither will other people — so what was the point of getting the AI response at all?
This matters most when you are doing content creation or article writing, where people will read your work: the output should be very engaging, otherwise we lose readability. Engagement is important in any use case. To achieve it, frame your questions to invite curiosity, or provide context — background information about your topic. For example: "Imagine you are a scientist in 2050. What breakthroughs in AI might you describe?" Here we assign a role set in the future, so the AI thinks, "I am a scientist in 2050 — what AI breakthroughs would I describe?" Because you build that connection, the AI connects the scenario with its knowledge base and generates engaging content. Compare this with simply asking, "What breakthroughs in AI might happen by 2050?" — a plain question with no engagement in the prompt. With "imagine you are a scientist," you are guiding the AI to engage its knowledge, to think and imagine, and to generate engaging content rather than a flat answer to a simple question. So use words that make the AI think and imagine, that connect with its knowledge base, and that describe your desired output clearly.
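The formatting tip above — an explicit order plus list structure — can also be assembled programmatically. This is a small sketch with illustrative names, not part of any library:

```python
def structured_prompt(topic, sections):
    """Build a prompt that asks for a response in a fixed, ordered format."""
    lines = [f"List the following about {topic}, in order:"]
    # Number each requested section so the model mirrors the structure.
    for i, section in enumerate(sections, start=1):
        lines.append(f"{i}. {section}")
    lines.append("Use a header and bullet points for each section.")
    return "\n".join(lines)

print(structured_prompt(
    "solar energy",
    ["advantages", "disadvantages", "future potential"],
))
```

Putting the numbered list in the prompt itself is what nudges the model toward a structured, same-shaped response.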
These are the key points to keep in mind while writing prompts for AI: clarity, reduced ambiguity (no unclear responses), and improved engagement, all leading to the best response. Now let's go to the main part of this module: Advanced Prompt Patterns, Part One, in which we will see five of the most important and best-practice prompt patterns that you, as a prompt engineer, need to use for solving complex tasks. Let's start. 18. 4.2.1 Advanced Prompt Patterns (Part 1) - 1. Ask for Input Pattern: Welcome back, guys. Welcome to Master Prompt Engineering, module number four, where we learn advanced prompt patterns, part one. We will discuss five of the most popular and useful prompt patterns that we, as prompt engineers, can use in our daily work to get the best output from the AI. Beyond these five, there are five more patterns that we will discuss in part two of this module. Let's discuss the first pattern: the ask-for-input pattern. This pattern is a powerful way to craft prompts that guide AI interactions effectively. It involves explicitly asking the AI to request input, providing clear contextual instructions, and specifying the desired response structure. We use it to reduce unclear responses and make interactions more predictable, giving us very effective output from the AI. It is simple — very easy to learn and understand. To use this pattern, our prompt should contain the following fundamental contextual statement: "Ask me for input X," where X is replaced with our goal, task, or question. That is simple.
So the fundamental contextual statement for the ask-for-input pattern is "Ask me for input X." This is a very important sentence to use in the prompt itself, and it works for any task type. Let's see the pattern working by implementing it practically in ChatGPT. I'm jumping into ChatGPT now. First I describe the task to the AI, before writing the "ask me for input X" part. I'll quickly copy my prompt and paste it here. You can see the exact prompt: "From now on, I will provide fitness goals and other relevant details about my routine. You will create a weekly workout plan tailored to my input. For each day, include exercises, sets, and reps. At the end, suggest a recovery activity for the week." That is the simple task I have given the AI for my preferences. After that, I used the ask-for-input pattern, which you can easily see here: "Ask me for my fitness goals and current fitness level." How does it actually work? Think of an ATM: to withdraw money, you first insert your card, and then the machine asks you for input — your PIN and how much you want to withdraw. The machine asks the questions. In the same way, you write the prompt so that before beginning the task, the AI asks you questions; once you answer, it proceeds with the main task. That is the ask-for-input pattern. Let's see what happens — the AI asks me: "Got it. To create a personalized workout plan, I need some details from you."
See: after I answer these questions, the AI will generate a workout plan for my preferences, because I defined the instructions in the prompt and then used the ask-for-input pattern. The AI asks me questions, and only after I answer does it generate the weekly workout plan tailored to me. It's as simple as that — just like withdrawing money from an ATM. You can map it directly: "Ask me for my fitness goals and current fitness level" is like inserting the card, after which the machine asks for your PIN, the amount, and whether to withdraw from your current or savings account. Let me provide my answers quickly. Question one: what are your fitness goals? I'll choose weight loss. Question two: what is your current fitness level? I'll take intermediate. Question three: do you have access to a gym, or do you prefer home workouts? I'll prefer home workouts. Question four: any specific preferences or limitations? I'll say no heavy lifting. Question five: how much time can you dedicate daily to your workout? Let's take 30 minutes. Now let's see what the AI generates. You can see a much better output: the AI generated a weekly workout plan based on my input. It is a very effective, tailored result, because the AI asked for the details of my preferences it needed in order to build a plan that suits my routine. Now compare: what if, instead of this pattern, you had just written a plain request?
If instead of this pattern you simply wrote "Create a weekly workout plan for me," the AI would generate a generic plan without knowing your preferences. With the ask-for-input pattern, the AI asks for your preferences: what you actually need, what output you want, and in what form. That is how the ask-for-input pattern works, and you can use it for many other applications. I have taken the weekly workout plan as an example, but you could use it for study and education, or for any complex task where the AI cannot know your background information without asking. If you give the AI details that support your task, it can generate the best output, as we discussed earlier under prompt optimization — this is a best practice when using the ask-for-input pattern. So why is this pattern useful? It improves the accuracy of the output, because the AI asks the questions it needs answered to complete the task — here, generating a weekly workout plan for me — and after I provide my preferences, it generates an accurate, suitable plan. The ask-for-input pattern becomes very powerful the deeper you go with it; you can understand it better by writing more and more prompts using it, practicing with ChatGPT and other AI language models as well. It tends to work especially well in ChatGPT, which has strong capabilities compared to some other language models.
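Because the pattern described above always has the same two-part shape — task description first, "ask me" statement last — it can be templated. A minimal sketch; the function name and wording are illustrative, and the resulting string would be pasted into the chat model:

```python
def ask_for_input_prompt(task_description, inputs_needed):
    """Compose an ask-for-input prompt: task first, 'ask me' last."""
    return (
        f"{task_description} "
        f"Before you begin, ask me for {inputs_needed}."
    )

# The fitness example from the lesson, rebuilt from the template.
workout = ask_for_input_prompt(
    "From now on, I will provide fitness goals and details about my "
    "routine, and you will create a weekly workout plan tailored to "
    "my input.",
    "my fitness goals and current fitness level",
)
print(workout)
```

Keeping the "ask me" clause as the final sentence is what signals the model to pause and collect input before doing the task.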
And don't worry — we will also cover understanding different LLMs and their capabilities, pros and cons, in the modules after this one. That is all the theory of the ask-for-input pattern, so let me take another, simpler example for better understanding. I will write: "From now on, I will tell you which spoken language you should use to translate the given text: How are you?" So I have told the AI that I will specify the language it should use to translate the text "How are you?" Then I apply the ask-for-input pattern: "Now, ask me which language I need to use." Let's see what the AI generates. You can see it asks: "Got it — which language should I use to translate 'How are you?'" I answer: French. Now you can see "How are you?" translated into French. That is how the ask-for-input pattern works: you define the task, and at the end you add "Now, ask me for..." followed by whatever input your task requires. You can change the wording around your requirement, but the ask-me statement goes at the last stage, and you must also define the task itself — in this case, telling the AI that I will tell it which spoken language to use for the translation.
Because I told the AI "I will tell you which spoken language to use," the ask-for-input statement at the end matches it: when I provide the answer, the AI takes it and translates "How are you?" into French. This becomes quite easy if you practice it yourself in ChatGPT, and don't worry — I will provide this chat link in the course document, which you will get after the course, for better understanding. That's it, guys — that is the ask-for-input pattern: we use the fundamental contextual statement "Ask me for input X," where X can be your goal, question, or task according to your preferences. We have seen two examples, one for fitness and one for translation, and you can understand the pattern more deeply by practicing yourself and gaining your own insights. Let's move on to the next pattern: the persona prompt pattern. 19. 4.2.2 Persona Prompt Pattern: Okay, let's see pattern number two, the persona prompt pattern. We already discussed a similar technique earlier — the role-assigning technique — and this is along the same lines. Persona means guiding the AI to act as a particular assistant or in a specific role. You can see the example here: "Act as a high school math teacher." I guided the AI to act as a high school math teacher because I want the Pythagorean theorem explained to a 15-year-old student. Why is the persona pattern so effective? Because by assigning a specific role, this pattern tells the AI to act as an expert with specific domain knowledge.
The AI will first think, "I am a high school math teacher." The example makes this easy to understand: when I assign the role "act as a high school math teacher," the AI thinks, "I am a high school math teacher, and I have to explain the Pythagorean theorem to a 15-year-old student." That helps the AI generate an explanation suited to a 15-year-old. By using this pattern, the AI generates more effective and more accurate output than it would from a prompt without a persona. By assigning a specific role, tone, or style within a specific domain, the AI thinks within that field only: if I tell it to act as a math teacher, it will act as a math teacher, and we can get deeper insights about maths from it. It will think and respond as a math teacher only. That is why most companies and professional prompt engineers use the persona pattern so heavily to get the best output from AI. Why is it so important with language models? Because the AI is trained on huge amounts of data, so without guidance it can generate somewhat random output containing inaccuracies and unclear responses. If you guide the AI to act within a specific domain, it thinks deeply within that domain, which gives a much better chance of accurate output. You will see the best example shortly, so you will understand this pattern easily. First, let's see when to use it.
If you are looking to get specific domain knowledge from the AI, or to solve a specific task or problem, the persona pattern helps you get the best insights compared to writing a plain question or query. We will see an example in ChatGPT itself. First we will try the plain version, "Explain the Pythagorean theorem to a 15-year-old student," and then we will compare the two outputs. Let's jump into ChatGPT. I'll write the simple prompt: "Explain the Pythagorean theorem to a 15-year-old student." The output explains the theorem — there is a right angle, there are three sides — and it's a decent output; nothing wrong with it. Okay, now let's use the persona pattern. I'll copy and paste: "Act as a high school math teacher and explain the Pythagorean theorem to a 15-year-old student." Look at the output of this prompt: it is noticeably better than the previous one. You can see the formatting and a step-by-step structure guiding the 15-year-old, with the role of geometry explained, because the AI is thinking in depth within that subject field — we told it to act as a math teacher only, so it acts within that specific math subject field.
So the AI goes deep into its math knowledge and generates a detailed, in-depth explanation of the Pythagorean theorem. You can see the difference between the two outputs: the persona version is clearly more effective than the previous one, because "act as" constrains the AI to act as a math teacher only, not thinking outside of that, which gives us depth on that specific knowledge. Okay. Now let's see another example using two prompt patterns together. Earlier we discussed the first pattern — the ask-for-input pattern, in which we define a task and then supply the input. We will combine that with the persona pattern. First the persona: I will write "Act as a travel recommender." By the way, don't worry if my prompts contain typos or awkward sentences — the model handles this interaction like human text; it has strong NLP techniques, it will follow our words and easily understand our intent. So don't focus on the spelling mistakes; focus on the technique and the process. So I write "Act as a travel recommender" — I have given the AI a specific role. The AI will then think only as a travel recommender, like a person who has all the skills and capabilities a travel recommender has, and it will focus on that role only. Then I add: "I will tell you which city. You need to give recommendations for beautiful places to visit in that city."
After that, I will use the ask-for-input pattern. What is it? As we have seen, it has a fundamental contextual statement: "ask me for input X," where X is our question, goal, or anything we need — if you have forgotten this, please go back and recall it, it is very important. So I write: "Now, ask me which city you are looking to visit." To recap the full prompt: I used the persona pattern ("Act as a travel recommender"), then told the AI, "I will tell you which city; you need to give recommendations for beautiful places to visit in that city," and then instructed it: "Now, ask me which city you are looking to visit." The AI will think: "Okay, I am a travel recommender, so now I have to ask which city this person needs recommendations for." Two things are very important when using the ask-for-input pattern: you have to tell the AI that you will provide the input ("I will tell you"), and the fundamental contextual statement ("ask me for …") has to be the last statement of the prompt. So here I have used both the persona pattern and the ask-for-input pattern. Let's see the output of this prompt: "Great, I'm here to help you plan your visit to the most beautiful places. Which city are you looking to explore?" As we discussed earlier with the ask-for-input pattern, the model's first output is the input question; we give the input, and then the task proceeds. So I answer "New York," and it automatically generates recommendations for beautiful places to visit in that city.
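The two-pattern travel prompt above can also be sketched as a template. This is my own illustrative code, not from the lesson — the function name `persona_plus_input` is assumed — but it encodes the ordering rule the lesson stresses: role first, task in the middle, and the "ask me …" statement last.

```python
def persona_plus_input(role, task, input_request):
    """Persona + ask-for-input: role first, task in the middle,
    and the 'ask me ...' statement last, as the lesson recommends."""
    return f"Act as a {role}. {task} Now, ask me {input_request}."

travel_prompt = persona_plus_input(
    "travel recommender",
    "I will tell you which city; you need to recommend beautiful places to visit in that city.",
    "which city I am looking to visit",
)
print(travel_prompt)
```

Because the input request is appended last, a chat model receiving this string opens by asking for the city, exactly as in the ChatGPT demo.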
So the AI recommends some places to visit in New York City. Now, you could of course start from a plain question without any of this, but if you use these patterns, there is a much better chance of getting the best, most accurate output from the AI. That is why this matters so much when writing for LLMs: because they are trained on huge amounts of data, a plain prompt can produce fairly random recommendations. The persona pattern makes the prompt specific, so the AI focuses narrowly and gives us the best output, instead of randomly pulling and throwing out something generic. It is very important when solving complex problems in a specific domain or for a specific use case: if you are solving a specific problem, your prompt should be specific too. That is when you use "act as a travel recommender" — or, for example, if you are solving a Python coding problem, "Act as a Python developer with ten years of experience in fixing bugs," and after that you write the task, and so on. You can also add the ask-for-input pattern and the other prompt patterns we will discuss in later classes. The same applies to content creation — if you want specific content from the AI, you can write something like "Act as an educational content creator with ten years of experience in writing effective, creative, engaging content that grabs the audience's attention."
And after that, you write the task — because assigning a specific role generates better output than the same prompt without the "act as" pattern. So you can easily understand this: it is simply assigning a specific role to the AI, which gets us a more effective output. Practice this yourself — there is only one way to get deeper knowledge and stronger prompt-writing skills, and that is practice. So practice on your own and interact with the AI using different prompt patterns to enhance your skill set, which will help you get much more out of AI. I hope you understand this prompt pattern very well. Let's look at the next one, the question refinement prompt pattern, which is very important for enhancing our prompt-writing skills. Let's go. 20. 4.2.3.1 Question Refinement Prompt Pattern - Part 1: So let's see the question refinement prompt pattern, which is very important for writing the best prompts for anything we want from AI. What is the question refinement prompt pattern, actually? The heading itself tells you: question refinement. Refinement means rewriting the question in a more effective manner — reducing errors, improving sentence formation, and making it more specific. That is all it is: the question refinement pattern means rewriting the same question with errors reduced and wording improved. You can see the template this pattern can be expressed as; we will use it in ChatGPT shortly to understand it, so don't worry about the details yet — I will explain what refinement prompting actually is. Imagine you are interacting with an AI model — take ChatGPT. You are writing a question or a prompt, and your prompt-writing skills may be somewhat decent, you think, right?
Say you have some prompting knowledge and you are writing a prompt — maybe a question, maybe a task — with some confidence that your sentence formation and techniques are fine. But there is often a gap on our side: sentence formation and grammar. The AI is better at this, because it has been trained on well-written English with good grammar and effective sentence formation. As humans, we make mistakes when writing English — as you have already seen, I have made plenty while interacting with ChatGPT. The AI, being very well trained, can suggest a better version of our English. For example, if you write something with errors, or your question could simply be phrased better, the way you ask the AI can be improved a lot — and that improvement can be written by the AI itself using this prompt pattern. That is why it is so important. For any input — our questions, our prompts, even whole paragraphs, anything we send to the AI — it can suggest a better way of expressing those words, which makes this very powerful and effective, because the AI is very well trained on English patterns. To get more insight, let's dive deeper with an interaction in ChatGPT to really understand what the question refinement prompt pattern is. I will go to ChatGPT and open a new chat. For example, I will write a task: "Please generate a story with engaging, fun words for a ten-year-old boy." Let's see what the AI generates.
You can see the AI has generated a story suitable for a ten-year-old boy with some engaging words — "Once upon a time, in the quiet town of…". This already shows why prompt wording matters: I could have specified "a 500-word story" or "a 300-word story" to get a more precise output. But now, what if I ask the AI to suggest a better version of this prompt — one that would get a more effective output than this? That means this prompt can be improved, and the AI can tell me how. So I simply write "Please suggest a better version of my prompt," copy my original prompt, and paste it in. The AI now generates some suggestions — better versions of my prompt that I can use to get the most engaging story. You can see the better version compared to the original: "Please create a fun and engaging story with exciting vocabulary and adventurous elements that would captivate a 10-year-old boy." This version sounds more specific and inviting. That is the question refinement prompt — this is the basic form, in which we give some input and tell the AI to suggest a better version of it, getting a better prompt than we would have thought of ourselves. The AI knows which phrasing of a request lets it give its best output. So using the AI to write your prompts can be very helpful — but I am not suggesting you just copy the result; use this as a basic foundation for writing top-level prompts.
We use this as the fundamental, and after that we make changes according to the AI's output — the best prompt-writing skills come only from analyzing outputs and refining the initial prompt, and we will see all those techniques in later classes. For now, let's focus on the question refinement prompt pattern. So I wrote "Please suggest a better version of my prompt." Writing "suggest a better version" is the key part — it is the main fundamental contextual statement we have to use in our prompt to get a better version of any input. Now, if you want ChatGPT to stay in that flow for the whole conversation, you can set it up once. I have written: "Whenever I ask a question, suggest a better question and ask me if I would like to use it instead." (Instead of a question, you can use a paragraph, a story, or anything else you want here.) Notice the "ask me" at the end — that is the ask-for-input pattern again; as we discussed, that is a basic one, and in the upcoming prompt patterns we keep reusing everything from basic to advanced. Let's see what the AI generates for this prompt: "Got it. I will suggest a better version of your questions moving forward and check if you'd like to use them. Would you like to go with the revised question I suggested earlier, or do you have another question in mind?" It remembered the earlier revision and is asking whether I want to go with it — that is one of ChatGPT's best capabilities, the memory update.
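Both forms of the refinement pattern used above — the one-off "suggest a better version of my prompt" and the session-wide "whenever I ask a question…" setup — can be captured as templates. This is my own sketch for illustration; the function names `refine_request` and `refinement_mode` are assumptions, not anything from the class.

```python
def refine_request(original_prompt):
    """One-off question refinement: ask the model to improve the
    prompt instead of answering it."""
    return f'Please suggest a better version of my prompt: "{original_prompt}"'

def refinement_mode():
    """Session-wide setup: every future question gets a suggested
    rewrite, and the model checks whether we want to use it."""
    return ("Whenever I ask a question, suggest a better question "
            "and ask me if I would like to use it instead.")

print(refine_request("Write a 200-word article on global warming."))
print(refinement_mode())
```

You would send `refinement_mode()` once at the start of a chat, then ask questions normally; `refine_request` wraps a single prompt on demand.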
That memory update is what sets ChatGPT apart from other language models in this respect — don't worry, we will discuss these capabilities in upcoming classes, that is no problem. So let's continue. The AI asked if I have another question in mind, and I write: "Write a 200-word article on global warming." But look — the AI just directly generates the article. Why does this happen? Sometimes the model loses the flow. It is AI; it will make mistakes, and it is not 100% perfect or accurate. At that point we have to tell the AI it is missing the flow. So I write: "I told you to suggest a better version of my question." Let's see the output. It simply apologizes: "You are right. Here's a better version of your question: Could you please write a 200-word article on global warming?" So sometimes the AI will just perform the task instead of refining the question — it makes mistakes because it is not 100% perfect. In that case, we have to recall the task to the AI, and after that it comes back to the same flow — that is why it generated "You are right, here's a better version of your question." We shouldn't be discouraged by this: the AI will make mistakes, and we have to tell it when it does. That is why human creativity and oversight remain so important when interacting with AI. Also note that the suggested versions depend on your own prompt: if you write a prompt that is three lines,
the suggestion will usually come back at around that length — a bit more than three lines or a bit less. That is how the AI reasons about it. The key point is that the AI is better than you at phrasing something effectively in English or any other language. Okay, I hope you understand. 21. 4.2.3.2 Question Refinement Prompt Pattern - Part 2: Let's see another example. So far I have told the AI only this: "Whenever I ask a question, suggest a better version." So it will only suggest better versions of my questions. You can vary the wording — "suggest a better version" or "suggest a better approach" — while interacting with the AI. Now let's combine this with the "act as" persona pattern. Why? Because when it comes to prompt-writing skills, this combination can help you become a better prompt writer yourself. Let's see how. I tell the AI: "Act as an expert prompt engineer with ten years of experience in writing effective prompts for AI." (Instead of "AI" you can name any language model — ChatGPT, Gemini, Claude, anything.) So I have assigned a specific role to the AI. Notice that I am combining three prompt patterns here: first the persona pattern, then the ask-for-input pattern, and then the question refinement pattern, all in this one prompt. So please focus carefully. Let's see.
Next I tell it: "I will provide my prompt; suggest a better prompt from my prompt." ("Prompt" and "approach" are close in meaning here — a prompt is simply how you approach the AI to get something from it, so you could use either word.) You can write as much as you want here to prime the AI well. Finally: "Now, ask me for a prompt to suggest a better version." Let's see what the AI does. "Got it. Please provide your prompt and I will suggest a better version for you." That is what prompting is: the most important thing is your way of interacting with the AI using prompt patterns. So I provide a prompt: "Write a blog post on AI in detail." Let's see whether it suggests a better version for me or not. You can see: "Here is a more refined version of your prompt: Could you write a detailed blog post on artificial intelligence covering its key concepts, applications and future impact?" I would never have thought to include key concepts, applications, and future impact, because I lack that knowledge — I don't know that much about artificial intelligence and its concepts. But the AI knows. That is why prompt suggestion is so valuable: the gap in our knowledge can be filled by the AI, and the more detail the prompt gives, the more information the AI can generate. If I write just the simple version, the AI may generate a somewhat random answer.
But when I give the refined prompt with the background information — covering its key concepts, applications, and future impact — it generates a much better output than the plain one. When I wrote the original, I couldn't produce phrasing like "covering its key concepts" because of my knowledge gap. But the AI knows what artificial intelligence is and what concepts and applications it has, so it can suggest how the prompt can be improved. So this pattern can also improve our own prompt-writing skill. That was a general example; if we go domain-specific, we can write the best prompts using the AI — using ChatGPT — itself. For example, take this: "Act as an expert prompt engineer with ten years of experience in writing effective prompts for AI in…" and here I pick a specific domain. I could choose a specific subject like algebra or mathematics, but that is a tough one; to make it easy to follow, I will take educational content creation. Then: "I will provide my prompt; suggest a better version of my prompt. Now, ask me for the prompt." So what changes compared to the previous example? There, I only told the AI it was good at writing prompts in general; here, I primed it with ten years of experience in writing effective prompts for AI in educational content creation only. So it will think it has great prompt-writing technique in educational content creation, and it will reason within that master subject field only — which gets us the best prompt for our purpose, a more effective prompt for writing educational content.
Let's see the output. The AI asks me to provide a prompt: "Please provide your prompt related to educational content creation and I will suggest a better version for you." I write: "Write a full lesson about photosynthesis" (you can take any topic here). Let's see what better version of this question we get. You can see: "Could you write a comprehensive lesson on photosynthesis covering its process, key concepts, importance to plants and the environment, and related scientific terms?" That is the output of this prompting — the refinement pattern filling our gap again. I wouldn't have known to include key concepts, importance to plants and the environment, and scientific terms, because I lack that writing knowledge; but the AI knows what photosynthesis involves, since we primed it with ten years of experience in educational content creation. You can go even more specific: "Act as a prompt engineer with ten years of experience in writing effective prompts for AI in physics content creation," or English content creation, or even content creation for one specific topic in English. You can go as deep and specific as you like to get accurate, relevant responses to your prompt, and it will help you solve complex tasks that are very tough for you alone. There are endless examples and no real limitations — try it yourself by combining all these prompt patterns, or just one or two, with your own writing skills, your own interactions, your own examples, and much more. These writing skills improve only through practice: practice with different experiments, different examples, and different prompt patterns, combined in every way. It can blow your mind — prompt engineering is genuinely interesting to learn.
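The domain-scoped "prompt coach" built above is, again, just a template with one variable: the domain. Here is a short sketch of my own (the function name `prompt_coach` is an assumption) that stitches together the three patterns in the order used in the demo — persona, refinement instruction, ask-for-input:

```python
def prompt_coach(domain):
    """Persona + question refinement + ask-for-input, scoped to one domain."""
    return (
        f"Act as an expert prompt engineer with ten years of experience "
        f"in writing effective prompts for AI in {domain}. "
        "I will provide my prompt; suggest a better version of it. "
        "Now, ask me for the prompt."
    )

print(prompt_coach("educational content creation"))
```

Swapping the argument — `"physics content creation"`, `"English content creation"` — reproduces the narrower personas the lesson suggests, with no other changes.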
These are only the prompt patterns I have shown you so far; there are other patterns that will further test and sharpen your prompt writing and increase your confidence, and the AI itself will keep suggesting improvements to enhance your skill set. That wraps up this pattern. I hope you understood it very well — the question refinement prompt pattern is an excellent one; it is what helped me get better at writing the best prompts, and you can build that skill by practicing it too. Let's move on to our next pattern, the cognitive verifier pattern. Let's dive into that. 22. 4.2.4.1 Cognitive Verifier Prompt Pattern - Part 1: Welcome back to our fourth prompt pattern, the cognitive verifier pattern. This pattern is very easy to understand, and it is very useful for getting specific, relevant output for our task. What is the cognitive verifier pattern, actually? It uses a structured approach to enhance the accuracy and depth of responses generated by any LLM, such as ChatGPT. What is its main purpose? Subdivision: it divides a complex query into smaller questions; after we give answers to those questions, it combines all the answers. This structured reasoning helps the AI minimize errors. Here is what happens: first, we tell the AI about our task; then we tell it, "ask me subdivided questions related to that task." When I provide answers to those questions, the answers support the main task, so the AI generates an accurate, relevant response while minimizing errors. The best way to see how it works is the example prompt written here: I simply tell the AI my task.
The task is: "How did World War Two impact global politics?" After that, I add the pattern statement: "ask me subdivided questions." Subdivided questions means the AI will ask me smaller questions related to this task — World War Two's impact on global politics — so all the questions relate to that topic. When I provide answers to those sub-questions, the AI uses my answers, combines them, and generates its best output from them together with the original task. This way the AI generates an accurate output while minimizing errors, and we get an effective response without errors or bias. We will see the practical in ChatGPT; for now, look at the full prompt: "How did World War Two impact global politics? Ask me subdivided questions related to this main topic, which helps you to generate the best overall output after I provide answers to your subdivided questions." Notice that we are also using the ask-for-input pattern. What is the main topic of this prompt? "How did World War Two impact global politics?" — the sub-questions should be such that, once I answer them, they help the AI generate the best overall output. The template to keep in mind is the "ask me subdivided questions…" part; you can put any task of yours before it. It is that simple and easy. Let's go to ChatGPT. I have already copied the prompt I showed you, and I paste it here.
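The "task plus verifier statement" template described above can also be expressed as a one-variable function. This is my own sketch (the name `cognitive_verifier` is assumed); any task string dropped in front of the fixed pattern statement yields a valid cognitive-verifier prompt:

```python
def cognitive_verifier(task):
    """Cognitive verifier pattern: state the task, then ask the model
    to pose subdivided questions before producing its final answer."""
    return (
        f"{task} "
        "Ask me subdivided questions related to this main topic, "
        "which helps you to generate the best overall output after "
        "I provide answers to your subdivided questions. "
        "Now, ask me the subdivided questions."
    )

print(cognitive_verifier("How did World War Two impact global politics?"))
```

Only the task varies; the verifier statement and the trailing ask-for-input line stay fixed, which is what makes this reusable across any complex query.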
So you can see what I have written: "How did World War Two impact global politics? Ask me subdivided questions related to this main topic, which helps you to generate the best overall output after I provide answers to your subdivided questions. Now, ask me the subdivided questions." Notice the ask-for-input template at the end — the last statement, as always. You could add even more patterns here, for instance a persona: "Act as a history researcher with ten years of experience in politics." We will see the best combination later: at the end of this section we will write one prompt using all four patterns we have discussed. Let's see the output: "Here are some subdivided questions related to World War Two's impact on global politics; this will help me provide a comprehensive view." You can see how many questions the AI asks me so it can write the best output — for example, "How did the war affect the global economy and the financial dominance of different countries?" — and then: "Would you like to proceed with answering these questions, or would you like me to refine or expand upon them?" I have to provide answers to these questions. There are a lot, which would take time, so I just reply "Yes" to proceed. It asks again: "Great, please provide a response to the questions below. Feel free to answer as many as you would like," and the list keeps expanding — from eight questions up to seventeen. To stop that, instead of answering everything, I will just write an answer for the first question; it will still generate the response.
For the first question — how did the war affect the global economy and the financial dominance of different countries — I write a simple answer: "Germany had lost more economic capital." Here is the point: if you give answers to all these questions, the AI generates its best output, because instead of relying only on its own training data, it asks you for real, current information and builds the output on what you provide. That makes this an excellent pattern for feeding real-time data to the model by letting it ask you for it. So I give the AI just the answer to the first question and see what happens. It responds: "Thanks for the input. Could you clarify Germany's economic losses after World War Two?" After my first answer, it keeps asking for specifics — again and again, until it has the information it needs to generate the best response. Asked for the specific economic pattern, I answer "destruction of infrastructure," copy it, and paste it in. Now you can see it generates the output for that particular question: "Got it. Germany faced severe economic challenges after World War Two," followed by a more detailed breakdown based on my inputs — both the original task and the answers I gave to its questions. This is a simple example I have taken.
If you provide answers to all the questions the AI asks, it will generate the best output for our main task — how did World War Two impact global politics. This was a simple example; you can take it as far as you like to get the best output. This pattern is especially valuable when you are solving a complex or specific problem, or one involving real-time data. In some cases models are limited to a specific training cutoff date; advanced models are getting better with real-time data, but what I am saying is that this pattern helps whenever the task needs reasoning from your side — the AI cannot do everything, and some human creativity and involvement is required. So it is the best pattern for solving complex problems that need your reasoning and your involvement. It is also best for real-time queries and private data — information that is not on the Internet because it must stay private, or data under compliance regulations that must not be made public. One thing to keep in mind while working with private data that does not exist on the Internet: in ChatGPT, open your profile section, go to Settings, then Data Controls, and please turn off the "Improve the model for everyone" toggle. If your data is private or covered by regulations that forbid making it public, and you use an LLM without switching off that option, your data can go into AI training — the AI is learning day by day from our own data.
If you switch off that data-controls toggle, your data is not used to train the model, so keep that in mind. So this is the best prompt pattern for solving problems that need your involvement, where the information was not in the training data of ChatGPT or any other LLM, for example information that must stay inside your company due to regulations. In that case, you give the AI the task together with this verified prompt pattern; it asks you a series of questions, you provide the answers, and it then combines all those answers and generates the output based on them. It is a great way to minimize errors and bias and to improve the quality and accuracy of the output. It becomes easy to understand once you practice it yourself. 23. 4.2.4.2 Cognitive Verifier Prompt Pattern - Part 2: Now let's see another example using all four prompt patterns we have learned so far. What are the four? First, the ask-for-input pattern; second, the persona pattern; third, question refinement; and fourth, the current one, the cognitive verifier pattern. We will combine all four into one single prompt, which is where creative prompt writing comes in. I will start with the persona pattern: "Act as...". I will take a content-creation example: "Act as a creative story writer with five years of experience in crafting fun stories." Let's see.
I have used the persona prompt pattern here, assigning a specific role to the AI so it thinks and generates output with that expertise. Next I will use the ask-me-for-input pattern, where the AI asks me for input before proceeding to the next steps of the task. I will tell it: "I will tell you which person needs the story." Then I will use the cognitive verifier pattern, asking the AI to ask me subdivided questions related to the main task. But first I have to define the task: "Your task is to write the best, most engaging story for a person." So the prompt now contains three patterns: the persona pattern assigning a specific role, then the task definition, then the ask-me-for-input pattern ("I will tell you which person needs the story"), and finally the cognitive verifier pattern ("then ask me subdivided questions related to the main task, which will help you generate the best overall output"). What happens is this: first the AI adopts the role of a creative writer with five years of experience; then it reads the main task; then it asks me which person the story is for; and after that it asks the subdivided questions about the related details. Let's see the output.
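The composition described above can be sketched in code. This is a minimal, hypothetical helper (the function name and exact wording are my own, not part of the course) that assembles the three pattern statements into one prompt string, in the same order the lesson uses:

```python
# Illustrative sketch: composing one prompt from three patterns.
# The helper name and wording are assumptions, not an official API.

def build_story_prompt() -> str:
    persona = (
        "Act as a creative story writer with five years of "
        "experience in crafting fun, engaging stories."
    )
    task = "Your task is to write the best, most engaging story for a person."
    ask_for_input = "I will tell you which person needs the story."
    cognitive_verifier = (
        "After I provide that input, ask me subdivided questions related "
        "to the main task that help you generate the best overall output."
    )
    # Order matters: role first, then task, then the input/question flow.
    return "\n".join([persona, task, ask_for_input, cognitive_verifier])

prompt = build_story_prompt()
print(prompt)
```

The point of the ordering is that the model reads the role before the task, and the input-gathering instructions last, so its first action is to ask you for input.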
You can refine the prompt further. I realized I missed something, so I add: "Now ask me for which person needs the story." This is the most important part: after defining the task, you instruct the AI that you will supply the person, and then you add the fundamental contextual statement of the ask-me-for-input pattern that we learned earlier ("ask me for input X"), placed at the end of the prompt. If you don't recall it, go back and review. After that I write: "After I provide the input, ask me subdivided questions." Now the task is clear. Because these statements sit at the end of the prompt, the AI's first step will be to ask for that input. Let's see the output: "Great! For which year of the person's life do you need the story? Once I know the year, I will ask questions to gather the details needed for crafting the best story." A beautiful response, right? I will say the person's age is 45 years. "Got it. For someone at 45 years of age, here are some questions to tailor the story." It asks a list of questions, and when I provide the answers, it will generate the best story based on them. I will answer just two or three of them to get the story. What are the person's key personality traits? Adventurous, humorous. Let's take humorous.
What hobbies or interests define their life? Let's say exercise. What is their profession or main occupation? A teacher. Are they facing any significant life events, such as career changes or family milestones? Let's take a family milestone. One more: do they have a notable achievement or dream they are pursuing at this age? No, I'll skip that one. Should the story take place in a realistic or imaginative setting? Realistic. I provide these answers, and let's see the output. Notice that it asks again, with more specific questions related to my answers, to get the best output. That's why this prompt pattern is so effective: it drills down more and more specifically to minimize errors, and that's also why prompt engineering is so worth learning. So it asks follow-up questions about my answers: is the person a fitness enthusiast just starting a routine? Just starting. Are there specific types of exercise they enjoy? Yoga. What subject or age group does the person teach? Do they have any memorable students or funny teaching moments that could inspire the story? No. What milestone is significant at this point, a child's graduation or a family trip? Let's say a family trip. I provide these answers to the specific questions, and it summarizes the details and says, "Now let me confirm a few final details," asking a few more questions that I have to answer. So you can see this prompt pattern keeps getting as specific as possible to produce the best story. Why?
Because I only answered some of the questions. If I had answered all of them, it would not need to keep asking, since it would already have enough information to produce the best output. I only provided two or four answers, which is why it keeps asking specific follow-ups related to them. Again it asks: is the person struggling with yoga poses? No. What type of teacher, high school or elementary math? Elementary math. Do they bring humor to their teaching? Yes. Where is the family trip, beach or mountains? Let's say mountains. Any memorable funny moments? No. Now it generates the story: "At 45, Mr. Kamar was many things: a veteran elementary math teacher, a self-proclaimed comedian, and now, reluctantly, a beginner yogi." You can see the output story for that particular person, generated from the information we supplied. This is just a simple example; when you practice with your own examples you will gain much better insight. I recommend practicing this prompt pattern more than the others, because it can solve most of your problems: it asks you for the details it needs to come up with the best output on top of your foundational data. Now, as I said, we would use all four prompt patterns, but so far this prompt uses only three; we left out the question refinement pattern.
Right, and as I said, the question refinement prompt pattern works by suggesting a better version of our input, our prompt, our paragraph, whatever we give it. Why? Because the model is trained on well-structured, advanced English. So I will click the pencil (edit) button, wrap the prompt in quotation marks, and write: "Suggest to me a better version of this given prompt." You can see it suggests an improved version: "Act as an experienced creative writer... crafting an engaging, captivating story for a specific year in a person's life. First, ask me which year the story should focus on. After I provide the year, ask subdivided questions about the story details," and so on, with much better sentence construction. Now compare the two prompts: which one looks more professional? I think the refined one does, because the AI is better at writing, at combining English words in a specific, effective way. That's why we use the question refinement pattern, as we saw in earlier examples. In this section we have discussed four prompt patterns that are foundational; most tasks can be solved with them. I hope you understand these prompt patterns very well. Let's go to our next prompt pattern, the outline expansion pattern. Let's dive into that. 24. 4.2.5 Outline Expansion Prompt Pattern: Welcome back. In this lesson we will see what the outline expansion prompt pattern actually is and how to write it.
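The refinement step described above is simple enough to capture in a tiny sketch. This is an illustrative helper (the function name is my own assumption) showing how a draft prompt is wrapped in a request for a better version, with quotation marks so the model treats the draft as text to improve rather than an instruction to follow:

```python
# Hypothetical sketch of the question refinement pattern.
# Wrapping any draft prompt in a request for an improved version.

def refine(draft_prompt: str) -> str:
    # Quote the draft so the model improves it instead of executing it.
    return f'Suggest to me a better version of this given prompt: "{draft_prompt}"'

request = refine("Act as a story writer and write a fun story for a person.")
print(request)
```

You would send `request` to the model and then compare its suggestion against your original draft, as the lesson does.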
An outline is simple to understand: if you have ever read a textbook or ebook, at the very start you see a table of contents listing the topics and subtopics the book contains. That is an outline. And expansion means taking that basic outline and expanding it to its full potential. With this prompt pattern we can go deeper and deeper into a particular topic, getting deeper insights and the best output. That is what the outline expansion pattern is about. To write it, we follow five steps: initial prompt setup, generating a bullet-point outline, interactive expansion, iterative exploration, and final output. Initial prompt setup means writing a prompt that guides the AI to perform the particular task; obviously, after we give the prompt, it generates output, and in this case the AI will generate only a bullet-point outline. In interactive expansion, we tell the AI to expand a particular subtopic, and it creates another outline related to the subtopic we point at. Simple. Iterative exploration means doing this again and again, as many times as you like, effectively without limit: you can keep generating outlines by taking any bullet point as the new main topic. Don't be confused, we will see it in ChatGPT shortly. You repeat until you are satisfied. After that comes the final output.
When you feel you have the best output, you can stop and take the final result. This pattern is ideal when you are writing an ebook or a document on a topic: it helps you get deeper content related to that topic. Before trying it in ChatGPT, let's look at an example prompt: "Act as an outline expander. Generate a bullet point outline based on the input that I give you." Notice I have used the persona pattern here ("act as an outline expander") to direct the AI to a specific task. Then I defined the task: generate a bullet-point outline based on the input I give, and then ask me which bullet point to expand on. If you look closely, I have also used the ask-me-for-input pattern ("the input that I give you"), which we discussed in depth earlier; I hope you recall it. I also defined guidelines for how the output should look: each bullet point can have at most three to five sub-bullets; the bullets should be numbered using a consistent pattern; create a new outline for the bullet point that I select; and at the end, ask me which bullet point to expand next and what to outline. Again, if you recall the ask-me-for-input pattern, this will make sense. This is a simple use case of the pattern. Let me copy it and see in ChatGPT how it actually works. I jump into ChatGPT, paste the prompt, clear the old conversation, and here is the output.
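The outline expander prompt just described can be assembled programmatically. This is a minimal sketch under my own assumptions (the function and parameter names are illustrative) that stitches the persona, task, constraints, and ask-me-for-input statements together:

```python
# Illustrative sketch of the outline expander prompt from the lesson.
# Function and parameter names are assumptions, not part of the course.

def outline_expander_prompt(max_sub_bullets: int = 5) -> str:
    return (
        "Act as an outline expander. "
        "Generate a bullet point outline based on the input that I give you, "
        "and then ask me for which bullet point you should expand on. "
        f"Each bullet point can have at most {max_sub_bullets} sub-bullets. "
        "The bullets should be numbered using the pattern 1, 1.1, 1.2, and so on. "
        "Create a new outline for the bullet point that I select. "
        "At the end, ask me for what bullet point to expand next."
    )

print(outline_expander_prompt())
```

Parameterizing the constraint (here, the sub-bullet limit) makes it easy to tune the outline's density without rewriting the whole prompt.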
It asks me: "Please tell me the topic or input you would like me to create an outline for." I will take advertising and marketing, and it generates an outline for the topic I gave it. (It was taking a while, so I stopped and sent it again; it asked for the topic once more and I gave "advertising.") Now you can see the topic outline. The outline is good, but look at the bullet points: in a textbook or ebook, the contents follow a structured numbering format such as 1.1, 1.2, 1.3. To get that, we have to guide the AI. We don't change the main prompt; I just add a structure instruction: "Follow the structure below to generate the outline: use 1 for the main topic; for subtopics, use 1.1, 1.2, 1.3." The AI will think: OK, I have to generate the outline for the given topic in that format. The output depends on your instructions and on your ability to guide the AI toward what you want. I hope you understand. I provide "advertising and marketing" again as the input, and it generates the outline. Hmm, the numbering came out slightly wrong; no problem, sometimes the AI makes mistakes and we have to guide it again toward the output we want. I click regenerate so it follows our instructions, provide "advertising and marketing" again, and let's see the output now.
Now it generates line by line: "Advertising and Marketing Overview, 1.1 Definition, 1.2 ...". This is the output we were looking for. How do we make the sub-points appear as 1.1.1, 1.1.2, 1.1.3? For that we write the structure with indentation: the main topic in one format, and under 1.1 come its subtopics. Let's see whether these instructions work; it is all about writing and interacting with the AI, and you gain experience in how the AI "thinks" and how mistakes get fixed. Now compare: after I paste the new instruction, it expands only 1.1, 1.2, 1.3, whereas before it was spilling into 2, 3, 4, 5, even 6 at once. This is much better. The AI then asks: "Which bullet point would you like me to expand on?" If I write 1.1, it generates the sub-bullets of that subtopic: 1.1.1, 1.1.2, and so on. If I choose 1.1.5, it generates the sub-points of that selected topic: 1.1.5.1, and so on. This can continue effectively without limit, so you can extract deeper and deeper insights from the AI to write the best content for your next ebook or anything else. That is what makes this pattern so powerful: you keep going deeper. Now suppose I have enough sub-bullets and want the actual content, the information behind a sub-point; in this case I'll take "brand awareness and recognition."
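The numbering scheme the lesson asks the model to follow is mechanical, which is worth seeing explicitly. This small illustrative helper (my own, not from the course) shows how expanding any selected bullet simply appends one more index level:

```python
# Illustrative helper for the 1 -> 1.1 -> 1.1.1 numbering scheme
# the lesson instructs the model to follow when expanding a bullet.

def expand_numbering(selected: str, count: int) -> list[str]:
    # Expanding "1.1" with count=3 yields 1.1.1, 1.1.2, 1.1.3.
    return [f"{selected}.{i}" for i in range(1, count + 1)]

print(expand_numbering("1.1", 3))
print(expand_numbering("1.1.5", 2))
```

Spelling this scheme out in the prompt (rather than just saying "number the bullets") is what got the model to produce textbook-style contents above.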
To get the information about that topic, I simply tell the AI: "Explain brand awareness and recognition." Watch what it does: it explains what brand awareness and brand recognition actually are, then asks, "Would you like to expand on this topic further or discuss something else?" If you ask it to expand brand awareness, it will again expand the related topics in more depth. Following this flow, you go deeper into the fundamentals and get better output than with other prompt patterns. Note that sometimes, even when you ask it to explain, the AI will generate only outlines, because our initial prompt told it to generate outlines only; in other cases it will only keep expanding. When that happens, tell it: "Don't expand now. Just explain the topic," giving the topic title. Simple. Sometimes the AI makes mistakes, and as a prompt engineer you have to steer it back onto the right path, by giving a negative prompt or by pointing out that it is going wrong. It will apologize first ("Sorry, you are right, I went down the wrong path") and return to the main task. That is all there is to this amazing prompt pattern. I have shown you a basic, specific application: generating an outline for a topic for your ebook or document. But you can use it for solving any problem, even a mathematics problem.
You can use it to break a specific complex problem down to its root cause. If you have a problem in your project, or anywhere, write it in place of the input: the AI generates an outline, and you gain insight into where the actual problem is. You can then drill into that problem again and again, down to the root cause, as many levels as you like, until you find what needs fixing. This is a simple example, but if you interact with the AI using this pattern, you can do a lot more with it: you can even master a particular subject by learning its roots and fundamentals this way. That's why it is so powerful. One caution: don't rely 100% on the AI's information, because it can generate inaccuracies. That's why you should have some basic knowledge of the topic you are working on while interacting with the AI. If you know the fundamentals of marketing, you can use the AI effectively; if you don't, you may assume everything the AI generates is correct and that it cannot be wrong. Even ChatGPT itself shows the warning "ChatGPT can make mistakes. Check important info." Having domain knowledge is important to avoid misunderstandings or inaccurate information in the output. If you know marketing, you can judge when the AI's point is wrong, and if you tell it "this point is not part of marketing," it will even agree and correct itself.
For example, take the line "Importance: higher brand awareness increases the likelihood that consumers will choose that brand." That actually does belong under brand awareness. But what if I tell the AI, "This sentence is not related to brand awareness"? I am deliberately manipulating the AI: the statement is in fact correct, but I claim it is not, to test the model. Look at the response: "You are correct, the specific point should be clarified." What I am showing you is that no output from the AI is something it is 100% confident in; the AI values our input, because we may have subject knowledge. The AI is trained on a huge amount of general data. Think of it this way: a teacher who has mastered one particular subject knows it more deeply than a teacher who knows a little about every subject. The generalist is not confident about the fine details of any one subject; the specialist is confident their knowledge is correct. That's why I keep saying: don't rely entirely on AI content. Know the basics of the particular topic or task you are solving with AI, as this example shows. The "importance" point really is correct under brand awareness, but I told the AI it wasn't, just to test its capability, and the AI agreed with me anyway. You can see that in the output.
The AI is not 100% confident in any content it generates, because it assumes the person interacting with it may have knowledge about the topic, and it values our inputs. That's why AI is great if you know how to use it; otherwise, it can lead you badly astray. So this prompt pattern is very useful if you have some basic knowledge of the specific topic; otherwise it can give you inaccurate information. That's it: this is the outline expansion prompt pattern. I hope you understand. You will grasp it best by practicing it yourself across different applications: writing content, solving problems, and so on. Here is your assignment: write one prompt that contains the five different prompt patterns we have discussed so far. In my example I used three: outline expansion, persona, and ask-me-for-input. Two are missing: the cognitive verifier pattern and question refinement. So write one single prompt, for a specific task or a specific piece of content creation, that uses all five patterns. Try it yourself: you will learn the fundamentals of prompt design, how a prompt is structured and how to write it effectively, and you will build the skill. Without pushing beyond your current ability like this, you will never develop the skill that matches your potential. Do it yourself: recall the four earlier prompt patterns plus this one, and write a single prompt combining all five to solve a particular problem.
Do that and you will get the best output and become a good prompt engineer. I hope you understand. With this, part one is complete. Welcome to part two, where we cover five more prompt patterns. Let's dive into that. 25. 4.3.1 Advanced Prompt Patterns (Part 2) - 1. Tail Generation Prompt Pattern: Welcome back to advanced prompt patterns, part two. In this part we will look at five different prompt patterns that are important and easy to understand. The first is the tail generation prompt pattern. What does tail generation actually mean? To use this pattern, your prompt should end with the fundamental contextual statement: "At the end, repeat Y and ask me for X." In other words, at the end of the prompt, you tell the AI to repeat a particular thing, or to ask you to provide input. The "ask me for X" part should remind you of the ask-me-for-input pattern we discussed earlier; it plays the same role here. The key idea is simply this: at the end of the prompt, we instruct the AI to repeat a particular piece of output every time, or to ask us for input before proceeding with the next steps of the task. You replace Y with what the model should repeat (such as "repeat my list of options," or any task) and X with what it should ask for next: the action or input you need to supply, after which the AI proceeds with the task. These statements usually need to be at the end of the prompt, or next to last. I hope you understand. Let's see it in action.
Let's jump into ChatGPT and see how the tail generation prompt pattern works. I copied the prompt and pasted it: "From now on, at the end of your output, add this disclaimer: This output was generated by a large language model and may contain errors or inaccurate statements. All statements should be fact checked. Ask me for the first thing to write about." So I am telling the AI that from now on, at the end of each and every output, it should append that disclaimer: the exact statement I want attached to every response. "Fact checked" means the information should be factual, without inaccuracies; don't worry about the details, we will cover fact checking in an upcoming session. Then notice the tail generation part: "Ask me for the first thing to write about." I have told the AI to ask me to take action, to give it input, and after that it will proceed with the task. Looking at the prompt: "at the end of your output, add this disclaimer" is the first tail generation statement, and "ask me for the first thing to write about" is the second. Use these two together in a prompt and it becomes a tail generation prompt pattern.
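The tail generation prompt just walked through can be sketched as a reusable template. This is an illustrative helper (names and exact phrasing are my own assumptions) that pairs the repeat-Y statement (the disclaimer) with the ask-me-for-X statement:

```python
# Hypothetical sketch of the tail generation pattern: a prompt telling
# the model to append a fixed disclaimer to every answer, then ask for input.

DISCLAIMER = (
    "This output was generated by a large language model and may "
    "contain errors or inaccurate statements. All statements should "
    "be fact checked."
)

def tail_generation_prompt(tail: str = DISCLAIMER) -> str:
    return (
        "From now on, at the end of your output, add this disclaimer: "
        f"{tail} "
        "Ask me for the first thing to write about."
    )

print(tail_generation_prompt())
```

Because `tail` is a parameter, the same template works for any fixed ending, for example a "Presented by" byline instead of a disclaimer.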
Recall the template: "At the end, repeat Y and/or ask me for X." Here, "repeat Y" corresponds to "from now on, at the end of your output, add this disclaimer": for every output the AI generates, the disclaimer gets added, a repetitive process. And "ask me for X" corresponds to asking me to take an action or provide input: "ask me for the first thing to write about." I hope you understand. Let's see the output. As expected, ChatGPT asks us for a topic: "Got it. What should I write about first?" It asks because I told it to. And notice: "This output was generated by a large language model and may contain errors or inaccurate statements. All statements should be fact checked." It is appending the disclaimer to each and every output. That is tail generation: content added at the end of every output, following instructions we gave once rather than having to repeat them each time. Even if I now say "explain marketing in 50 words," you can see it generates the output (ChatGPT sometimes asks you to pick between two responses to help improve the model; it's a simple thing, I'll pick one), explains marketing in 50 words, and again appends my disclaimer at the end.
You can see it: "This output was generated by a large language model." The disclaimer is added to each and every output. Why? Because we guided the AI: from now on, at the end of your output, add this disclaimer. You can keep going like this — ask any question, any prompt — and it will automatically append the disclaimer to every output. And it doesn't have to be a disclaimer: you can show anything at the end of the output. For example, you could append "Presented by <name>" below every output, and the AI will generate output following whatever instruction you give. That's the simple idea of tail generation. I've shown a basic example; when you write a prompt for a specific application, you can use this pattern to display instructions, branding, or anything you want at the end of every output. You're not limited to the end, either: you could write "At the start of your output, add this welcome message," or "In the middle of your output, add the note: this article is published by <author>." It's up to you — the output depends on your instructions, so practice this prompt pattern yourself by writing different variations and making something productive. I hope you understand; this is an easy prompt pattern. Now, if you want to break this chain, just tell the AI to forget it. Let me try to break the chain with "Forget that" and a new request, and see what the output will be.
Let's see: "Forget that. Explain advertising in 20 words." If I'm right, it won't add the disclaimer anymore — but maybe it still will. Let's check. Yes, it still adds the disclaimer. So to break the chain, we have to tell the AI explicitly not to add it. You can use "from now on" or "forget about" — or combine both for more detailed instructions: "From now on, don't add the disclaimer." Let's see — now it generates only the 20-word advertising output, with no disclaimer. So it all comes down to your instructions: how you write them and what your requirements are, and the AI generates output based on that. You can write much deeper prompt patterns for your applications or for anything you want from the AI. Here's an assignment for you: take all the prompt patterns you've learned so far, combine them with this one, and create something useful. Such combinations can solve real, complex problems. By analyzing, rewriting, and iterating with the AI, you could even build a solution for a real problem in the market — and make money from it. I mean this literally: this skill can change your thinking and let you make a real impact. So use all the previous prompt patterns together with this one and write a single prompt that solves a specific problem or application. Practice that, and your prompt-writing ability will keep improving. Let's move on to our next prompt pattern, lesson 26.
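To make the behaviour concrete, here is a minimal plain-Python sketch of the tail generation idea — no real LLM involved; the function name `respond` and the flag `add_tail` are my own inventions, not part of the lesson. Every simulated "output" gets the same disclaimer appended until the chain is broken:

```python
# Sketch only: simulates the tail generation pattern without an actual LLM.
DISCLAIMER = ("This output was generated by a large language model "
              "and may contain errors or inaccurate statements. "
              "All statements should be fact checked.")

def respond(answer: str, add_tail: bool = True) -> str:
    """Append the disclaimer tail unless the chain has been broken."""
    return f"{answer}\n\n{DISCLAIMER}" if add_tail else answer

print(respond("Marketing is the activity of promoting products..."))
print(respond("Advertising is paid promotion...", add_tail=False))  # chain broken
```

Setting `add_tail=False` plays the role of the "from now on, don't add the disclaimer" instruction that breaks the chain.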
4.3.2.1 Semantic Filter Prompt Pattern - Part 1: Welcome back. Let's see what the semantic filter prompt pattern is about. Filtering means removing words, information, or data — anything repetitive, or anything else you specify. If you've used Google Docs or a similar app, you know the find-and-replace option: you find something in the document and replace it with whatever you want. This pattern works similarly, and it's simple. The fundamental contextual statement you have to use is: "Filter this information to remove X," where X is a definition of whatever you want to remove. It's a great time-saver: instead of hunting down each occurrence yourself, the AI filters the information and, based on your requirements, removes or replaces whatever you want. Looking at the example: you will need to replace X with an appropriate definition of what you want to remove, such as names, dates, or costs greater than $100 — anything like that. To understand this better, let's go to ChatGPT and try the semantic filter prompt pattern in practice. As we discussed, semantic filtering means filtering information, so we have to guide the AI to filter particular information. We can provide the information in the prompt itself, or — using the patterns we covered earlier — tell the AI: "I will tell you which information to filter; ask me for the information."
Then it will ask. You could also combine this with the persona pattern — "Act as an advanced filter" — but filtering is such a basic function that you don't really need a persona here; use one if you like, there's no problem with that. You can also use the ask-for-input prompt pattern: "I will tell you which information to filter; just ask me for it." That's especially useful if you're building an application and want input from the user, like a variable. For this example, I'll write the prompt directly. Instead of "remove," we can use "filter" — or functions like "remove" — so: "Filter the daily expenses costing greater than $10 in the following list of my daily expenses." You can phrase this better if you're good at writing; I'm just showing the idea. My daily expenses: breakfast, let's say $8; lunch, $13; dinner, $7. In reality the numbers would be higher, but I'm keeping them small so it's easy to follow. So I'm guiding the AI: "Filter the daily expenses costing greater than $10 in the following: breakfast $8, lunch $13, dinner $7." What will the AI generate?
The AI thinks: filter — okay, I'll filter; but in which sense, show the matching items or remove them? With "filter" by itself, it picks out the items that match the condition. Since lunch at $13 is greater than $10, the AI pulls the $13 lunch out of my daily expenses. And indeed, the output says: your daily expense greater than $10 is lunch, $13. So when you use "filter" directly, it shows only what you asked it to filter — just the matching item; the other two, breakfast and dinner, don't appear. Now, what if I instead tell the AI: "Remove the daily expenses costing greater than $10"? That's the other side of filtering. This time it generates breakfast and dinner and deletes lunch — you can see it in the example: "To remove the daily expenses greater than $10, the lunch expense of $13 will be removed. Your updated daily expenses are: breakfast $8, dinner $7." That's the difference between using "filter" directly and using "remove." Both come under the semantic filter pattern, but "filter" shows you only the matching items, while "remove" gives you everything that remains after deleting them.
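The two behaviours just described can be sketched in plain Python — no LLM, just the same data; the function names `filter_expenses` and `remove_expenses` are mine, used only to mirror the two phrasings:

```python
# Sketch of the lesson's expenses example: "filter" shows only the matching
# items, while "remove" keeps everything except the matching items.
expenses = {"breakfast": 8, "lunch": 13, "dinner": 7}

def filter_expenses(items: dict, threshold: int = 10) -> dict:
    """Return only the expenses greater than the threshold (what 'filter' showed)."""
    return {name: cost for name, cost in items.items() if cost > threshold}

def remove_expenses(items: dict, threshold: int = 10) -> dict:
    """Return the expenses left after removing those above the threshold."""
    return {name: cost for name, cost in items.items() if cost <= threshold}

print(filter_expenses(expenses))  # {'lunch': 13}
print(remove_expenses(expenses))  # {'breakfast': 8, 'dinner': 7}
```

The same distinction applies when you phrase the prompt with "filter" versus "remove" in ChatGPT.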
If you use a filter function such as "remove," it generates the full remaining output, which you can easily check against what you intended. That's one type of example. 27. 4.3.2.2 Semantic Filter Prompt Pattern - Part 2: Here's another example; I'll copy and paste the prompt: "Filter the following text to remove any personally identifying information or information that could potentially be used to re-identify the person." The text is: "John Smith lives at 123 Maple Street, Springfield. He works at TechCorp and can be reached at [his email address]." Personally identifying information means your name, phone number, email, or any other personal data — or any information that could be used to re-identify the person. So I'm telling the AI: remove the personal data from the following text using the filter. Now, an important point. As I said earlier, when you use "filter" by itself, in some cases it only shows you the filtered-out items, whereas using "remove" in the expenses example gave me the cleaned remainder. In this case, notice the prompt says "filter the following text to remove..." — I'm using both "filter" and "remove" together, so I get the full cleaned text as the output.
You can see there's no personal data left in this output, because I guided the AI to remove any personally identifying information — name, email, and so on: "Someone lives at [address] in Springfield, they work at [company] and can be reached via [email]." Now, suppose I only say "Filter any personally identifying information, or information that could potentially be used to re-identify the person, in the following text" — without the word "remove." Let's see the output. Here, it focuses on what it filtered: "John Smith," the address, the email — the items identified for removal — rather than simply giving the cleaned text. When I use only the filter option, it tends to show the information being filtered out of the text; when I use filter and remove together, I get the cleaned output we saw above. That's the important lesson: while writing the prompt, we have to pay attention to each and every word so the output comes out the way we want. The skill of predicting the output from the prompt itself only comes with practice: when you practice across different scenarios, you develop an intuition for what output a given prompt will produce. You gain that from experience — which is why you have to keep refining your prompts.
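For intuition, here is a rough offline sketch of what the "remove personally identifying information" filter is doing, using regular expressions on the lesson's example sentence. This is only a toy: real PII detection needs far more than a few regexes, and the patterns and placeholder labels here are my own assumptions:

```python
import re

# Toy PII redaction mirroring the lesson's example text (not a real PII tool).
TEXT = ("John Smith lives at 123 Maple Street, Springfield. "
        "He works at TechCorp and can be reached at john.smith@example.com.")

def redact_pii(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.\w+", "[email]", text)  # email addresses
    text = re.sub(r"\d+\s+\w+\s+Street", "[address]", text)  # simple street addresses
    text = re.sub(r"John Smith", "[name]", text)             # a known name (toy example)
    return text

print(redact_pii(TEXT))
```

An LLM does this semantically rather than by pattern matching, which is exactly why the prompt's wording ("filter" vs. "remove") steers what you get back.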
You have to change your prompts, adjust them, and analyze how each prompt generates output and how the output can be improved by tweaking the wording. So, to summarize what we've seen: if you use "filter" alone, without a function like remove or add, it tends to show only the filtered items — what is being filtered out — instead of the cleaned output we're looking for. If you add a function such as "remove," it generates the main cleaned output. I hope that's clear from these two examples. You can apply this to any kind of information — for example, filtering out repeated information. Let me take one more example. I'll write: "Filter the following sentence to remove the repetitive information" (redundant means repetitive) — and then give the content in quotation marks. I'll use a simple sentence to explain it clearly: "Hi. How are you? I am fine. And you? How are you?"
What's the repeated information here? "How are you?" appears twice, so it should remove the duplicate. Let's see the output: "The filtered version of this sentence is simply: Hi. How are you? I am fine. And you?" Simple. So if you do content writing or maintain documents, where you have to proofread and make adjustments, you can use this semantic filter pattern to strip repetitive words, filler, or any unwanted words. This filter prompt pattern will help you proofread your document, article, or eBook — anything you've written yourself. Just copy your text, paste it in, and tell the AI to filter the paragraph and remove unwanted, repetitive, or wasted words, which will improve your content. That's why the semantic filter pattern is so helpful. And it's simple: you can combine this filter with any other prompt pattern — question refinement, anything — wherever you need it. I'm just showing you how it works; use it however your requirements demand. Again: practicing is the best way to learn prompt engineering. Use all the prompt patterns, including this one, and see how you can solve a specific problem. If writing these prompt patterns leads you to a new idea, and you have good problem-solving ability, you can build a solution online — sell it as a SaaS, an Android app, or an iOS app.
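The repeated-phrase example above can be sketched deterministically in Python — keep the first occurrence of each phrase and drop later duplicates. The splitting trick and the name `dedupe_phrases` are my own simplifications; an LLM handles this far more flexibly:

```python
# Sketch: remove repeated phrases from a sentence, keeping first occurrences.
def dedupe_phrases(text: str) -> str:
    """Keep only the first occurrence of each phrase (case-insensitive)."""
    seen = set()
    kept = []
    # Mark phrase boundaries after '.' and '?' so we can split on them.
    for phrase in text.replace("?", "?|").replace(".", ".|").split("|"):
        key = phrase.strip().lower()
        if key and key not in seen:
            seen.add(key)
            kept.append(phrase.strip())
    return " ".join(kept)

print(dedupe_phrases("Hi. How are you? I am fine. And you? How are you?"))
# → "Hi. How are you? I am fine. And you?"
```

The LLM's "semantic" advantage is that it can also catch rephrased repetition ("How are you?" vs. "How are you doing?"), which this literal version cannot.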
That's the most important thing: prompt engineering skills are worth little if your mind isn't open to trying things with AI. You need to become great at interacting with AI; if you know how to interact with AI effectively, it can take you beyond your potential — you can make so many things with these skills. Don't rely only on jobs. Yes, it's great to get a job as a prompt engineer, but prompt engineering isn't only about jobs: it can help you build solutions for companies, or for yourself, solving real problems that many people have, by using AI. There's much more you can do with these prompt patterns and the knowledge of interacting with AI. I hope you understand the semantic filter pattern — it's a simple one, and I hope I explained it well. After this course, please leave a rating and feedback so I know you learned something for the price you paid. Now let's jump into our third prompt pattern, menu actions, and learn how it works. It's a good one — let's dive in. 28. 4.3.3 Menu Actions Prompt Pattern: Okay, let's look at prompt pattern number three, the menu actions prompt pattern. Consider the name: menu actions. A menu is a set of options — if you go to a restaurant, you'll see a prepared chart, the menu, listing dishes with their prices. Actions means performing particular tasks: creating, solving, updating — all of these are actions.
So menu actions means a set of instructions that are executed by our input: our typed commands trigger the corresponding actions. The best analogy is a to-do list app. If you've used one, you'll understand this pattern easily: you create a to-do list, name it, set dates, and add your routines — what to do today, tomorrow, this week. Menu actions works the same way. From the slide: to use this pattern, your prompt should make the following fundamental contextual statements. "Whenever I type X, you will do Y." Optionally, provide additional menu items: "Whenever I type Z, you will do Q" — you can add more instructions based on your application. And finally: "At the end, you will ask me for the next action." That last part is very important. As we discussed with the semantic filter, tail generation, and ask-for-input patterns, we often add something at the end of the prompt; here, "at the end, you will ask me for the next action" means that after each output, the AI will ask what to do next. It's easiest to understand by doing it, so let's go into ChatGPT and see how the menu actions prompt pattern actually works. I've already written a task; I'll copy and paste it so you can see how I defined it for the AI.
If you observe it closely, it works like a to-do list app where you list, update, and delete your daily routines. Look at the prompt: "Whenever I type add task, you will add the task to my to-do list." Compare that with the pattern — whenever I type X, you will do Y: I give an instruction, and the AI performs the corresponding action, adding a task to my to-do list. Likewise: "Whenever I type remove task, you will remove the task from my to-do list." I'm guiding the AI: when I give a command, perform the particular action I defined for it. This is exactly how a to-do list app works, and you can do much more. Think about it: before AI tools like ChatGPT, building this kind of application — the to-do apps you find on Google Play — required a programming language. You had to know how to code. But with AI chatbots like ChatGPT, you just write the task in words. Instead of writing lines of Python or any other code, you express the task in your own language, and the AI does it. Interesting, right? With just prompt-writing skill, you can prototype your own application's behaviour without coding. That's what makes this so powerful — if you learn how to write the prompts for your applications.
You can build better, more advanced applications even if you don't know how to code. Yes, to build the full UI you'd still need something — you could use a low-code tool — but that's another topic. Back to menu actions. It works like a simple to-do list app, so let's see the output. As expected, it asks: "Got it, your to-do system is set up. What's your first action?" I type: "add task: Book a meeting with my US client at 5:00 AM." I've told the AI this is a task, so it adds it to my to-do list: "The task has been added. What is your next action?" I add another: "add task: Going to the office at 11:00 AM Monday." It takes a moment, but again: "The task has been added. What is your next action?" Now I type "show my to-do list," and the AI displays the whole list — the two tasks I added: booking a meeting with the US client, and going to the office. Then I can write "remove task 1" — I can reference the full task text or just the task number, because the AI has the whole conversation context from the start and knows what I'm referring to. It removes that task and shows the updated to-do list: "Task 1 has been removed from your to-do list."
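The command-to-action mapping the AI is following can be sketched as a tiny Python dispatcher — no LLM, just the same menu of actions from the walkthrough; the function name `handle` and the command strings are my own stand-ins:

```python
# Sketch of the menu actions pattern: each typed command triggers a predefined action.
todo: list[str] = []

def handle(command: str, arg: str = "") -> str:
    """Dispatch a menu command, then (like the pattern) prompt for the next action."""
    if command == "add task":
        todo.append(arg)
        return "The task has been added. What is your next action?"
    if command == "remove task":
        removed = todo.pop(int(arg) - 1)  # arg is the task number, as in the walkthrough
        return f"'{removed}' has been removed. What is your next action?"
    if command == "show list":
        return "\n".join(f"{i}. {t}" for i, t in enumerate(todo, 1))
    return "Unknown action."

print(handle("add task", "Book a meeting with my US client at 5:00 AM"))
print(handle("add task", "Go to the office at 11:00 AM Monday"))
print(handle("show list"))
print(handle("remove task", "1"))
```

The point of the pattern is that ChatGPT plays the role of this dispatcher for you, defined entirely in natural language instead of code.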
Like that, you can add as many different instructions and requirements as you want, depending on what application you're trying to make. That's menu actions. You could even build a budget tracker with instructions like: "Whenever I type add expense, you will add the expense to a particular section; whenever I type remove expense, you will remove it from my daily expenses." If you want ideas, go to the Play Store and download some productivity apps — a budget tracker or a to-do list app — and study how they work. After checking each button and each page in the app, come back to ChatGPT and write the corresponding instruction for each one. For example, where an app has a Create button that opens a new page for listing your tasks, here you'd write: "Whenever I type A, you will open a new section" — words instead of buttons and code. So you can imagine the possibilities; play with ChatGPT however you want, open up your mindset, and try different things. AI can do a great deal — not everything a human can, but it becomes far more powerful when you use the technology effectively. And how do we use it effectively? Through prompting — that's where the main role of prompt engineering comes in. That's why learning these prompt patterns and practicing them on different tasks and applications will make you a better prompt engineer. I hope you understand this pattern: with menu actions you define instructions so the AI behaves exactly the way you want. Simple.
I hope you understand this prompt pattern — it's very easy. Let's move on to our next prompt pattern, the fact check list prompt pattern, which is very important for identifying accuracy and inaccuracy in the output. Let's dive in. 29. 4.3.4 Fact Check List Prompt Pattern: Welcome back to our fourth prompt pattern, the fact check list prompt pattern. What does fact check list mean? A fact is factual information — correct, verifiable information. A checklist means we check those facts in list form. Simple: a fact check list. Remember our discussion of large language models: AI is trained on large amounts of data, and it can generate mistakes — inaccurate data in the output. AI is not 100% right; it makes mistakes. So we have to verify the output, and we can only verify it when we have some knowledge about the topic or task we're asking the AI to handle. With the fact check list pattern, we tell the AI to generate the set of facts contained in the output, separated out: it first generates the output for our prompt, then at the end it lists the facts underlying it. I hope that's clear. From the slide: to use this pattern, your prompt should make the following fundamental contextual statements. "Whenever you output text, generate a set of facts that are contained in the output. The set of facts should be inserted at the end of the output. The set of facts should be the fundamental facts..."
"...that could undermine the veracity of the output if any of them are incorrect." Fundamental means basic, root-level facts. Why do we use the fact check list pattern? To verify whether the output is correct. We don't rely 100% on AI output: it makes mistakes and can contain inaccurate data. As prompt engineers, we have to check whether the output contains correct information, and this pattern is how: the AI separates out the facts from the output and shows them to us for verification. If the facts hold up, we can conclude the output is relevant to our task and reasonably accurate. But we can only verify those facts when we have knowledge of the topic. That's why prompt engineering works best when you have domain knowledge. For example, it matters enormously in healthcare: if you work as a prompt engineer in the healthcare industry, health-related content is critical, and you have to check the AI's output many times over, because it can make mistakes. And note that even doctors don't know every function of the body — they specialize, in heart surgery or kidney surgery, for instance. So suppose I'm an expert in heart surgery, and as a doctor I tell the AI: generate content related to the heart.
It generates the output automatically. Since I have knowledge and experience of heart operations — I know what the heart is and how it functions — I can check whether the AI's output is correct. But instead of proofreading hundreds of lines of output, I can just grab the factual points that matter: the facts without which the content has no value. The pattern makes the AI separate out those fundamental facts and show them at the end of the output, and from them I can verify whether the output is correct. Simple. The fact check list is important for every industry: never rely 100% on AI; you need some basic knowledge of whatever topic you're getting content about. Let's make this concrete by writing the prompt in ChatGPT. I've already written a prompt; I'll paste it here: "Write a brief summary of the causes of climate change. At the end of the output, generate a set of fundamental facts contained in the output." That's the fact check list pattern as we discussed. You could also put the facts at the start of the output, or in the middle — whatever you like; you just have to write the instructions.
You can change any prompt pattern to suit your requirements; there are no limitations. I'm explaining how each prompt pattern works and how the AI thinks, that's it. You can do much more with these prompt patterns. You can see here, I told the AI: "Write a brief summary of the causes of climate change. At the end of the output, generate a set of fundamental facts contained in the output." Fundamental facts means the root points of the output. "These facts should be fundamental to the summary and inserted at the end of the text. Ensure accuracy, as incorrect facts would undermine the validity of the output." So what am I telling the AI? First I define the task. Then I add the fact checklist instruction to generate facts about the output. Then I tell the AI why I'm using it: to ensure accuracy, because incorrect facts would undermine the validity of the output. I hope you understand. Let's see the output. You can see here: this is a summary of the causes of climate change. After that, it has generated the fundamental facts. What are they? You can see: "Human activities are the primary cause of climate change," "Burning fossil fuels releases significant amounts of CO2, a major greenhouse gas." All these points are taken from the summary above. You can see in the summary, "Climate change is primarily driven by human activities that increase the concentration of greenhouse gases in the atmosphere," and in the facts, "Human activities are the primary cause of climate change." These are the facts extracted from this output. So instead of verifying the whole output, I just look at the facts.
If these facts are correct, we can say this output contains accurate data. Not 100%, but we can say the output is good. Instead of verifying two or ten paragraphs, we just tell the AI to separate the fundamental facts at the end of the output, so we can proofread and verify the facts instead of everything. These points are the fundamental facts of this output, so it's easy to read and verify. Based on these facts, we can say whether the output is good and accurate. You can use this fact checklist for different applications, topics, and tasks to make it easy to proofread and verify AI-generated output. Again, I'm telling you, this fact checklist is very important for every output you get from AI. We cannot simply rely on the AI's output; you have to verify it, cross-check with other LLMs or with factual data online. Only after that can you say it is correct, or make some adjustments to the output, because AI is not 100% correct. I hope you understand this fact checklist prompt pattern. You can also go further. You can see here it asks, "Would you like any adjustments or expansions on this summary?" For example, if I ask the AI to expand on a point, it suggests things; you can see here, "Deforestation reduces the ability of forests to absorb CO2," and it explains that point. Then I can tell the AI, "Add fundamental facts for the above topic at the end." What will it generate? Some fundamental facts about that topic, deforestation and climate change, such as "Deforestation reduces the ability of forests to absorb CO2."
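As a minimal sketch of this pattern (the `build_fact_checklist_prompt` helper and its exact wording are my own, not a fixed template from any library), the fact checklist instruction can be wrapped in a small function so the same checklist is appended to any task:

```python
def build_fact_checklist_prompt(task: str) -> str:
    """Append the fact checklist instruction to any task prompt.

    The wording follows the pattern discussed above: ask the model to
    list the fundamental facts its answer depends on, so a human
    reviewer can verify those few facts instead of proofreading the
    whole output.
    """
    instruction = (
        "At the end of the output, generate a set of fundamental facts "
        "contained in the output. These facts should be fundamental to "
        "the answer; an incorrect fact would undermine its validity."
    )
    return f"{task}\n\n{instruction}"

# Example: the climate-change prompt used in the lesson.
prompt = build_fact_checklist_prompt(
    "Write a brief summary of the causes of climate change."
)
```

The helper keeps the task and the verification instruction separate, which makes it easy to move the checklist to the start or middle of the prompt if you prefer.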
So for this expanded summary, it adds fact points that I can easily verify, and I can say whether the output is correct based on those fundamental facts. This makes AI output easy to read, verify, and proofread. I hope you understand. You can use this in many ways. And remember, once again: for every output you generate from AI, please use this prompt pattern, because you need to verify the output before taking it into consideration. Okay? Let's see another prompt pattern that is very important and very easy to learn: chain of thought, which matters for reasoning and for solving multi-step tasks. Let's dive into that. 30. 4.3.5 Chain of Thought Prompt Pattern: Welcome back, guys. Let's discuss our last prompt pattern: chain of thought. The "chain" means step-by-step reasoning, solving a complex task through a step-by-step process. What is meant by chain of thought? A prompt designed to guide the AI through a step-by-step reasoning process before arriving at the final answer. Think of mathematics: a problem is solved step by step; the solution contains step one, step two, a calculation, some algebra, and so on. And not only mathematics: physics problems, engineering problems, any kind of problem solving. The step-by-step process helps us get an accurate final answer. We get two benefits from this prompt pattern. Number one, with a step-by-step process the output is well structured: instead of paragraphs of text, we get the final answer laid out in numbered steps rather than a wall of words.
And another benefit: we can check every step. We can also learn how the problem is actually solved, how many steps there are, and verify each one. From that, we learn the art of problem solving too. That's why chain of thought plays a major role in prompt engineering: this pattern helps the AI solve a math problem, or any problem, in a step-by-step reasoning format, so we can tackle complex tasks by verifying each step clearly. Why use it? It is ideal for complex problems requiring logical thinking or multi-step solutions. By prompting the AI to "think out loud," you often get more accurate and insightful responses. As we discussed, some problems require logical thinking or multi-step solutions; the best example is solving mathematics problems. With step-by-step reasoning, the AI generates output in a better, more accurate format. Let's look at this prompt pattern in more depth with an example in ChatGPT. So I am in ChatGPT, and I have written a basic prompt. I'll copy and paste it here. You can see: "You are solving a math problem. A train travels at 60 km/h for 2 hours, and then at 80 km/h for 3 hours. What is the total distance traveled? Break down your reasoning step by step before providing the final answer." I used the chain of thought statement at the end of this prompt: "Break down your reasoning step by step before providing the final answer." This is the most important instruction to use when you're solving any problem. With it, the AI generates output in step-by-step format.
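The train example can also be checked by hand, which is exactly what the step-by-step output lets you do. A small sketch (the `append_chain_of_thought` helper name is mine, purely for illustration):

```python
def append_chain_of_thought(task: str) -> str:
    """Add the chain-of-thought instruction to a problem prompt."""
    return (
        task
        + "\n\nBreak down your reasoning step by step "
        "before providing the final answer."
    )

# The steps we expect the model to produce, verified directly:
leg1 = 60 * 2        # 60 km/h for 2 hours  -> 120 km
leg2 = 80 * 3        # 80 km/h for 3 hours  -> 240 km
total = leg1 + leg2  # total distance       -> 360 km
```

Because each step is explicit, a wrong intermediate value (say, a bad speed-times-time multiplication) is immediately visible, which is the whole point of the pattern.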
The output is in step-by-step format, so we can verify each step, both to learn from it and to check it. You can see here, I told the AI, "You are solving a math problem," and gave it a simple, basic problem. I didn't provide any equation, algebra, or polynomials; it's a simple one, so the AI will generate an answer with step-by-step reasoning and a final result. You can combine this with all the prompt patterns we have learned so far: semantic filter, fact checklist, tail generation, ask-for-input, persona, question refinement, cognitive verifier. You can use all of those patterns on this simple question too. It's all about how you interact with the AI and how capable you are at writing prompts that guide it effectively. That is what prompt engineering is: building specific applications by writing prompts, by prompting skill, instead of writing code. I hope you understand. For example, I could start with "Act as an experienced math problem solver," then "You are solving a math problem now." Or, instead of writing the question directly, you can use the ask-for-input pattern: "I will tell you the math problem. You have to solve the problem with step-by-step reasoning before providing the final answer. Now, ask me what problem you need to solve."
If you're using the question refinement pattern, which suggests a better version of your prompt, you just write any basic prompt and add at the end, "Suggest a better version of this prompt," and it will. If you use the cognitive verifier pattern, you can tell the AI, "Ask me sub-questions related to this train problem; I will provide answers, and then you will solve the problem step by step." Once you know how each prompt pattern works, you can use it anywhere, based on your requirements. Now, back to our main topic, chain of thought. I told the AI, "Break down your reasoning step by step before providing the final answer." This is the fundamental chain of thought statement; use it at the end of your prompt, or even at the start, that's up to you. And you can change "math problem" to "physics problem" or anything else; it's all about your instructions and requirements. Let's see the output. The AI generates the step-by-step process. You can see: in the first part of the journey, the train travels at 60 km/h for 2 hours; to find the distance traveled in this part, we take distance equals speed times time, and so on. The solution looks better because it is laid out step by step: first we find the first distance, then the second distance, then we combine the two distances. The output is at its best here. Now, if I take the chain of thought statement out, let's see what the output looks like. You can see there is no reasoning in it.
You can see there are just some equations and formulas; there is not much effectiveness in this output, because I removed the step-by-step reasoning chain of thought statement. When I use it, you can see the step-by-step process from start to end, with a reasoning part; we can easily understand the output, see how each value is obtained, and verify whether it is correct. Without chain of thought, the output is not as good: "First segment is..., second segment is..."; what is that? With it, you see "First part of the journey," "Second part of the journey," "Total distance traveled." That is what the chain of thought prompt pattern gives you. This is a simple example; you can use it for complex tasks and problems, where it helps you go step by step, verify the output clearly, make it accurate, and get the most insightful response from the AI. That is all about the chain of thought prompt pattern. You can apply it to other requirements too: not only math problems, but any other problem-solving method or specific application. You can use this prompt pattern in as many ways as you like; there is no limit. I hope you understand it very clearly. So far we have learned 14 prompt patterns across part one and part two. At the start, we learned some basic patterns: few-shot prompting, zero-shot prompting, role playing, and system instruction prompting.
With this, we have clearly learned what prompt patterning is and how to guide the AI by writing prompt patterns. From that module up to this one, we have covered the different prompt patterns. If other prompt patterns are created or updated by research labs in the future, I will update this course accordingly; don't worry about that. Just learn these prompt patterns and practice by yourself. And I'm giving you an assignment now: combine all 14 prompt patterns, including this chain of thought, into one single prompt by yourself, think it through, and see what the output is. In doing that, you are solving a real problem, and you may even get an app idea you could build just by writing words like this. Think about it that way. Prompt engineering is not only about getting information from AI; it also opens your mindset. I'm telling you, it will literally change your thinking. Use all 14 prompt patterns and combine them in a single prompt to solve some specific application or task. You may notice that your instructions start to look like an app, an Android app or a web app; maybe there is a unique idea in there, and you could build a startup from it. You never know; this skill can change your life. So just learn and practice as much as you can. This skill improves only through practice: using different prompt patterns, combining them, trying new things with the AI. That changes your mindset and improves your prompt-writing skill. I hope you understand very well. So far, we have discussed 14 prompt patterns.
With that, we have just closed the chain of thought prompt pattern. Our next module is understanding the specialized techniques of prompt engineering, in which we see how different LLMs work and how to analyze each output by running similar prompts on different LLMs like ChatGPT, Claude, Gemini, and Perplexity.ai. We will see how to use ChatGPT for different experts and industries, how to use prompt engineering for marketing, healthcare, coding, and other applications. We will also look at different prompting tools, like the OpenAI Playground and its APIs; that is most important. We will see how LLMs generate output and how to maintain consistency of prompting and consistency of output from the AI. And we will cover the ethical considerations of AI. All of that comes in module number five. Let's dive into module five, where we are going to learn something interesting about AI LLMs. 31. 5.1.1 Prompt Chaining - Part 1: Welcome back to our fifth module: specialized techniques in prompt engineering. In this module, we are going to see some prompt engineering applications, which areas use prompt engineers, and how to write different prompts for marketing, for content writing, and for coding, to build applications or write ad copy. We will see specific applications and learn how to write exact, effective prompts for each use case, like marketing copy. We will also explore some ethical considerations to keep in mind while using AI chatbots like ChatGPT and the other AI models on the market. Our first section is prompt chaining.
Before we go to applications, we have to learn something about prompt chaining. As I said, this is not a different type of prompt pattern; rather, recall the prompts we discussed that require input from our side, like the ask-for-input pattern, the question refinement pattern, or the cognitive verifier pattern. All of those patterns involve two-way communication: first we write an initial prompt, then the AI asks for input in its output, and so on. Prompt chaining is the same idea: connecting the initial prompt with a second prompt. Suppose you are solving a specific problem that requires several prompts in the same conversation. Some tasks are too complex to write as a single prompt, because we first have to see how the AI responds. That's why we test, and prompt chaining is very helpful for testing. How? We give the AI an initial setup prompt. When the AI generates output for it, we check that output, and from it we write another prompt; it works like a follow-up question. After the AI answers the follow-up, we verify the output again, checking whether it is consistent with the previous one. After that, we write the final prompt, which solves our complex topic. Prompt engineering is not just writing one prompt; to arrive at a single good prompt, we often write several sub-prompts to probe the AI model from the start. Why? Because if you don't know what output to expect from the AI after your basic prompt, you cannot write a better prompt. The best prompt is refined.
The best prompt is based on the AI's output. For that, we test the model's output by writing our requirements as a series of prompts. I hope you understand. Prompt chaining helps us arrive at a final prompt we can reuse to solve similar applications in the same area quickly. So why do we use prompt chaining? It works a bit like chain of thought, which we discussed in the last module (advanced prompt engineering part two), but it applies across all the prompt patterns we covered: prompt chaining means prompts that are connected. Let's look at an example so we can understand it easily. You can see here why we use prompt chaining: some tasks are too complex for a single prompt. For example, writing a research paper outline. If you recall the outline expansion prompt pattern, we guided the AI to generate an outline for our topic; it generated the outline, then we asked it to expand a particular bullet point, and it expanded the outline under that sub-bullet. What happened there is prompt chaining: the initial prompt set things up, the AI generated the outline, then we gave a second prompt to expand a specific bullet point. Those two prompts are connected; a second prompt that builds on the previous one to solve a task is prompt chaining, and it is very important. A second application is developing a marketing campaign. If you know about advertising or running ads on social media, you will easily understand this.
A marketing campaign depends on various factors: target audience, budget, ad creatives, ad copy, and more; a highly converting campaign has to get all of those right. We cannot do all of that well in a single prompt. Yes, we can write one prompt, "Generate a marketing campaign for this product, suggest the best budget and marketing ad copy," but then we don't control exactly what output we get from the AI. So instead, we set up a single prompt for a specific role first, telling the AI, "You are good at marketing campaigns for this product; you have ten years of experience." The AI then starts thinking as a marketing expert: "Now I'm a marketing expert; tell me what to do in this field." After that, we tell the AI to do one particular task only: "Define the target audience to sell my watch to men only." The prompt is specific, so it generates specific, effective output about the target audience for selling the watch to men. Then, in the third prompt, we write something like "Suggest me the best budget." What is happening here? We are guiding the AI through a step-by-step process instead of writing all the instructions at once. Writing everything in one prompt makes the AI generate output that is not deep: concise, simple, low-value output with a small word count, because it has to cover all our instructions within a limited output length. ChatGPT and other AI chatbots have output limits measured in tokens. So, to get the best output from the AI, we give it a single, specific prompt for each requirement.
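The three-prompt flow above (persona setup, then target audience, then budget) can be sketched as a loop. Here, `send` is a hypothetical stand-in for whatever chat API or UI you use (its name and signature are my own, not a real library call); the point is only the shape of the chain: each prompt is sent along with the previous exchanges as context.

```python
def run_chain(send, prompts):
    """Run a list of prompts as one chained conversation.

    `send(history, prompt)` is a hypothetical function standing in for
    a real chat API call; it must return the model's reply as a string.
    Each follow-up prompt sees the full history, which is what keeps
    the chain connected instead of starting fresh every time.
    """
    history = []
    for prompt in prompts:
        reply = send(history, prompt)
        history.append((prompt, reply))
    return history

# Usage with a dummy `send` that just echoes, to show the flow:
chain = run_chain(
    lambda hist, p: f"reply to: {p}",
    [
        "You are an experienced marketer running social media campaigns.",
        "Define the target audience to sell my watch to men only.",
        "Suggest me the best budget.",
    ],
)
```

With a real model behind `send`, you would inspect each reply before writing the next prompt, exactly as described above.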
Let's see an example in ChatGPT. Prompt chaining is also used for solving multi-step math problems. These are just some examples; there are many more applications. Even when we interact casually with AI, we ask follow-up questions and request adjustments, like "Please change the above paragraph"; that all comes under prompt chaining. This is the basic idea we need before writing the best prompts for every application. By breaking the task into smaller parts, you get more precise and coherent results; that is, by breaking a complex prompt into smaller prompt statements. We'll see that in ChatGPT. 32. 5.1.2 Prompt Chaining - Part 2: You can see how prompt chaining works. As I said, we start with a general prompt. By analyzing the first output from the general prompt, we refine it, and we iterate based on the feedback; feedback means the output from the refining step. From the refined output, we write our concluding prompt, the one that works well and from which we can expect great output from the AI. I hope you understand. Let's see what prompt chaining actually looks like in ChatGPT to get a better understanding. I am in ChatGPT. I'll take this example: "You are an experienced marketer, especially in running social media campaigns. Your task is to generate ad copy, a social media post, video ad content, a target audience, and a budget recommendation for Facebook ads for selling a digital watch for men only."
So what happens? I guided the AI: "You are an experienced marketer, especially in running social media campaigns" (we have Google Ads, Facebook ads, ads that run on YouTube, and so on, if you know digital marketing). Then, "Your task is to generate ad copy..." You can see here, I told the AI to do all the tasks at one time: ad copy generation, social media post, video ad content, target audience, budget recommendation for Facebook ads for selling a digital watch for men only. The AI will generate output based on this prompt; there's no problem with that. Let's see the output. You can see: ad copy ideas, primary text, calls to action. But look at the ad copy ideas, for example; they don't go deep. Why? Because the AI has to generate output for all these tasks at once, so it produces only the basics, without going deeper or more specific. It just throws out output related to each instruction, with no depth and no reasoning, covering all the tasks at one time. There is nothing specific in it. But what if I tell the AI something specific: "You are an experienced marketer, especially in running social media campaigns. Now your task is to generate ad copy only." Let me replace the prompt and delete the rest. Now I've guided the AI: "You are an experienced marketer, especially in running social media campaigns."
"Now your task is to generate ad copy." What have I done? I've guided the AI toward one specific deliverable: generate ad copy only. Now the AI goes deeper and generates more output than before. You can see: ad copy for men's digital watches. The output is much more coherent and precise compared to the previous one. Before, the headline was simply "Stay ahead of time with our stylish digital watches," with primary text and a call to action like "Shop now" or "Learn more"; not very effective. But when we guide the AI to generate one specific thing, it produces its best output: headline "Stay ahead of time with the ultimate digital watch"; primary text "Upgrade your style game with our sleek, durable, and tech-packed digital watches; from workout tracking to smart notifications..." That is much better than before. There's "Limited-time offer: save 20% when you order today," a call to action "Shop now and redefine your style," and it has even added hashtags like #DigitalWatch. You can see how much more effective this output is compared to the one where we asked the AI to do all the tasks at once. This comes under prompt chaining. If a task is too complex to generate all the output at once, instead of one big prompt we break the task into sub-tasks. We've generated the ad copy; now we can do the second thing, the social media post. I tell the AI to suggest a social media post, and you can see the output: "Style meets functionality: the ultimate digital watch for men. Why settle for less when you can have it all?" Great copy for social media. We can use these headlines for our posts. These lines are effective compared to the previous attempt.
Compare that with the earlier combined output, where the social media caption was something like "Gear up"; there was no reasoning, nothing specific or precise, compared to this output, which was generated when we asked the AI for social media posts as a specific task. I hope you understand. That's why, instead of writing one prompt asking the AI to do every task at once, we break everything down to get precise and coherent results. This is a simple example, but you can use the idea in many ways, and you can also combine it with the prompt patterns we've already covered. For instance, I can tell the AI: "I will tell you which task should be done first; then you need to proceed." And at the end of the prompt I use the ask-for-input pattern, as we discussed earlier: "Ask me which task you want to generate." What happens? Let's see the output of this prompt. Instead of repeating ourselves with a separate prompt for every task, as we did here, we get a flow: first we wrote all the tasks at once and saw the content was good but not effective or deep; then we wrote a prompt for each task specifically, and the output was much better. But that process is repetitive: I have to write one prompt to generate ad copy, then another prompt for the social media post, and so on. Instead, I guide the AI like this: I write the whole prompt, and at the end I tell the AI, "I will tell you which task should be done first; then you need to proceed."
"Ask me which task you want to generate." What happens here? The AI waits until I tell it which task to start with. In the input, I tell the AI to generate ad copy, which is specific, so it generates the most precise, coherent result, just as we saw before for a specific task. After I give the input "generate ad copy," it generates the ad copy. You can see: "Ad copy 2: Experience the perfect blend of style and technology. Elevate your look with our men's digital watches, designed for the modern man who prefers elegance ahead of time. Shop now, 20% off." It generated three ad copies. We can also write directly, "Generate one ad copy only, with high-converting words that grab attention." What happens? We've added extra instructions: generate one ad copy only, with high-converting words that grab attention. And you can see the resulting ad copy is very effective compared to the previous one. We can use all of this to reduce repetitive work: instead of writing a new prompt again and again, we guide the AI with "I will tell you which task should be done first; then you need to proceed. Ask me which task you want to generate." It asks, I give the input "generate ad copy," and the AI automatically generates the ad copy for our product. This is a simple example, and these instructions are deliberately basic, because I'm just illustrating the idea.
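The "ask me which task" setup can also be written as a small prompt builder. This is a sketch under my own naming (`build_task_menu_prompt` is hypothetical), combining the persona and ask-for-input patterns discussed earlier:

```python
def build_task_menu_prompt(role: str, tasks: list[str]) -> str:
    """Build a prompt that lists sub-tasks and tells the model to wait.

    Instead of requesting every deliverable at once, the model is told
    to ask which sub-task to do first, so each answer stays deep and
    specific, and we avoid retyping the role for every sub-task.
    """
    task_lines = "\n".join(f"- {t}" for t in tasks)
    return (
        f"{role}\n\nYour tasks are:\n{task_lines}\n\n"
        "I will tell you which task should be done first; then you "
        "need to proceed. Ask me which task you want to generate."
    )

menu = build_task_menu_prompt(
    "You are an experienced marketer running Facebook ad campaigns.",
    ["ad copy", "social media post", "video ad content",
     "target audience", "budget recommendation"],
)
```

Each reply to "generate ad copy," "social media post," and so on then continues the same chained conversation without repeating the setup.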
When you practice with your own prompt patterns and requirements, you will write better prompt instructions and get better output. My intention here is to show you the possibilities of writing prompts in different ways, with different thinking patterns, using all of these prompt patterns. So that is what prompt chaining is about: instead of writing a single prompt for a complex task, we break the task down to get precise and coherent results. First, when we asked for all the outputs at once, the result was shallow, with no depth. Second, when we targeted a specific use case, generating the digital watch ad copy on its own, we got much better output than before. Third, since a complex task has many sub-tasks, instead of writing a separate prompt every time, we can write one prompt that asks us which sub-task to do first; we then provide the input, such as "generate ad copy," and the AI generates the ad copy for us. It is a simple example, but once you practice, you will get a feel for how prompt chaining works. Note that this flow works well in ChatGPT and Claude, while Gemini and Perplexity AI sometimes do not handle this kind of chained conversation as well. So we need to know the capabilities, pros, and cons of LLMs like ChatGPT, Claude, Gemini, and Perplexity AI before choosing one to solve our complex problems. ChatGPT in particular has strong support for prompt chaining: it follows the pattern from the previous turns without breaking the chain, and you can see the memory update here.
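If you prefer to see the idea in code, the chaining flow above can be sketched in a few lines of Python. This is a minimal sketch, not a real integration: `call_llm` is a hypothetical stand-in for whatever chat API you use, and here it just echoes the prompt so the example is self-contained.

```python
# A minimal sketch of prompt chaining: each step's prompt embeds the
# previous step's output as context. `call_llm` is a hypothetical
# stand-in for a real chat API; here it simply echoes for demonstration.
def call_llm(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}...]"

def run_chain(steps: list[str], product: str) -> list[str]:
    """Run each sub-task prompt in order, feeding the previous
    answer into the next prompt so the chain is not broken."""
    context = f"Product: {product}"
    outputs = []
    for step in steps:
        prompt = f"{context}\n\nTask: {step}"
        answer = call_llm(prompt)
        outputs.append(answer)
        context += f"\n\nPrevious result: {answer}"  # keep the chain
    return outputs

steps = [
    "Write one ad copy with high-converting words.",
    "Write a social media caption based on the ad copy.",
]
results = run_chain(steps, "men's digital watch")
```

With a real API behind `call_llm`, each sub-task would get the model's full attention while still seeing the earlier results, which is exactly the benefit of chaining over one giant prompt.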
That memory feature is a very good option in ChatGPT, and it sets ChatGPT apart from other AI language models like Gemini, Claude, and Perplexity AI. We have a later module entirely about understanding different LLMs' capabilities, pros and cons, and which model to use for a particular task, so don't worry about that now; just focus on prompt chaining. I hope prompt chaining is clear. With that covered, let's move on to prompt engineering applications, where we will see how to write prompts for different use cases: marketing copy, code generation, creative writing, and customer support, and how to use ChatGPT and other AI language models to generate image prompts that we can then use in AI image generators like Leonardo AI, lexica.ai, and Midjourney. We will also see how to use language models like ChatGPT to write the best prompt for our use cases; yes, ChatGPT can often write a better prompt than we can. Let's dive in. 33. 5.2.1 Prompt Engineering Applications & Use Cases: Welcome to our next lesson, prompt engineering applications. In this lesson we will discuss how to write prompts for different industry requirements, such as digital marketing, business, and productivity, and for development work like web apps. Prompt engineering applies almost everywhere, because AI LLMs are being adopted everywhere: in the coming years, every industry will use LLMs to make its processes faster and more efficient.
Because of that, prompt engineering skills are very important when interacting with AI. As we discussed earlier, you get a much better response from AI by writing specific, effective prompt patterns. In this lesson we will look at examples of how to write a good prompt for a specific industry use case: digital marketing, coding, business, YouTube content creation, and so on. We will write prompts for creative writing and storytelling, coding, marketing, and customer support, and we will even use ChatGPT or other AI language models to generate images from an image prompt, and to generate prompts themselves, which we can then reuse and edit for our requirements. Before we start, we need to be clear about the specific task at hand; we cannot write one prompt for everything. Recall prompt chaining, where we saw both its limitations and an example of how well it works. What is prompt chaining? It divides a complex task into sub-tasks, each defined very specifically, so we get precise and coherent results for each one. Instead of guiding the AI to do every task at once, we guide it to do a single task at a time, which produces the most effective output.
We will use prompt chaining and the other prompt patterns to write prompts that get the best output for specific use cases like content creation, coding, and app development. After these applications, we will also explore some ethical considerations. Let's jump into ChatGPT. You can use other language models as well, but I prefer ChatGPT because it has some strong capabilities, which we will discuss in upcoming modules. 34. 5.2.2 Initial Prompt Setup - Helpful Assistant: Before we start interacting with the AI, we can open with a simple, human-style greeting, just as we would chat with family or colleagues. That works because the AI chats like a human being: it uses NLP, natural language processing, to converse in human language, which makes it very interactive. So we can start with small talk, just as with friends, colleagues, or family members. If I say "Hi, I'm Sam," it remembers my name is Sam. If I instead say my name is CV, then from then on ChatGPT replies, "Got it, CV. How can I assist you today?" Now, instead of writing the whole task directly, I will guide the AI step by step. Rather than putting all the task instructions in at once, I will write instructions incrementally so the AI can think and respond clearly to each one, as we discussed in prompt chaining. For that, I first set up the AI.
We already know ChatGPT can make mistakes: the information generated by ChatGPT or other AI language models is not 100% accurate and can contain ineffective wording or hallucinations that we cannot easily detect. So first we tell the AI to act as a helpful assistant. We are using the act-as-a-persona pattern here, so the model thinks within that role, as we discussed earlier. I will train the AI step by step: "You are a helpful assistant. You will do what I tell you. You have experience in proofreading and in detecting unusual words and inaccurate information. You will generate the best, most effective output without mistakes, hallucinations, or inappropriate information. Do you understand?" This is my initial prompt setup, and I am telling the AI to keep it in mind for every output it generates from my prompts. Strictly speaking, I could skip this and it would still generate what I need, but by adding this information the AI stays within this persona pattern: it works on what I tell it, watches for unusual words and inaccurate information, and aims for effective output without mistakes, hallucinations, or inappropriate information.
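This initial setup maps naturally onto the "system message" idea used by most chat APIs. The sketch below is a hedged illustration, assuming the common role/content message convention rather than any specific vendor SDK; the wording mirrors the helpful-assistant setup from the lesson.

```python
# The lesson's initial setup expressed as a reusable system message.
# The list-of-dicts shape follows the common chat-API convention
# (role + content); it is an illustration, not a vendor-specific API.
SYSTEM_SETUP = (
    "You are a helpful assistant. You will do what I tell you. "
    "You have experience in proofreading and in detecting unusual "
    "words and inaccurate information. Generate the best, most "
    "effective output without mistakes, hallucinations, or "
    "inappropriate information."
)

def make_messages(user_prompt: str) -> list[dict]:
    """Pair the persistent setup with a per-request user prompt."""
    return [
        {"role": "system", "content": SYSTEM_SETUP},
        {"role": "user", "content": user_prompt},
    ]

msgs = make_messages("Write one ad copy for my digital watch.")
```

Keeping the setup in a constant like this means every request carries the same persona, which is the programmatic equivalent of ChatGPT "keeping it in mind" across the conversation.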
The effect is that the AI puts more focus on every output, because we set it up for this specific behavior. The response: "Understood. I will ensure accuracy, detect unusual or incorrect information, and provide the best, most effective outputs. If there is anything specific you would like me to do, let me know." Interacting this way gives you the best experience with the AI. Now, suppose I am a digital marketer. I have some products I want to sell online. I need a website with copy for a landing page, I need ad copy to run social media campaigns, and so on. Let's take ad copy writing as the specific task, and pick a specific product for it: "I need ad copy to sell my digital watches online, to 20-year-old men only." If I give specific information like that, the AI will generate ad copy matched to our audience, which gives us the best conversion rates. The more specific we are, the better the output. Also, rather than jumping straight into the task instructions, we should let the AI know our main intention, our main problem, and our requirement first; jumping in directly deprives it of our background information.
So we go step by step, as we learned in prompt chaining: we give the AI our background so it can learn from us and generate the best output for our query. 35. 5.2.3 Writing Effective Prompts for Different Use Cases - Part 1: We write the prompt for the AI as a step-by-step process, telling it our background so it can learn from us and generate the best output for our query. So I tell the AI my requirement: "I am looking to sell my own digital watch to 20-year-old men. Can you help me go online?" That is my simple intent; let's see what it suggests. Observe not just the output but the way I am interacting with the AI. Prompt engineering is not only writing prompts; it is the skill of interacting with AI. Writing one prompt does not make you a prompt engineer. To arrive at the best prompt, we write many sub-prompts, refine them using the feedback from each output, and adjust the main prompt accordingly. You can see it suggests a step-by-step plan to sell my digital watches online: define your brand, set up your online store, payment, delivery, and so on. That is good: it has learned my intent.
The AI has gathered my information: the user is looking to sell this watch online. Its answers now follow that thread, and we can get deeper insights from here. So I pick something from its plan: "I need to do some marketing and branding. Can you help me with marketing?" Let's take the fourth option directly. It suggests marketing and branding techniques: we can use Facebook and Instagram ads, targeting 18 to 25 with tech-gadget interests and location, budget, engaging content, collaboration with influencers; all the steps for marketing and branding. Then I go deeper again: "Can you help me with brand identity development?" It drills into that topic, for example how to write a catchy tagline. This works like the outline expansion prompt pattern we discussed earlier. This is quite easy for simple tasks, but some complex tasks need each output analyzed, and writing a fresh prompt each time gets tedious. There are many ways to interact with AI to solve a particular task: you can use this prompt chaining method, or the other prompt patterns we discussed with examples earlier. 36. 5.2.4 Writing Effective Prompts for Different Use Cases - Part 2: So far the AI has just been answering my questions. But what if I tell the AI: "Ask me for the information you need to generate the best ad copy for me"?
What happens in this method? Until now, the questions flowed from me to the AI, and the output it generated depended only on the data the AI was trained on. But I have my own data, and I need catchy ad copy based on that data, not the AI's. So I tell the AI to ask me subdivided questions related to the main task, the ad copy generation, covering the information it requires to generate the best ad copy for my product. Don't be confused; I write it like this: "Now, ask me subdivided questions related to ad copy generation; that information is required for you to generate the best ad copy for my product. After I provide answers to your questions, then proceed to generate the ad copy." Now it asks me questions about the ad copy: "Got it. Here are some subdivided questions to create the best ad copy: What are the product features? Who is your target audience? What is your tone and style? Any offer or call to action? What is your unique selling point?" Notice the unique selling point: there was no unique selling point in the earlier ad copy. As we go deeper, it asks deeper questions.
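The "ask me subdivided questions" instruction is regular enough that it can be generated programmatically. Here is a minimal sketch of a prompt builder for this pattern; the task and deliverable names are just illustrative parameters, not fixed vocabulary.

```python
# A sketch of the ask-for-input pattern as a prompt builder: the
# model is instructed to ask clarifying sub-questions for a given
# task before generating anything. Parameter names are illustrative.
def ask_for_input_prompt(task: str, deliverable: str) -> str:
    return (
        f"Now, ask me subdivided questions related to {task}; "
        f"that information is required for you to generate the "
        f"best {deliverable} for my product. After I provide "
        f"answers to your questions, then proceed to generate "
        f"the {deliverable}."
    )

p = ask_for_input_prompt("ad copy generation", "ad copy")
```

Swapping the arguments, for instance `ask_for_input_prompt("email marketing", "email copy")`, reuses the same pattern for a different use case, which is exactly the editing step shown later in this lesson.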
You see, the AI has great knowledge of everything it was trained on, while we as human beings cannot learn everything. But there is a gap: the AI does not have our own data. To close that gap between you and the AI, we tell it to ask subdivided questions related to our main task, the questions that help it generate the best ad copy for our product. Here the specific application is ad copy generation for my online digital watch business. The unique selling point is one of the most important things when you sell any product in the market, and you can see it asks about that, and about customer emotion too. When I answer all of these, the ad copy will be based on my own data and preferences rather than on whatever the AI imagines on its own. The earlier ad copy was good, but it was only the AI thinking for itself; once I provide answers, the ad copy reflects my data and my preferences. Let's answer the questions first. Key features of your digital watch: long battery life. Target audience: 20-year-olds, tech-savvy.
Tone: motivational or playful; let's go playful. Third, are you offering any discounts or limited deals: free shipping. Unique selling point: unique design. And the emotion you want the audience to associate with the watch: confidence. After providing these answers, it automatically generates the output, and it is much more effective than before: "Tech meets style: your new digital companion. Long battery life to keep up with you. Packed with tech-savvy features that 20-year-olds crave. Unique design to make you stand out in any crowd." It even gives some targeting options. Compare that with the earlier one-line ad copy: after the AI asked questions and I provided answers, the output is far more effective. That is what prompt engineering is about: guiding the AI to do specific tasks so we get deeper insights and precise, coherent results. This is just one prompt pattern I gave the AI; you can write it in any number of ways, and it is all about practicing yourself. I used ad copy generation here, but you can edit the prompt directly: replace "ad copy generation" with another specific task, such as email marketing.
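Once the model's sub-questions are answered, the answers can be folded back into a single generation prompt. The sketch below shows one way to do that; the question list mirrors the ones ChatGPT asked in the lesson, and the answers are the user's own data.

```python
# Fold question/answer pairs into one generation prompt, so the
# model generates from the user's own data rather than guessing.
def answers_to_prompt(qa: dict[str, str], deliverable: str) -> str:
    lines = [f"- {q}: {a}" for q, a in qa.items()]
    return (
        f"Using my answers below, generate the best {deliverable}:\n"
        + "\n".join(lines)
    )

# Answers from the lesson's digital-watch example:
qa = {
    "Key features": "long battery life",
    "Target audience": "20-year-olds, tech-savvy",
    "Tone and style": "playful",
    "Offer": "free shipping",
    "Unique selling point": "unique design",
    "Emotion": "confidence",
}
prompt = answers_to_prompt(qa, "ad copy")
```

This is the same two-turn flow as in the chat, collapsed into one prompt: the questions came from the model, the answers came from you, and the final generation uses both.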
If you know anything about email marketing, it means selling products or getting leads and conversions directly through people's inboxes, and the copy there is crucial. So I tell the AI: "Now, ask me subdivided questions related to email marketing; that information is required for you to generate the best email copy for my product. After I provide answers to your questions, then proceed to generate the email copy." It then asks subdivided questions about my email marketing: Who is your target audience? What is the email's goal? Product features, tone and style, the offer, the CTA (call to action), and so on. If you open any email on your phone from a brand company, you will see CTAs like "Sign up," "Learn more," or "Claim your offer"; that is email marketing. When I provide answers, it generates the best email copy for me. You can also edit the setup directly for another task, and you can sharpen the persona with more specific instructions: instead of just "You are an experienced digital marketer," write "You are an experienced email marketer with ten years of experience crafting emails that increased sales and open rates tenfold." The more specific the instructions, the better the output. Earlier I used "digital marketer"; if you already know digital marketing, you know it is broad.
Digital marketing has subdivided areas like email marketing, ad copywriting, and content creation; they all come under digital marketing. So you can set the persona as a digital marketer, or go more specific as an email marketer. The best practice is that the persona pattern should match your task. Here the task is email marketing; "digital marketer" works, but a specific persona gets better output: instead of "experienced digital marketer," say "experienced email marketer. Now, ask me subdivided questions related to email marketing." When these two match, you can expect the best output from the AI. For a digital marketer persona, you would say: "Ask me subdivided questions related to digital marketing; that information is required for you to increase my leads and sales for my company." It all depends on how you interact with the AI. For example, instead of digital marketing you can use coding: "You are an experienced Python developer. Now, ask me subdivided questions related to Python; that information is required for you to build a website using Python." It will ask its subdivided questions; after you answer, it generates the code, and you can implement it and get your website. It is all about your task and how you write the prompt. As I said, you can use any number of the prompt patterns we discussed earlier; what matters is how you apply them in your AI language model to solve a particular task. It is all about practicing and experimenting with other applications too; that is the most important thing, so test things out.
Practice with different prompt patterns to build the most effective prompting skills. 37. 5.2.5 How to Write Advanced Image Prompts using ChatGPT: Our next topic is image prompts and how to write them. What is an image prompt writer? If you use image generation tools like Leonardo AI, lexica.ai, or Midjourney, they require an image prompt to generate an image. So you can tell the AI: "You are an experienced image prompt writer. Now, ask me subdivided questions related to cartoon image generation; that information is required for you to generate the best image prompt." Then make it specific: a cartoon lion image. "After I provide answers to your questions, then proceed to generate the lion image." Now it is thinking like an image prompt writer, and it asks questions related to my task: style and mood, pose and expression, colors and features, background and setting, additional elements. Let me give the answers. Should the cartoon lion look cute? Let's take cute. For the style, abstract. For the colors, blue.
I am just giving rough answers here; don't focus on whether they are the "correct" answers. Background: a jungle. Additional element: a book. Now let's see. ChatGPT has a great feature here: since this is the paid plan, it generates the image directly in the chat instead of only the prompt, and the result looks good, a pretty cute lion. Note what is happening: you do not need to become an expert at image prompting yourself. You set the AI up as an image prompt writer; it knows very well what an image prompt should contain, so it asks the related questions, gathers your preferences, and generates output matched to your specific task. Here I told it to generate a cute lion; if you adjust the answers, fierce, majestic, and so on, the output changes to match your preferences. Here it is: a blue cartoon lion in an abstract style. You can then ask: "Please write the prompt for the above image." Now it generates the prompt text itself, and instead of using the image directly, you can take that prompt to other tools like Lexica, Leonardo AI, or Midjourney. On the free plan (GPT-3.5), it will generate only the prompt rather than the image; direct image generation is a feature of the paid ChatGPT plan.
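The attribute-gathering step above can also be mirrored in code: assemble an image prompt from the same fields the model asked about. The field names below are illustrative, not a fixed schema that any image tool requires.

```python
# Assemble an image prompt from the attributes gathered in the
# question round (style, colors, background, extras). Field names
# are illustrative; any image generator just receives the string.
def image_prompt(subject: str, **attrs: str) -> str:
    details = ", ".join(
        f"{k.replace('_', ' ')}: {v}" for k, v in attrs.items()
    )
    return f"A {subject}. {details}."

# The lesson's cartoon-lion answers:
p = image_prompt(
    "cute cartoon lion",
    style="abstract",
    colors="blue",
    background="jungle",
    extra_element="book",
)
```

Changing one keyword argument, say `style="majestic"`, regenerates a complete prompt, which is the same adjust-and-retry workflow described above.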
That is why I say you are not limited to ChatGPT: you can use any language model to generate the image prompt, so there is no need to worry. 38. 5.2.6 How to Write Advanced Text Prompts using ChatGPT: You can also use the AI as an experienced prompt engineer, so you get the best prompt from the AI itself without putting yourself through writing it; this is another reason prompt engineering matters. Tell the AI: "You are an experienced AI prompt writer. Now, ask me subdivided questions related to digital marketing; that information is required for you to generate the best, most effective and engaging digital marketing prompt. After I provide answers to your questions, then proceed to generate the prompt." Now the AI works as our prompt writer, like a prompt engineer, and asks questions: target audience, primary age group, and so on. When I provide all the answers, it will generate a prompt, not the final output; note that carefully. It generates an AI prompt, like the ones we wrote earlier to define a task, and we can use that prompt in any language model, including ChatGPT itself, to get the result. The AI is doing our work as a prompt engineer, and it can write a better prompt than we can. Let me show you the example.
For the primary age group, I roughly write 18; for features, durability; for style, trendy; for channels, Facebook; and the goal and objective is lead generation. (I am answering roughly just to demonstrate; when you solve a real task, give a careful answer to each and every question so you get the best result.) Then it generates: "Thank you for the details. Based on your responses, here is an effective and engaging digital marketing prompt for lead generation..." But look closely: "Introducing the ultimate digital watch..." is not a prompt; it is a template, an ad copy template. The word "prompt" is ambiguous: besides AI prompts, people use "prompt" for templates of many kinds, like ad copy templates, and the AI has interpreted it that way. It is generating an ad copy template, not an actual AI prompt. So we have to steer it back to our meaning by going deeper into the prompt engineering role: "You are an experienced AI prompt writer who has been writing prompts for AI tools like ChatGPT." With that clarified, the output is: "Got it, I will ask some detailed questions to gather information," and it starts asking again.
Notice something here: the earlier attempt was generating a prompt for the specific application, digital marketing, in the wrong sense, so I cancel it and continue with the corrected role. It asks its questions, and this time, instead of writing the answers myself, I tell the AI: "Can you generate the output for the above task by assuming the answers yourself, as an example?" The AI then fills in plausible answers on its own and generates the output — and this is the output we want. After giving the AI the exact, specific role — "You are an experienced AI prompt writer who writes prompts for AI tools like ChatGPT and other AI language models" — it asks questions as before, and I simply tell it to assume the answers itself. In a real task, of course, you would provide your own answers to those questions; here I just want you to see the way I am interacting with the AI. Look at the resulting prompt: count the lines — one, two, three… nine lines. If I wrote the prompt myself, it would probably end at the third or fourth line, because as human beings we lack some of the information; the AI knows a lot, goes deeper and deeper, and often writes a better prompt than we would. You can see the example: "Create engaging Facebook and Instagram ad copy targeting 18 to 25…" — it is based on the assumed data, and when you give your own data, it changes accordingly.
So now you can see it has generated a marketing prompt for a digital marketing campaign. This is a real prompt, and we can use it in any language model to get the best insights. That is the power of prompt engineering — you can use AI itself to generate the prompts; this is what advanced prompt engineering looks like, so use this skill. If you get a prompt like this, you can adapt it to your preferences for whatever specific task you want the AI to solve. We can even go one step further. I tell the AI: "Now, please convert the above prompt into a prompt template in which the user can edit the preferences." The AI converts the prompt into a prompt template, and it also generates instructions: "Please replace the ad platform name, specifying Facebook, Instagram, Google Ads…" and so on, under "instructions for customization." This is a template now, not a specific prompt: the specific prompt was static, usable for one case only, but the template is variable — we can change the ad platform name, the interests and behavior of the audience, the product, all those things. The AI even explains how to edit the template. That is the power of AI: we can do all this work in seconds. It all comes down to how you interact with the AI and how you put your requirements into it — that is the main skill. Prompt engineering is nothing but expressing your requirements through prompt patterns.
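The static-prompt-versus-template distinction made here can be illustrated in code. Below is a minimal sketch using Python's standard `string.Template`: the placeholder names (`platform`, `product`, `audience`, `goal`) are illustrative choices of mine, not the exact fields the AI generated in the lesson.

```python
from string import Template

# A static prompt, usable for one specific campaign only.
static_prompt = (
    "Create engaging Facebook and Instagram ad copy for a digital watch, "
    "targeting 18-25 year olds, with the goal of lead generation."
)

# The same prompt converted into a reusable template: the campaign-specific
# details become variables the user can edit for each new campaign.
prompt_template = Template(
    "Create engaging $platform ad copy for a $product, "
    "targeting $audience, with the goal of $goal."
)

# Filling the template with one set of preferences reproduces a concrete prompt.
filled = prompt_template.substitute(
    platform="Facebook and Instagram",
    product="digital watch",
    audience="18-25 year olds",
    goal="lead generation",
)
print(filled)
```

Swapping in `platform="Google Ads"` or a different `audience` gives a new concrete prompt without rewriting anything — exactly the "variable, not static" idea described above.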
Through those patterns the AI learns your background information and intent, and it can generate the best output for your preferences. You saw it here: we simply told the AI to write the prompt, then guided it to convert that prompt into a prompt template we can edit for our preferences — with instructions and everything. AI is more powerful than you think; it is all about how you interact and how you write the prompts to solve a particular task. There are so many ways — I could go on forever; the possibilities are effectively unlimited. The main skill is practicing by yourself: use other prompt patterns, try other tasks, test the output, treat it as feedback, and refine the prompt pattern again. And after seeing all the outputs for a specific task, you can combine all the sub-prompts — this prompt, that prompt, all of them — into one main prompt. That combined prompt can be used once to generate the whole output in a single shot. Still, the step-by-step approach is often the better one, because you get precise and coherent results at each step and can analyze the output as you go; that is why the prompt-chaining method is always good. So that's it for this lesson, guys. We have learned how to write prompt patterns for specific applications and different industry use cases, and how to interact with the AI from the very start — for example, guiding it as a helpful assistant. That first or second message in a new chat matters a lot, because from then on the AI will act according to that prompt pattern. It is very powerful. And if you ever want to break that chain, just tell the AI: "From now on, forget the above."
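The "combine all the sub-prompts into one main prompt" step described above can be sketched very simply. The sub-prompts below are illustrative stand-ins of my own, not the exact prompts from the lesson:

```python
# Sub-prompts developed and refined step by step during the chat session.
# (Illustrative examples, not the lesson's exact wording.)
sub_prompts = [
    "You are an experienced AI prompt writer who writes prompts "
    "for AI tools like ChatGPT.",
    "Ask me subdivided questions about my digital marketing campaign "
    "before you answer.",
    "After I answer your questions, generate one effective, engaging "
    "prompt for lead generation.",
]

# Combining the refined sub-prompts into a single main prompt that can be
# sent to a model in one shot instead of over several turns.
main_prompt = "\n\n".join(sub_prompts)
print(main_prompt)
```

The trade-off is the one the lesson names: the one-shot main prompt is convenient, but the step-by-step chain lets you inspect and correct the output at each stage.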
It will break the chain and continue from the new prompt pattern. I hope you understand this. So practice by yourself, and use the other prompt patterns as much as you can. Prompt writing is a very interesting skill: it makes you open-minded and it can change your life. There is much more I could tell you, but this is enough for a beginner — the prompt writing skill only improves by practicing it yourself. That is what I had for you today. So let's jump into our next topic: ethical considerations, which are very important for generating output and using it anywhere. Let's dive into that. 39. 5.3 AI Ethical Considerations: Now, in this chapter, we will see some ethical considerations that we, as prompt engineers, should know. What are ethical considerations? They concern the moral implications of AI actions and the policies companies put in place for using AI tools like ChatGPT and Gemini — for example, around personal information. Ethical consideration means there are some points we have to keep in mind while using AI language models. For more, you can search online — for example, "what are some ethical considerations for language models." I have listed three points here that are especially important for a prompt engineer. Let's see the first one: avoid bias. What is bias here? Take ChatGPT: it uses NLP — natural language processing — to generate text in a human manner, in a human tone, the way we talk with one another. So what I am telling you is: while interacting with AI, use neutral language.
Use natural, neutral language to interact with ChatGPT or any other language model, because these models use NLP: they generate text in a human tone, mirroring how we talk to them. So while writing prompts, use neutral language and avoid biased language or biased words that do not help the AI understand our main intent and task. That is also why we should avoid stereotypes — loaded or poorly defined words; the AI may know them, but they disturb the output, and the result will not be as effective as with neutral language. I hope you understand this point. The second one: ensure inclusivity. We have to consider diverse perspectives. What does that mean? Providing background information and additional information from our side so the AI can generate the best output — putting our own data into the task instead of leaving everything to the AI. As humans, we have our own data, and the AI is not 100% accurate; it can make mistakes. For that, we have to provide the background information needed to solve our task. The best example: at the time of recording, ChatGPT was only trained up to around March 2023, I think — not fully current or up to date. If I ask any question about data after that cutoff date, it tells me something like: "I don't have access to that data; please provide it and I will assist you."
The conclusion: not all language models are fully up to date. That is why we have to provide any additional or background information needed to define our task clearly to the AI — supporting it to do the task effectively by offering different perspectives: additional information in the prompt, other information related to our task, background context, or training the AI with an "act as a persona" pattern for specific applications. All of that comes under diverse perspectives, and with it we ensure inclusivity. The third one is respect privacy: avoid prompts that contain sensitive or personal information. This is extremely important when you use a language model. Take ChatGPT: it can be trained on our data too — not only on the data the company collects, but also on what we type, because we use LLMs directly to complete our tasks quickly. A name like "Saif" is not a problem, but real account numbers, PINs, phone numbers, and OTPs are a different matter. If you write, for example, "Please review my ATM card number…", that card number could end up in training data, and when another ChatGPT user asks for a sample card number, there is a chance — this is just an example — of our personal information leaking. For that reason, we have to avoid putting personal information into prompts.
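One practical safeguard for the privacy point above is to scrub obvious sensitive patterns out of a prompt before sending it anywhere. Here is a minimal sketch using regular expressions; the three patterns (16-digit card numbers, 10-digit phone numbers, 6-digit OTPs) are illustrative assumptions of mine and are far from exhaustive — real redaction needs much more care.

```python
import re

def redact(prompt: str) -> str:
    """Replace obvious sensitive number patterns with placeholders."""
    # 16-digit card numbers, optionally grouped in fours by spaces or dashes.
    prompt = re.sub(r"\b(?:\d{4}[ -]?){3}\d{4}\b", "[CARD REDACTED]", prompt)
    # 10-digit phone numbers.
    prompt = re.sub(r"\b\d{10}\b", "[PHONE REDACTED]", prompt)
    # 6-digit one-time passwords.
    prompt = re.sub(r"\b\d{6}\b", "[OTP REDACTED]", prompt)
    return prompt

print(redact("Please review my card number 1234 5678 9012 3456 and call 9876543210."))
```

Running a filter like this on your own prompts is a cheap habit, and it pairs well with turning off the data-sharing option described next.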
So keep in mind, as a prompt engineer: avoid providing any sensitive or personal information to the AI, to prevent any data-leak cases. Again, if you are using ChatGPT, go to the profile section, click the settings button, and find the Data controls option. There you will see a toggle: "Improve the model for everyone." If this option is on, what you type in your conversations can be used for training. I have turned it off — the benefit is that although I have written so many prompts here, the AI cannot take that data to train itself. It is up to you: if you leave it on, there is a chance of the model being trained on your data, whether prompts or anything else. So please remember to turn off that option, which you can find under Data controls in the profile section at the top right, and avoid providing your real personal information, to prevent any data-leakage cases. I hope you understand these ethical considerations. For more information, you can search online and get deeper insight into ethics for LLMs and AI use. With this, our chapter closes. The next chapter is how to use LLMs for specific tasks: we will look at the capabilities, advantages, and disadvantages of the language models we have right now — ChatGPT, Gemini, Claude, Perplexity.ai — and other image generation tools as well. As a prompt engineer, you need to be good at writing prompts in general, not perfect at one specific language model; you have to use different language models to solve particular tasks.
For that, you have to know the capabilities of each and every LLM. As a prompt engineer, you should be good at writing prompts, full stop — not just writing prompts for one specific LLM. You should be able to write prompts for every LLM; only then can you call yourself a prompt engineer. So our next chapter and lesson is understanding the capabilities of different LLMs — ChatGPT, Claude, Gemini, and other image generation tools — and we will explore their pros and cons by example, looking at the output of each LLM. Let's dive into that. 40. 5.4.1 Understanding Different LLM's Pros & Cons: In this lecture, we are going to cover a very important skill every prompt engineer should have: understanding the pros, cons, and capabilities of different LLMs. Before we write a prompt or use an AI chatbot to solve our task, we should know which LLM will best suit that particular task. You may be good at writing prompts, but if you don't know which chatbot has the strengths to solve a particular task, you are missing the most important step that comes before writing any prompt. To learn this skill, we have to know which LLMs have which capabilities and limitations, so we can choose the best tool for the task. As a prompt engineer, you should be great at writing prompts and equally good at knowing which LLM best suits the task you want to solve. This skill is built by using different LLMs to solve the same tasks: that way we can check the strengths and weaknesses of each LLM and choose the best one for a specific job. So here is what we are going to do: we will look at different LLMs — ChatGPT, Gemini, Claude, and more.
We will take the same specific task to all the LLMs to check which one solves it most efficiently, and we will see some tips for matching prompts to the strengths of each model — that is, which model has the strength to solve a particular problem or task. I hope that is clear. So there is a question: why does understanding LLMs matter? As I said, each language model has its own strengths and capabilities, and knowing them allows you to tailor your prompts effectively. As a prompt engineer, you should be good at writing prompts for every language model — ChatGPT, Gemini, Claude, or any other — not just a master of one. If you have mastered only one specific LLM, you can use that skill for tasks that match that model's strengths; but when a task is not easily solved by the model you know most deeply, it can be a waste of time to keep writing prompts for it when another LLM could solve the task effectively. As prompt engineers, we have to see which LLM is the perfect match for the task. So what is the best way to test the LLMs and choose the right model for a particular task? Here is the tip: test the same prompt on different models to compare outputs and identify the best fit for your needs. This is the best tip there is.
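The tip above — run one prompt through several models and compare the outputs side by side — can be sketched as a small test harness. Note that `query_model` below is a hypothetical stand-in of mine, not a real API: in practice it would wrap whatever API client you use, or simply represent pasting the prompt into each chat UI and recording the reply.

```python
def query_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for calling a real model.

    In practice this would call ChatGPT, Gemini, Claude, etc. via their
    APIs, or you would paste the prompt into each chat UI by hand.
    """
    return f"[{model}] response to: {prompt}"

def compare_models(prompt: str, models: list[str]) -> dict[str, str]:
    """Send the SAME prompt to every model and collect the outputs."""
    return {model: query_model(model, prompt) for model in models}

prompt = "Explain deep learning in simple words."
results = compare_models(
    prompt, ["ChatGPT", "Gemini", "Claude", "Perplexity", "Copilot"]
)
for model, output in results.items():
    print(f"{model}: {output}")
# The final step is human: read the outputs side by side and decide
# which model's answer best fits your task.
```

The key design point is that the prompt is held constant while only the model varies, so any difference in the outputs reflects the models themselves.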
So, to test which LLM best matches our task, we use the same prompt on different LLMs — ChatGPT, Gemini, Claude, and the others we will see in the upcoming lecture — compare the outputs, and identify the best fit for our needs. What does that mean in practice? For example, if your task is content creation for education in some particular domain, you write one prompt for it, and that same prompt should be used in all the LLMs — ChatGPT, Claude, Gemini, and the other AI models. Each AI generates its output, and we analyze all of them; after analyzing the outputs of all the LLMs, we can finalize which model solves this task best. After that, we write the follow-up prompts and go deeper. To understand this better, let's jump into all the LLMs, test a single prompt on the different models, and then compare. Let's see. 41. 5.4.2 Understanding ChatGPT Capabilities with Use Case 1: I have already opened all the LLMs: ChatGPT, gemini.google.com, claude.ai, perplexity.ai, Microsoft Copilot, and Meta AI. All these AI chatbots are built on LLMs, and some of them double as search engines, like Microsoft Copilot. ChatGPT is developed by OpenAI, Gemini by Google, Claude by Anthropic, Perplexity.ai by the Perplexity company, Microsoft Copilot by Microsoft (that is, Bing), and Meta AI by Meta (Facebook). I hope that is clear. So let's check: I am in ChatGPT now, and we will use the same prompt on all the models, analyze the outputs, and finalize which model is the best fit for our task. Remember what I said earlier in the lesson on applications of prompt engineering.
Recall that before giving the task directly, we train the AI in a step-by-step process. Let's start: "Hi." — "Hi, Saif." As you can see, my name is stored in ChatGPT. Next I paste the simple prompt I copied earlier: "You are a helpful assistant." ChatGPT understands it. Now I take the same prompt to another language model, Gemini. I start with "Hi" — it answers with its greeting and features — and then I paste the same first prompt. Let's see what Gemini says. Something seems to go wrong: "You want me to do what you tell me, and you want me to be accurate and helpful — I understand." And now it is taking time, so let's go to the next LLM, Claude. "Hi" — then I paste the helpful-assistant prompt. You can see: "I appreciate your interest in my capabilities… How can I assist you today? I'm ready to help with a wide range of tasks while ensuring the output is responsible and beneficial." That's good — compared to Gemini, which is still taking time to think, ChatGPT and Claude are responding more capably here. Now Perplexity.ai: when I say "Hi," it replies that "hi" is a common, informal greeting used to acknowledge someone or initiate a conversation — it even explains why we use it. Then I give Perplexity the helpful-assistant prompt and see the output: "Yes, I understand. I'm here to assist you, providing accurate information and flagging unusual or inaccurate wording." Good — it also produces a sensible response. Next I open Microsoft Copilot and say "Hi" there too: "Hi there, how is it going tonight?" Then I paste my initial prompt.
"As I understand, I am here to help you with accurate information." It's good, right? Now let's try Meta AI. I start with "Hi," but it requires a login to continue and seems to have some issue, so I will try it another time. Back to the remaining LLMs: Gemini is still taking time, so let's refresh it — sometimes Gemini takes longer. After the initial prompt setup it finally answers: "I understand that you want me to be a helpful assistant; I will do what you tell me." That works too. Now let's give a specific task. We could take business, content creation, anything — let's take this: "You are an experienced AI expert in the field of deep learning. Your task is to explain deep learning in simple words. Are you understood?" ChatGPT: "Understood, Saif. I will explain deep learning in simple words. Here we go." It drafts the output for my task, explaining in simple words what deep learning is about: "Deep learning is like teaching a computer to learn from…" and so on. That is the output from ChatGPT — my internet connection is a bit slow here. Now, what if I use this same prompt in the other LLMs? 42. 5.4.3 Capabilities of Gemini, Claude, Perplexity & Copilot with Use Case 1: Let's use the same prompt in the other LLMs. I will start with Gemini — I paste it and see what happens. Here is its simple explanation of deep learning:
"Imagine teaching a child to recognize a cat. You wouldn't just tell them to look for whiskers, ears, and a tail. Instead, you would show them many pictures of cats." Honestly, this one is a bit harder to follow. Compare it with ChatGPT's output: "Deep learning is like teaching a computer to learn from examples" — that makes sense immediately. ChatGPT's explanation reads better than Gemini's, which takes some effort to understand, as you can see. Now let's see how Claude explains our task: "I will explain deep learning in simple, approachable terms. Deep learning is a powerful type of artificial intelligence that mimics how the human brain processes information and learns from experience. Imagine teaching a computer to learn and make decisions similar to how a child learns through observation, practice, and pattern recognition." That is a good explanation — better than Gemini's. On to Perplexity.ai: "Absolutely. What is deep learning? Deep learning is a type of artificial intelligence that teaches computers to learn from large amounts of data. It is inspired by how our brain works, using so-called neural networks." It then lists key concepts, why it matters, and so on. This output has great structure and a simple explanation compared to Claude and Gemini. Finally, Microsoft Copilot: "Absolutely, let's dive into deep learning in simple words. Deep learning is a type of AI that mimics the way the human brain works, processing data and creating patterns for decision making." Notice the word "mimics" — we have already seen it, in Claude's answer.
"Deep learning is a powerful type of artificial intelligence that mimics how the human brain processes information" — Claude's phrasing is very similar. So here is the comparison: every LLM is generating decent output, there is no problem with any of them, but you can see the difference. ChatGPT has more personalization than Gemini and the other AI language models. Personalization means it explains the way a friend, a teacher, or a colleague would explain a subject or a lesson to you — that is ChatGPT's standout capability, along with recognizing your name: even when I open a new chat, it still recognizes my name. When I say hi to Gemini, it does not. That is a simple capability comparison. Before we dive deeper, let me explain what each of these actually is. ChatGPT is a large language model trained on a lot of data, as we discussed earlier, developed so we can interact with AI like human beings — a chatbot, with extras like voice mode, a search feature, and multiple model versions. Gemini, by comparison, is developed by Google itself, and its characteristic behavior is that it pulls data from the search engine — from websites — summarizes it, and gives that as the answer. Its personalization is weaker than ChatGPT's, because the answers come from websites, which present information directly rather than in a personalized, conversational structure; Gemini takes that data and generates it as output.
You can see it: "Imagine teaching a child to recognize a cat" — it jumps straight in without a personalized starting point, just presenting the output, compared to ChatGPT. Claude works much like ChatGPT: it has great features, strong reasoning, and some personalization too. Perplexity.ai, by contrast, is developed mainly for research. It has access to websites and research papers across the internet, so it can easily generate output based on research papers and real-time website data — that is why it is most effective for research, whether for papers or for fresh information. For every output it generates, it shows reference links: website sources you can open directly. You can see them here under "sources" — when I said hi to Perplexity, it took that definition from a particular website, and we can go to that website and check the definition of "hi" ourselves. For every answer, you can jump straight to the websites the output was taken from. It also shows related questions that users commonly ask — click one and it explains that next — and again suggests the sources each answer came from. Being able to click through to the real website gives us confidence in the data. That is why Perplexity.ai is good for research papers and real-time data.
As for Microsoft Copilot, it works much like Google's approach: it is also backed by a search engine — Bing chat, Microsoft Bing — and it behaves similarly to Gemini; you saw it answer "deep learning is a type of artificial intelligence" in that same direct style, pulling information from Microsoft's search the way Gemini pulls from Google's. So those are the basic capabilities. Which model should we choose for which task? They all work well, but no LLM does the job 100% accurately — all of them make mistakes, and no output from any LLM is guaranteed to be 100% accurate. Use them to automate repetitive work: they can save a lot of time summarizing information, writing content, and generating ideas. 43. 5.4.4 Understanding ChatGPT Capabilities with Use Case 2: So what is ChatGPT's main capability? It recognizes patterns across previous and upcoming prompts. For example, I told it my name is Saif, and it has a great memory feature — "memory updated" — storing names and the information we give it, which it can use anywhere later. Let's see how that helps. I will guide the AI with nothing more than: "Write content in French." Now look at the output. If you only read the prompt, you don't know what "content" refers to — but the output is the deep learning explanation, in French. I never told the AI to write the above deep learning concept in French; I just said "write content in French," and it automatically detected my intent.
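Why can a bare follow-up like "Write content in French" work at all? Because chat models receive the conversation history with each turn, so the earlier deep-learning answer is still in context. Below is a minimal sketch of that message structure; the role/content format follows the common chat-API convention, and no real API is called here — the assistant text is a made-up placeholder.

```python
# Conversation history: each turn is appended, and the FULL list is sent
# to the model on every request, so a short follow-up can refer back.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain deep learning in simple words."},
    {"role": "assistant",
     "content": "Deep learning is like teaching a computer to learn from examples..."},
    {"role": "user", "content": "Write content in French."},
]

# The model sees everything above, so "content" resolves to the earlier
# deep-learning explanation and "French" to the target language.
context = "\n".join(f"{m['role']}: {m['content']}" for m in history)
print(context)
```

This also explains the "break the chain" trick from the earlier lesson: telling the model to forget the above effectively discards the influence of the accumulated history.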
It understood: the user needs the above content in French. That is a powerful capability ChatGPT has, beyond many other language models. Look at it practically: I did not say "write the above content in French"; I just said "write content in French," and it detected my intent and generated the output in French. Now let's give Gemini the same prompt, "Write content in French." I paste it — and what happens? It also generates the deep learning explanation in French; it detected my intent too, no problem there. Claude? It also generates French content, correctly reading my intent. Perplexity.ai? Yes, it explains in French as well — it works. And Microsoft Copilot? Good — it also analyzes my intent and explains in French. So there is nothing lacking there: content creation like this is handled well by all the LLMs. Let's take another example, a different task: "Generate YouTube video ideas. Now ask me in which topic you need to generate video ideas." Let's see: "Great, Saif. For which topic would you like me to generate YouTube video ideas?" I will take AI — artificial intelligence.
"Here are some creative YouTube video ideas." You can see what it generated: beginner-friendly AI, advanced AI topics, current trends and news, conversational AI topics that go deeper and deeper, fun and interactive videos. By looking at this, you can tell that ChatGPT is great at brainstorming ideas and generating content related to almost anything. Now let's see what the other LLMs generate for this same prompt and task. 44. 5.4.5 Capabilities of Gemini, Claude, Perplexity & Copilot with Use Case 2: Let's go to Gemini and paste the prompt. It asks me which topic I would like YouTube video ideas for; I reply "AI." It produces beginner ideas, advanced-level ideas, and additional tips. Compared with ChatGPT, it also covers current trends, conversational AI topics, the future of AI, and deep dives into neural networks, which is all good. But ChatGPT is more specific. "The Future of AI: What's Next in 2030 and Beyond" is a ready-made title you can drop straight into a YouTube video. Gemini, by contrast, mostly names topics and niches within AI: "AI explained simply," "AI in everyday life," "Building your first AI model." Those are all fine, but ChatGPT generated the specific ones, including current trends like "Top 10 AI Breakthroughs You Need to Know About in 2024." For brainstorming ideas, ChatGPT shows greater strength than Gemini.
Now let's go to Claude, and watch how I interact with each LLM and how I decide which one to use for my task. (Note its own disclaimer: "Claude can make mistakes.") It replies, "Sure, I'd be happy to help you generate YouTube video ideas. Could you tell me which topic?" I say "AI," and it generates beginner-friendly content, technical deep dives, practical applications, trends and future predictions, and hands-on tutorials, such as "Testing popular AI apps: which one is best?" By looking at this output, I can conclude that Claude leans more technical than ChatGPT. That means if you are a coder, or want to learn coding, Claude is a good choice. Where Gemini and ChatGPT are best at generating human-like text and brainstorming ideas, Claude drifts into technical territory: "Building your first AI project in Python," "Creating an AI chatbot from scratch." You can infer that this model has strong knowledge of writing code, so for coding tasks Claude may give better output. Next, perplexity.ai: "Sure, please let me know which topic you're interested in." I say AI, and it gives video ideas focused on the topic: introduction to AI, AI applications, AI tools and technologies.
Notice that Perplexity's output depends on what is already available online: it is not a search engine itself, but it retrieves and summarizes content from online resources. You can even see it surfacing actual YouTube videos about AI and current tools you could use to create this kind of content. Now Microsoft Copilot. It asks which topic; I say AI, and it suggests "AI basics," "AI in everyday life," "AI in healthcare." If you observe closely, the two search-engine-backed chatbots, Gemini and Microsoft Copilot, produce very similar output: AI in everyday life, AI for kids, the ethics of AI, AI in healthcare, AI in entertainment, AI in finance, interviews with AI experts. Because both are connected to search engines, they have access to real, current data from websites and YouTube, and they generate their video topic ideas by collecting all that information about AI and its applications. That also means you can use them for automations grounded in live data; for example, you can tell Gemini or Copilot "go to this website and summarize its content." The pure language models, meanwhile, are stronger at going specific and brainstorming ideas and content. This is how you can use each AI model for each individual task.
Perplexity.ai is great for getting information from sources: real, current data from research papers and other online material. ChatGPT, on the other hand, generates purely from its training data, and so does Claude, though Claude has the technical edge you can exploit for coding (ChatGPT can code too, but Claude tends to be better on the technical side). Gemini and Microsoft Copilot you can use for summarizing videos, articles, and websites, and for market research, for instance checking which market currently has strong demand. Why? Because they are search-engine-backed chatbots with up-to-date information. To restate: Claude and ChatGPT generate output based on their training data; Gemini and Copilot draw on the sources they already index, such as websites, YouTube videos, and search results. Perplexity.ai builds its output from online sources: research papers, website content, YouTube summaries. So for current data or trending research, use perplexity.ai. Helpfully, it also lists the sources it took the output from, and you can check those links directly. That gives you confidence in the output. It is not 100% accurate, but it comes close, because it works from the original sources.
You can go straight to those sites and check the content yourself, a capability perplexity.ai has that the other LLMs lack. So, in summary: Microsoft Copilot and Gemini are best for searching and summarization; Claude and ChatGPT are good at brainstorming ideas, writing content, and generating code. I hope you now understand the main capabilities of each. As I said, there is much more to discover if you practice on your own; I took just one example to explain the method. It all comes down to how you interact with the AI models: take one specific task, write a prompt for that task, and run the same prompt on all the LLMs, ChatGPT, Gemini, and the rest. Then analyze the outputs and check which one looks best against your requirements, and go with that specific LLM to dig deeper and deeper into your complex task, or anything else you want from the AI. That is what understanding different LLMs and their capabilities means: judging them by their output on a specific task. This is important, and the skill is developed by practicing yourself with different tasks, writing the same prompt in ChatGPT and the others. You can also research online. Google "What are the capabilities of [chatbot]?" or ask for the pros and cons of AI chatbots such as ChatGPT and other LLMs. You will find articles like "Unpacking ChatGPT: pros and cons of AI chatbots you need to know." Google gives you good information here.
One caution: please avoid asking an individual chatbot whether it is better than a rival, for example asking ChatGPT "Is ChatGPT better, or Gemini?" ChatGPT will tell you ChatGPT is better. This happens across AI chatbots. Ask Gemini whether Gemini or Claude is better and it will favor Gemini; it may show some limitations and strengths of Claude as well, but the verdict tilts its own way. Ask Claude whether Claude or perplexity.ai is better, and it will explain the pros and cons of each, yet the conclusion comes out positive for Claude. So avoid that approach. Instead, watch YouTube videos about which chatbot suits a specific task, search for comparisons, and learn the deeper capabilities of each language model. As a prompt engineer, it is your responsibility to solve problems effectively by using different LLMs for different types of task. I hope you understood this lecture well. There is more that could be said, but it takes time, and ultimately it is about how you interact; this skill is developed by practicing it yourself, and only then can you choose the best LLM for you. That completes this lecture. In the next module we will look at prompting tools and other methods of using LLMs. Specifically, we will see how to use language models to generate prompts. Yes, you heard that right: techniques for using language models to write prompts, both image prompts and text prompts, as we discussed earlier in the applications section. We will use all the LLMs to see which one is best at writing prompts, and we will apply some prompt patterns in the next module.
We will go into deeper insights, and after that we will look at some prompting tools that enhance our basic prompts. With that, we will close the course. Let's dive into the next module, which covers applications and prompting tools. 45. 5.4.6 Capabilities of Deepseek, Grok Ai, Qwen Chat and Mistral Ai with Use Cases - Part 1: We have already seen how to use different LLMs for different purposes, writing specific prompts for ChatGPT, Claude, Gemini, and perplexity.ai. But in recent months some new AI models have arrived on the market: DeepSeek, Grok AI, Qwen Chat, and Mistral AI. These are the latest models, and as prompt engineers we need to explore them too; that is an important part of the job. So let's understand what these models are. DeepSeek was developed in China; you have probably already heard about it. It is a very effective model, working at roughly the level of OpenAI's ChatGPT models, and it is available for free, unlike ChatGPT's top models. Notably, it was only after DeepSeek arrived with its built-in search and reasoning features that ChatGPT added its own search and reasoning buttons. DeepSeek is an excellent model; we will see what it does shortly. The next model is Grok. Grok AI was developed by Elon Musk's American company, and it is a very fast and very smart model right now. Then there is Qwen Chat, developed by Alibaba, also from China.
It also offers strong models and is very effective. You can see there are plenty of options here, with what looks like a better UI: a thinking mode, optional web search, and so on. You can use it for anything. Last but not least is Mistral AI, another good model. The main purpose of learning about these models is to serve our core skill: writing effective prompts. Always remember one thing: each AI model has its own capabilities on particular tasks, and the same model is not equally good at every task. As prompt engineers, we need to write prompts for every LLM. After writing the same task prompt for the different LLMs, we can choose the best model for our requirement. As I said earlier: write the same task prompt for all LLMs, then evaluate, then check which LLM's output comes closest to your requirement, and choose that LLM to go in depth and solve the task. I hope you understand this point. We already did this with some of the best models, ChatGPT, Gemini, and Claude, in the previous session. Now we will see how these latest models generate output. I always say the same thing: prompt engineering is nothing but writing effective prompts for an LLM, and "LLM" simply names models like DeepSeek, ChatGPT, Grok, Qwen, and Mistral. My focus is on the LLM interface, which means you need to become better at writing prompts. That's it. You are not learning to master one particular AI model; you are mastering the art of writing prompts. I hope you understand.
For that, you need to master writing the prompts, not the LLMs themselves. With that in mind, we need to check and test which prompt patterns work well for a particular LLM. Remember one thing: the prompt patterns work on every LLM, there is no doubt about that, but some LLMs cannot follow the previous-conversation pattern as well as others. In those cases we choose the LLM based on its capabilities and functionality. Let's test this on the most advanced and smartest models available right now. We have already covered ChatGPT, so let's start from "hi" in each of these LLMs. First up is DeepSeek. I will enable DeepThink: with it on, the model thinks before generating its answer, and it will also search the web if I enable the search button. I take the basic interaction prompt we discussed in the earlier section, paste it in, and submit. You could submit this prompt without clicking DeepThink, but I recommend using that functionality, because the thinking capability is very powerful and tends to give the best output. Let's see the result. You can watch it reason: "The user wants to confirm that I understand these instructions." Because it thinks first, we can expect a strong answer, and here it is: "Absolutely. I understand your instructions clearly. I will prioritize accuracy, avoid any inappropriate content," and so on. Notice that ChatGPT generated a very similar answer for us. Now let's paste in the simple task.
It takes some time, but that is the thinking mode, which is the best way to get reasoning. You can watch it start thinking and see how it reasons, which is a good sign for the output. "Got it. Let's break down deep learning in simple terms. Imagine teaching a computer to recognize cats in photos." That's good: it covers the layers, training, why it is called "deep," real-world examples, and key takeaways. You can compare this output with ChatGPT's: "Deep learning is teaching a computer to learn from examples." DeepSeek's answer has a slightly technical flavor, for example "more layers = better at handling complex tasks," concise examples, and a key takeaway, but it is still a good plain-terms explanation for someone who does not yet know the subject. ChatGPT's version is also good output: straightforward and well written for technical people who already roughly know what deep learning is. You can check both. Now the next task: "write content in French." Let's copy it over and see whether DeepSeek follows the previous-conversation pattern. Remember one thing: I am not explaining every feature of DeepSeek or any other LLM here. I am showing you how to write the prompts and how to test the different LLMs on your task so that you can choose the better one to solve it. I am not teaching mastery of DeepSeek or Grok or Qwen; I am teaching prompt engineering, so focus on writing the prompt. And you can see the result: it generates content in French, following the previous pattern.
I did not tell the AI "write the content in French for the above content"; I just wrote "write content in French." I never specifically said "generate this for the above explanation." The model inferred on its own: "I need to generate the content in French for the explanation above." It follows the previous-conversation pattern, which is exactly what we require. Good. Now our next task: generate some YouTube video ideas. I paste it in, and it thinks: "The user wants to generate YouTube video ideas." Then, like ChatGPT, it asks for the topic first: "Got it! For which topic would you like YouTube video ideas? Drop your topic and I'll brainstorm creative, engaging video concepts for your audience." Let's give it the same topic, artificial intelligence, so we can check which LLM is best for this particular task. It starts thinking, then generates specific video concepts about artificial intelligence: "Here are 15 engaging YouTube video ideas about artificial intelligence," including "AI 101," "Top 10 free AI tools," "AI vs. human creativity," "How I built an AI system for my home," "Will AI steal your job?", challenge videos, and a "creepy genius AI" angle. Compare that with ChatGPT's list, which stayed at the level of "What is artificial intelligence," advanced AI topics, and current trends. DeepSeek goes further in depth on the main topic: the future of AI, what comes next and beyond, how AI will shape the smart cities of the future.
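Why does a bare "write content in French" resolve to the earlier explanation? Chat models receive the whole conversation, so the previous turns are part of the input they condition on. A minimal sketch with a generic message list (the shape most chat APIs use; the history content here is illustrative, not a real transcript) makes that visible:

```python
# Chat models receive the whole conversation, not just the newest message.
# The history below is illustrative, not a real transcript.

def build_chat_request(history: list, new_user_message: str) -> list:
    """Return the full message list the model would actually condition on."""
    return history + [{"role": "user", "content": new_user_message}]

history = [
    {"role": "user", "content": "Explain deep learning in simple terms."},
    {"role": "assistant", "content": "Deep learning teaches computers to learn from examples."},
]

request = build_chat_request(history, "Write content in French.")
# The model sees all three messages, so "content" in the last message can be
# resolved to the deep learning explanation already present in the conversation.
```

This is why the pattern works on every LLM that keeps the chat history, and why a model that drops or truncates the history cannot follow it.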
ChatGPT's "AI in 2030: predictions from experts" is also good, but if you are looking to generate a richer set of video ideas, DeepSeek is the better choice here. Why? Because it is a thinking model, so we can expect better output, including current market trends. Don't focus on the particular output I got; I am showing you how to test AI models on a task so that you can choose one yourself and use it to complete your own tasks. I hope you understand. Let's quickly copy everything from the starting point and move on to the next AI model. 46. 5.4.7 Capabilities of Deepseek, Grok Ai, Qwen Chat and Mistral Ai with Use Cases - Part 2: So let's copy all these prompts from the beginning and look at the other AI models. Next is Grok. I start from "hi" and choose a model. Note that switching models within any AI product is how you get better output: the more advanced the model, the more effective the output. Nothing changes about how you write the prompts, only the output changes when you change the model. That is why I keep telling you to focus on writing the prompts. Its thinking mode gave no response, just "please try again later or use a different model," so let's take the second model and start over. "Hello, how can I assist you today?" I copy in our starting interaction prompt: "Yes, I understand. I'm here to help as your assistant." It is a very fast model; you can see how quickly it replies. Now the deep learning task: "Yes, I understand. As an expert in deep learning, I will explain it in simple words for you. Deep learning is a way to teach computers to learn and think a bit like humans." This is a simple explanation, similar to ChatGPT's.
It then breaks down how it works, simply: layers, learning, depth. That part is not especially strong. Let's jump to the next task, "write content in French," and move fast. The workflow is simple: write the prompts for your specific tasks once in one AI model, then copy each prompt and use it in all the other models. Only then can you easily check the outputs side by side and choose a particular LLM. You can see Grok has also generated content in French; you can check it over. First and foremost, that is what I am teaching you: copy the same task prompts, paste them into all the AI models, evaluate the outputs, and check which is better. On to our next task, generating YouTube video ideas. The model we were using hit a technical issue, with an apology for the inconvenience, so we switch to another model, and now it generates: "Great! For which topic would you like video ideas?" I quickly take the artificial intelligence topic again and paste it in. One user-experience drawback here: after I submit, nothing appears for a while, and only after some time does the output show up, which hurts the experience. Eventually it arrives: "Here are some YouTube video ideas focused on artificial intelligence," including "AI explained in 5 minutes" and similar. Not all of them are strong, but some are good, and overall the ideas are similar to DeepSeek's. You can check them over; they are roughly on par, no problem. That covers Grok AI. Now let's check Qwen Chat, which is very powerful right now, from the Chinese company Alibaba. Let's start from "hi."
You can see the thinking mode and the other options you can enable. It asks me to sign in, so let's quickly do that. I'm in. I start with "hi"; it thinks and responds: "Hello! How can I assist you?" Now our first task, the interaction prompt. It is thinking. Notice that the Chinese models, DeepSeek and Qwen Chat, use the same method: a thinking capability. "I understand your instructions clearly. I will act as a helpful assistant," and so on. Very good. Next, the same deep learning task. It thinks and then writes out the answer: "Deep learning, explained simply." It describes deep learning as a brain-inspired system in computers that learns from examples, then covers why it is powerful and everyday use cases. Not flashy, but well written and easily understandable; check it yourself. Now another task: writing the content in French. Why this task again? Because we are checking the previous-conversation pattern, whether this LLM recognizes the earlier output. We give no extra instruction such as "write the above content in French"; we just write "write content in French," and it automatically considers the previous output and converts the explanation above into French. "About deep learning..." That's not bad. We don't read French, but you can run it through a translator and check it. Next, the YouTube video ideas task. I paste it in and it starts thinking. By the way, if you are looking to master one particular AI model, go to YouTube and search for something specific, like "Qwen 2.5 mastery tutorial" or "DeepSeek mastery tutorial."
You can get much more specific insights from those YouTube videos. I hope you understand this point. Back to Qwen: "Got it! Let me know the specific topic or niche." I give it the same topic, artificial intelligence. It starts thinking, then generates topics organized under headings: beginner friendly, hands-on tutorials, ethics and controversies, industry applications, future trends and predictions, pop culture and fun content, courses and learning guides. For me as a beginner looking to create content around artificial intelligence, this is genuinely helpful: the topics are already categorized under those headings, so I can structure everything directly from them. It generated things like "AI for absolute beginners" and machine learning hands-on tutorials, with the topics laid out very clearly. A very strong output from Qwen 2.5 Max. Now let's jump to our last model, but not the least: Mistral AI. Let's quickly run through the same steps. If this repetition bores you, you can skip ahead, but do learn how I am testing all these AI models. It is very, very fast: the moment I hit enter, you can see it generate the output within seconds. "As I understand..." Let's see what its strengths are. I take the specific deep learning task and paste it in. Notice it is not a thinking model: it starts generating instantly. "As I understand, I will explain deep learning in simple terms. Deep learning is a type of machine learning that uses artificial neural networks to analyze..." If you think about it, this is the slightly technical version. For a beginner who does not know what deep learning is, "artificial neural networks" is itself unknown. That is the problem with some AI models: they do not think first.
If you look at the thinking models, Qwen 2.5, DeepSeek, even GPT with its reasoning mode, they have a kind of deliberate reasoning: they generate the output only after thinking, and the result tends to come out in genuinely simple terms. Mistral is not a thinking model: the moment I paste the prompt, it starts generating within seconds, so you can anticipate what kind of output you will get. Let's take another task, "write content" in French, to check whether it recognizes the previous-conversation pattern. It is very fast again, though this attempt stalls; it takes time, so I retry, and now it has generated. Then: "Sure! For which topic are you looking to generate YouTube video ideas?" I quickly give it our topic, artificial intelligence. At first nothing happens, then it works, and the result is good. You can see: "Introduction to AI: a creative, beginner-friendly video explaining what AI is." That is valuable because it is guiding me on how to create the video, which type of topic to pick, and what to cover inside each video. "Introduction to AI in everyday life" is a very good example: even if I know the topic, I may not know which points I need to cover in that particular video, but Mistral gives those in-depth pointers about what to include. That is better for me; I don't need to search again online or in another AI model, because it generates the complete guidance directly. Also notice something very important: it shows its working, "searched YouTube video ideas on artificial intelligence." The output is generated from videos that people have already made on these topics, which is great for me, because I can take inspiration from them when creating my own content.
And you can see it is citing its sources, working much like perplexity.ai. In the interface you can share your chat, start a new chat, and use tools. So we have now seen the capabilities of these different AI models. As I said, this was just a demonstration of how to test an AI model. Remember one thing: you can do much more with DeepSeek, more with Grok, more with Qwen Chat, and more with Mistral AI; it all depends on your particular requirements and tasks. Always remember, if you are looking to master a particular model, go to YouTube and type something specific, for example "DeepSeek tutorial," and you will find in-depth usage tutorials and more insights. In this course you are seeing only the testing and the evaluation of outputs. Why? Because as a prompt engineer, you need to master writing the prompts, not any one particular LLM. You need the capability to write prompts for any LLM model. That is why we focus on writing prompts, testing, evaluating, and choosing the best LLM for our task. I hope you understand these points. Different LLM models are better at different tasks, maybe coding, maybe copywriting; we don't know in advance, which is exactly why we test and evaluate. If you are in the marketing industry or the coding industry, go and check which particular LLM is better at your kind of work. For coding you might take Claude, or Mistral AI, or DeepSeek, and you can learn the specifics from YouTube as well. I hope you are following all these points. So far we have seen nearly nine different LLM models, tested and evaluated their outputs, and chosen the best LLM, and as a result I am using the one with the best output.
So after evaluation, here is my thinking for the YouTube-video-ideas task. Artificial intelligence is a fairly technical topic, with automation involved — so which model would I choose? I would choose Mistral AI. Why? It saved me the most time and gave the best output: it didn't just list video topics, it also explained what I should cover in each video — for example, "a creative, beginner-friendly video explaining what AI is" — which matters because I don't necessarily know what to cover myself. It also grounded its suggestions in actual YouTube searches for AI video ideas, which is important for SEO. So for this task, Mistral AI worked best for me; for your task, a different AI model may win, and that's fine — choose whichever gives you the best output. Here is your assignment: take one particular task, test it across all nine AI models, and check which output comes closest to your requirement. Only then choose that model and go in depth with it to solve that task. Let's jump into the next lesson. 47. 5.5.1 How to Use Different LLMs to Write Effective Prompts?: Okay, welcome back. In this lecture we will see how to use different LLMs to write effective prompts. As prompt engineers, we should know this technique. Why?
Because we often lack knowledge about a particular task — the background or additional information the AI needs in order to understand our main intent and solve the task well. If we use an LLM to write the prompt for us, it produces complete, well-grounded instructions that we can customize to our requirements and reuse in a chatbot, bridging the gap between our knowledge and the AI's knowledge so we can expect the best output. There are several benefits to using LLMs to write prompts. First: improved accuracy and precision. We don't know everything, but an LLM — ChatGPT or any other AI chatbot — has deep knowledge about the task we want to solve, so it can put that information into the prompt for us. Remember, the quality of the output depends on the quality of the prompt: the more detail you give the AI, the better the output. We often lack the deep knowledge needed for that level of detail, but LLMs have it, because they are trained on vast amounts of data. And if we use a prompt pattern like "act as a persona," assigning the model a specific role, it goes even deeper into that specific knowledge, and the resulting prompt is far more detailed.
We can use AI in more ways than we usually imagine. So: the first benefit is improved accuracy and precision through detail. The second is adaptability across use cases — marketing, education, business, coding, and more. Whatever LLM you take, it adapts easily to whatever input we give, because it has broad knowledge across all these domains. So we use an LLM to save time writing the basic, foundational prompt, customize it with our own knowledge on top, and then reuse it in a chatbot to get the best output. The third benefit, which is very important, is iterative optimization. We covered this in earlier lectures: iterative means taking feedback from the previous output and changing the prompt accordingly, to get a better-optimized output the next time. The fourth benefit is that non-experts can leverage LLMs to create high-quality prompts without deep knowledge of AI or NLP techniques. If you lack expertise in how LLMs or NLP work, you can still use the LLMs themselves to write effective prompts — often better prompts than a human would write, because the model has the deep knowledge needed for detail.
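The iterative-optimization loop described above — generate, judge, refine the prompt, repeat until satisfied — can be sketched as a small Python loop. This is a toy sketch under stated assumptions: `generate` stands in for a real model call, `satisfied` stands in for your own human judgement, and `refine` simply appends the feedback, whereas in practice you would rewrite the prompt by hand.

```python
# Toy sketch of iterative prompt optimization: refine until satisfied.
def refine(prompt, feedback):
    # Placeholder refinement: append the feedback as an extra instruction.
    return prompt + " " + feedback

def iterate_prompt(initial_prompt, generate, satisfied, max_rounds=5):
    """Run generate/judge/refine rounds; return the final prompt and history."""
    prompt = initial_prompt
    history = []
    for _ in range(max_rounds):
        output = generate(prompt)
        history.append((prompt, output))
        ok, feedback = satisfied(output)
        if ok:
            break
        prompt = refine(prompt, feedback)
    return prompt, history

# Toy run: the "model" just echoes the prompt in upper case, and we are
# satisfied once the prompt mentions the target audience.
final, history = iterate_prompt(
    "Write a YouTube script about AI.",
    generate=lambda p: p.upper(),
    satisfied=lambda out: ("BEGINNERS" in out, "Target absolute beginners."),
)
```

The shape of the loop is what matters: each round's prompt is the previous prompt plus whatever the last output taught you.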
So even as a beginner prompt engineer, you can use LLMs to write basic and intermediate prompts. We already saw in a previous lesson how to ask ChatGPT to suggest a better version of our prompt — the model tells us where to improve it, and we can then use the improved prompt in any LLM like a professional prompt engineer. The fifth benefit is testing and evaluation. To arrive at one effective prompt, we have to iterate with the AI model from the start: begin with a simple prompt, check the output, make adjustments in the next prompt, analyze the output again, and keep going until the output satisfies us; only then do we finalize the main prompt, informed by everything the earlier attempts taught us. Those are the benefits: by using LLMs, we can write the best prompts. We've covered the benefits — now let's go practical and see how to use different LLMs to write effective prompts. 48.
5.5.2 How to Use ChatGPT for Writing Advanced Prompts - Part 1: I already have ChatGPT, Gemini, Claude, Perplexity.ai, and Microsoft Copilot open. These are the popular ones; you can also check other LLMs like Llama, but I'll use these five to explain. Before asking the chatbots to generate specific prompts, always start with an initial setup prompt like this: "You are a helpful assistant. You will do what I tell you. You have experience in detecting unusual wording and inaccurate information, and you will generate the best and most effective output without any mistakes, hallucinations, or inappropriate information. Understood?" This extra instruction keeps the AI working within those constraints for the rest of the conversation: before generating any output, it will check that the output is effective, error-free, and appropriate. So start with this initial prompt setup — it is very helpful. ChatGPT replies: "Understood. I will follow your instructions, ensure the output is accurate, effective, and error-free, and avoid any unusual or inappropriate information. How can I assist you?" Now, when using LLMs to write specific prompts, remember two things. To use the full potential of an AI language model on a task, you need to point it at specific knowledge about that task. For example, if you are a doctor, don't stay general — go specific.
That means a cardiac surgeon, an ENT specialist, or a nutritionist — some specific role. Then tell the AI: "You are an experienced prompt engineer, specializing in nutrition." You have to train the AI as specifically as possible to get a specific prompt back. Keep these two points: tell the AI it is an experienced prompt engineer, and name the exact area — nutrition, if you want prompts in that space. You can go further on top of that: "You have ten years of experience in nutrition as a prompt engineer," and add any additional detail about exactly which kind of prompt you need. Train the AI model to your requirements. In my case, I could take education — eighth-grade physics — or coding, or content generation. I'll pick an area where I have my own knowledge, so I can judge whether the generated prompt would actually help me; I have some specialization in Python programming, for example. You should likewise practice writing prompts with LLMs in whatever area you know — but remember, as a prompt engineer you should be able to write prompts for every specific area, not only nutrition or Python code. For this example, suppose you want the best prompt for marketing — specifically, the psychology of customers. So what will I tell the AI? "You are an experienced prompt writer..."
"...in the field of the psychology of humans in marketing." What I have told the AI is: I need a specific prompt about the psychology of humans in marketing, and I primed it with "You are an experienced prompt writer" — the act-as-a-persona prompt pattern. Just "you are an experienced prompt writer" would be enough, but to get the best insight from the AI you should be specific; writing prompts for a specific application is what prompt engineering is all about. You can go as deep as you like — the psychology of women in marketing, or of men only, online marketing versus offline marketing — whatever your requirements demand. In this example my instruction is: "You are an experienced prompt writer in the field of the psychology of humans in marketing. Your task is to generate two to three different versions of prompts for an AI LLM." And it generates exactly that: two or three different prompt versions. Look at the example: prompt one, "Behavioral insights for marketing strategy — You are a marketing psychologist tasked with analyzing customer behavior." Notice that the model itself knows the act-as-a-persona pattern: the prompt it wrote opens with "You are a marketing psychologist." That shows the importance of the persona pattern — even the AI uses it when writing prompts.
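The first method described above — a persona line plus an explicit request for N prompt versions — can be captured as a reusable template. The wording below is a paraphrase of the on-screen prompt, not a verbatim copy, and `meta_prompt` is a hypothetical helper name.

```python
def meta_prompt(field, n_versions=3):
    """Method 1: persona line + request for N different prompt versions."""
    return (
        f"You are an experienced prompt writer in the field of {field}. "
        f"Your task is to generate {n_versions} different versions of "
        f"prompts for an AI language model, each using the "
        f"'act as a persona' pattern."
    )

print(meta_prompt("the psychology of humans in marketing"))
```

Changing `field` is how you "go deeper": `"the psychology of women in online marketing"` will pull a more specific prompt out of the model than `"marketing"` would.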
So the AI itself uses the act-as-a-persona pattern in the prompts it writes — "You are a marketing psychologist tasked with analyzing customer behavior." It generated three different versions of prompts related to the psychology of humans in marketing; you can ask for four, five, or ten, according to your requirements. Sometimes the AI will generate something other than what you actually wanted — in that case, add extra information, for example: "These prompts will be used in different LLMs to explore the psychology of humans in marketing." You can then use the generated prompts directly in ChatGPT or in other language models. Here is another benefit. Suppose I wrote the prompt myself, without using an LLM. I don't know about "the emotional and cognitive factors influencing brand loyalty" — I lack that piece of consumer psychology — so I would simply leave it out of my prompt, the output would skip it, and I would lose that information entirely. Even using AI, I cannot write a sufficiently detailed prompt on my own, because I don't know what I don't know.
But the AI knows the task we describe in depth, because it was trained on all these topics and resources, so it writes the prompt in full detail. The key is the opening pattern — "You are an experienced prompt writer in the field of psychology…" — and the deeper you go with it, the deeper the AI's output goes. Look at the resulting prompt: it is written better than I could write it myself. That is the power of using LLMs to write effective prompts and to use AI models at their full potential. You can see the three different prompt versions; test which one generates the best output for your task. 49. 5.5.3 How to Use ChatGPT for Writing Advanced Prompts - Part 2: The second method: give the model your own prompt and ask it to suggest a better version. Look at the example: "Here's a better version of your prompt, refined for clarity and impact: You are an expert in crafting AI prompts, focused on the psychology of human behavior in marketing. Your task is to create the two to three most effective variations of prompts that can guide AI in producing insightful and actionable output related to this field." See how much more professional that is than the prompt I wrote. That is the best way to improve your basic prompts: take the AI's help. So you can either tell the AI to generate a prompt from scratch, or write one yourself and ask the AI to suggest a better version — use both methods to get the best from the AI. One caveat: the result also depends on the model. With GPT-3.5 or GPT-3.5 Turbo you won't get the best output.
But with GPT-4 or GPT-4o you can — it depends on the model you are using. You can also use the cognitive verifier pattern. I'll copy the previous prompt: "You are an experienced prompt writer in the field of the psychology of humans in marketing. Your task is to generate two to three different versions of prompts for AI." But instead of letting the AI generate the prompts directly from its own knowledge, I add: "Ask me sub-questions related to the main task that you require in order to generate the prompts." What happens now is that the AI asks me a series of sub-questions about the psychology of humans in marketing, and after I answer all of them, it generates the effective prompts for me. When is each approach useful? When I don't have knowledge of the task — say I know nothing about the psychology of humans in marketing — I simply define the task ("You are an experienced prompt writer in this field; generate two to three different versions of prompts") and the AI draws on its own knowledge to write them. But when I do have specific knowledge of the area, I tell the AI to take the data from me — to use information from my side to generate the different versions of the prompt.
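The cognitive verifier instruction just described can also be written as a template: the same persona opener, plus a request that the model interview you before writing anything. As before, the wording is a paraphrase and the helper name is hypothetical.

```python
def cognitive_verifier_prompt(field, n_versions=3):
    """Method 3: make the model interview you before writing the prompts."""
    return (
        f"You are an experienced prompt writer in the field of {field}. "
        f"Before writing anything, ask me sub-questions about my goals, "
        f"audience, and constraints that you require for the main task. "
        f"After I answer them, use my answers to generate "
        f"{n_versions} different versions of an effective prompt."
    )

print(cognitive_verifier_prompt("the psychology of humans in marketing"))
```

The design choice here is who supplies the detail: method 1 lets the model fill the gaps from its training data, while this template forces the detail to come from your answers.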
That is the difference: in the first method the AI uses its own knowledge; here it uses mine. I told it, "Ask me sub-questions related to the main task that you require to generate the prompt," and it asks a set of questions about the psychology of humans in marketing. Now I answer each one — rough answers, for demonstration. Age group: 18 and up. Goal: brand awareness or sales — let's take sales. Psychological factors: trust. Advertising tone and style: casual. It also asks about competition and market position — "Who are your main competitors in the market?" — let's say Amazon — and whether any market trends or behaviors influencing interest should be considered in the prompt; you can answer those too. After hitting Enter, it generates the two to three best versions of the prompt: prompt one, "Trust-building marketing strategy"; prompt two, "Casual trust-based marketing campaign"; prompt three, "Trust and authenticity in online sales." Look at one of them: "You are a marketing expert specializing in building trust with a young audience. Create a sales strategy that leverages psychological triggers to increase conversion rates. Focus on how to use social proof…" These prompts are far more effective than anything I would write myself, because the AI combines its own knowledge with the requirements it gathered from me by asking questions.
After I fed in my requirements and my own data as answers to its questions, you can see how effective and detailed the resulting prompt is: a marketing expert specializing in building trust with the audience, focused on conversion rates, using social-proof strategies to create that feeling of trust. Even with prompt-engineering skill, we could hardly write this ourselves — that is the power of using LLMs to write effective prompts. So those are three methods you can use to write prompts with LLMs. There are other prompt patterns as well; practice with different patterns and variations, test different prompts yourself, and you will build that knowledge. Let's recap the three methods. First method: tell the AI, "You are an experienced prompt writer in the field of the psychology of humans in marketing," and instruct it to generate two to three different versions of prompts; the AI uses its own data and knowledge of the field to write them. Second method: ask the AI to suggest a better version of a prompt you wrote yourself — the question refinement prompt pattern.
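The second method — handing the model your own draft and asking for an improved version — also reduces to a small template. `refine_request` is a hypothetical helper, and the instruction text is a paraphrase of what was typed in the lesson.

```python
def refine_request(draft_prompt):
    """Method 2: ask the model to improve a prompt you wrote yourself."""
    return (
        "Here is a prompt I wrote:\n\n"
        f"{draft_prompt}\n\n"
        "Suggest a better version of this prompt, refined for clarity, "
        "specificity, and impact, that I can reuse with any AI language model."
    )

draft = ("You are an experienced prompt writer in the field of the "
         "psychology of humans in marketing. Generate two to three "
         "different versions of prompts for AI.")
print(refine_request(draft))
```

You would paste the returned message into the chatbot, then copy the improved prompt it suggests back into whichever LLM you are actually using for the task.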
And the third method is the cognitive verifier pattern, in which we instruct the AI to ask us sub-questions related to the main task before it generates the prompt. Here the AI uses my own data, gathered by questioning me, so the output is as specific and effective as possible. Use these three methods according to your needs: if you have specific knowledge of the task, use the questioning pattern — answer the AI's questions and you will get the most effective prompts. That's all for ChatGPT, which works very well at prompt writing. 50. 5.5.4 How to Use Gemini, Claude, Perplexity & Copilot to Write Effective Prompts: Now let's see whether the other LLMs are equally capable of generating prompts for our requirements. I'll copy the same messages, without any changes, into the other LLMs and check the results. First, Gemini: it has generated three versions of prompts — good. Now let's try the second method.
Gemini also suggests a better version of my prompt, as ChatGPT did, but it is not as effective: compare the two and you can see Gemini's version lacks ChatGPT's detail. Third method: Gemini likewise asks me questions — ideal customers, competitors, and so on — and after I answer, it generates prompts. But observe them: they are not as well written or as effective as ChatGPT's, which are structured, detailed, and use the act-as-a-persona pattern. That is a capability ChatGPT has, and it is why I recommend ChatGPT for generating effective prompts. Gemini is essentially a search-engine chatbot, with other strengths. ChatGPT and Claude are not search engines — ChatGPT has recently added a search feature, but before that it was a pure language model, trained in a way that lets it apply prompt patterns effectively, so with it we can use effective prompt patterns and get effective prompts back. Gemini, being a search-engine-style chatbot, doesn't apply those patterns as well. For that reason, we use ChatGPT to write the best prompts.
You can observe these outputs yourself and compare them with ChatGPT's. Now let's test Claude against ChatGPT, using exactly the same prompts in every LLM at the same time. First method: Claude's prompt one is even more detailed than ChatGPT's — impressive. Perplexity.ai is also decent; I'll come back to it, because it illustrates the difference between search-engine LLMs and pure language models. Comparing the first-method outputs: ChatGPT's versions are here, and Claude opens with "Act as a senior consumer-psychology researcher with 20 years of experience in behavioral economics and marketing." Claude goes even more specific and carries more detailed information than ChatGPT — look at its second and third prompts too. But notice one thing: of Claude's three prompts, only the first uses the act-as-a-persona pattern; the other two simply state the task without assigning a role, and they lose something for it. So comparing Claude and ChatGPT, ChatGPT still has the stronger overall features for generating effective prompts.
Claude also has real strengths, and you can borrow from it: take Claude's extra detail and fold it into a persona-style prompt, since the act-as-a-persona pattern is what gets the best output from the AI. In other words, use ChatGPT's prompt as the base and add the information Claude provides where ChatGPT is lighter. One more observation: we told the AI to generate prompts only about the psychology of humans in marketing, but Claude drifted toward marketing research rather than staying on the specific topic, while ChatGPT stayed personalized and very specific to our task. That is why we use ChatGPT to write effective prompts — for this use case it is the most powerful, though other language models have their own strengths and advantages in other use cases. We have now seen these two language models, Claude and ChatGPT; next, Gemini, Perplexity.ai, and Microsoft Copilot. These three behave like search engines, and if you analyze their outputs you will see the same structure from all three. First method: Gemini generates three prompt versions — "develop, analyze, create" — with no reasoning, no act-as-a-persona pattern, and little detail. Microsoft Copilot is the same: "analyze, examine, explore," with no persona pattern. And now Perplexity.ai.
Perplexity.ai likewise uses no act-as-a-persona pattern and no real detail — just "explore, analyze, investigate." So if you observe these three LLMs — Gemini, Microsoft Copilot, and Perplexity.ai — they are not good at writing prompts. Why? Their purpose is different: they are search-engine chatbots, built to answer from current online sources — websites, forums, YouTube — and to summarize research topics while citing the sources the data came from. ChatGPT and Claude, by contrast, are NLP-based trained language models that were exposed to many prompt patterns during training, so they know how to write strong prompts for other language models. That is why the act-as-a-persona pattern appears only in ChatGPT's and Claude's outputs, and in none of the three search-engine chatbots'. Those three know plenty about consumer psychology, but they are not good at writing prompts about it.
So, as I said, Claude and ChatGPT are both good at writing prompts, with ChatGPT being the more personalized and specific of the two. I hope the difference in capabilities across these five LLMs, for the prompt-writing use case, is clear. This particular use case favors ChatGPT and Claude, but do try other use cases yourself. And this is not limited to text prompts: you can also have the AI write image prompts. If you use image-generation tools such as Midjourney, Leonardo AI, Lexica.ai, or Ideogram AI, the image you get depends on your prompt, and you can ask the AI to write that prompt for you. For example: "You are an experienced image-prompt writer in the field of human psychology" (or a cartoon-animal image-prompt writer, or anything else), "Your task is to generate two to three different versions of image prompts for an AI image generator." You can use the same three methods as before. For image prompts I suggest the third method: ask the AI to interview you with sub-questions about the image you want. You describe how the required image should look, answer the AI's follow-up questions, and it writes the image prompt. Paste that prompt into the image-generation tool and you get the image you wanted, with ChatGPT doing the prompt writing for you instead of you writing it yourself. That covers the foundational, basic level.
You can adjust any generated prompt to your requirements; that is the power of using LLMs to write your prompts. One caution: do not rely blindly on AI-generated prompts. It all comes down to how you use them in your own workspace. So my recommendation is to use Claude or ChatGPT to generate your best prompts, while the other language models work well for other use cases. Which one? You have to choose for yourself, by testing all the models on your particular task. As discussed, Perplexity.ai, Copilot, and Gemini are search-engine-based, so use them for those cases; for personalization, brainstorming, and writing effective prompts, use Claude and ChatGPT, with ChatGPT being the more personalized of the two. I hope this lecture is clear. In the next module we will look at some prompting tools; ChatGPT's maker has its own playground, and we will explore that playground and its techniques.

51. 5.5.5 How to use DeepSeek, Grok AI, Qwen Chat and Mistral AI for Effective Prompts:

Let's look at four more LLMs. We have already seen five AI models, ChatGPT, Claude, Gemini, Perplexity.ai, and Microsoft Copilot, for generating effective prompts. In this class we will test the models that arrived more recently, in 2024 and 2025: DeepSeek, Grok AI, Qwen Chat, and Mistral AI. Are they capable of writing the best prompts for us, the way ChatGPT is? Let's compare ChatGPT against these four models, starting with the same simple warm-up prompt.
I'm not using DeepSeek's DeepThink mode here, though you can; it starts generating its thinking, which is very useful. I will paste the prompt into all of these models at once to save time, so we can check them side by side. From DeepSeek you can see: "Yes, I understand your requirements clearly. I am designed as a helpful assistant," and so on. The output from all four models looks good. Now the second prompt: "You are an experienced prompt writer in the field of the psychology of women in marketing. Your task is to generate two to three different versions of a prompt for AI." Paste this into all the models. DeepSeek, the thinking model, reasons: "The user wants to generate two to three different versions of a prompt for AI." That thinking trace is the best part of DeepSeek, and you can also use its search button. It returns: "Here are three refined, psychology-driven prompts tailored to this," with three prompt versions. Grok AI also generated three versions: "As an expert prompt writer in the psychology of women in marketing, I will craft two to three distinct, high-quality prompts." Its versions begin "Act as an expert in women's psychology and marketing", "You are a psychologist specializing in marketing", "Assume the role of a marketing psychologist", which is more powerful than ChatGPT's output in this case. Grok is very strong at writing prompts; its output is quite similar to ChatGPT's, but notice how well written it is: it uses the persona prompt pattern.
"Act as an expert in women's psychology and marketing; provide a detailed analysis of how intrinsic and extrinsic motivations influence..."; "You are a psychologist...", again the persona pattern; "Assume the role of...", the persona pattern a third time. That is the power of the persona prompt pattern: the AI itself generates prompts built on it. Grok's output is the best here compared with DeepSeek's. DeepSeek does use "Act as a consumer psychology expert" in its first version, with a step-by-step marketing strategy, which shows the pattern's value, but its second version ("Analyze how cultural values create an ethical...") and third version are not as effective. Grok generated three versions that are all equally strong, using the role pattern consistently but in different forms: "Act as an expert", "You are a psychologist", "Assume the role of a marketing psychologist". DeepSeek only used the pattern in its first version, not in the second and third. Now Qwen Chat: "Here are three distinct, high-quality prompts. Version 1: analyze how cognitive biases and emotional triggers...; Version 2: explore the role of persuasion; Version 3: design a step-by-step psychological profile system." These are plain instructions; as a prompt engineer I would not use them, because they assign no role to the AI model, and it is the role assignment that gets us specific output on a particular topic.
Qwen did give some good background information, but it lacks the role-assignment pattern. Grok's output, like ChatGPT's, is the best. That covers Qwen Chat; now Mistral AI. Mistral also writes without any prompt pattern: "Investigate the impact of...", "Analyze the use of emotional marketing", "Explore the psychological principles". Decent, but as a prompt engineer I prefer Grok AI for writing my prompts, because it used the right formula: assign the role, give the task, and give the background information, which is exactly what an effective, specific prompt needs. So my conclusion for this use case: Grok and ChatGPT both have strong prompt-writing capability. Let's move to another task on the same topic: "You are an experienced prompt writer in the field of the psychology of women in marketing. Your task is to generate two to three different versions of a prompt for AI. Suggest a better version of this prompt." If you followed the earlier lesson on writing effective prompts with ChatGPT, you already understand this approach, so I will not re-explain it. I am simply telling the AI to suggest a better version of this prompt. Let's paste it into DeepSeek and each of the other models in turn and see what happens.
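The role / task / background formula praised above can be sketched as a small helper. This is an illustrative snippet of my own, not part of the course; the function name and field order are assumptions.

```python
def build_persona_prompt(role: str, task: str, background: str = "") -> str:
    """Compose a prompt with the persona pattern: assign a role,
    state the task, then supply optional background information."""
    parts = [f"Act as {role}.", f"Your task is to {task}."]
    if background:
        parts.append(f"Background: {background}")
    return " ".join(parts)

prompt = build_persona_prompt(
    role="an experienced prompt writer in the psychology of women in marketing",
    task="generate two to three different versions of a prompt for AI",
    background="the prompts will be evaluated across several LLMs",
)
print(prompt)
```

The same helper works for image prompts or any other domain: only the role and task strings change, while the pattern stays fixed.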
DeepSeek generated improved prompt versions. I only told it "you are an experienced prompt writer; suggest a better version of this prompt," and it produced "Improved prompt version 1: Act as an expert in consumer psychology and prompt engineering...", with well-written instructions, then "Improved version 2: You are a psychologist specializing in...", and "Improved version 3: Develop two to three advanced AI prompts." Grok (the Grok 2 model) is still generating and taking a while, so let's check Qwen Chat. Its thinking has completed, and it generated three versions: "Act as a marketing psychologist", "You are a marketing psychologist", "Design three AI prompts to explore...". Good and specific, even though I gave it only this small prompt; those three prompts are well written in their specificity. Now Mistral AI: "As an expert in crafting prompts related to psychology, leverage your expertise, utilize your skills...". This one is also reasonable, but evaluating its three prompts, my conclusion is that Mistral is not good at writing prompts: the writing is fine, but it does not follow the specific formula of assigning a role, giving the task, and giving background information. Its version 1 does use the act-as persona pattern, which is good, and Qwen Chat used the persona pattern in two of its three versions, which is also good.
Grok is still taking too long, so back to DeepSeek. DeepSeek used the persona pattern twice: "Act as an expert" and "You are a psychologist"; in the third version it simply wrote "Develop...", skipping the role assignment. Qwen Chat skipped it once as well. If you look closely, Qwen Chat's and DeepSeek's outputs seem similar; both models are from China, and you might suspect they were trained on much of the same data. Compare DeepSeek's "Act as a..." / "You are a psychologist" with Qwen 2.5's "Act as a..." / "You are a marketing psychologist", and DeepSeek's "Design three AI prompts" with Qwen's "Develop two to three advanced AI prompts": quite similar. As for Mistral AI, it used the persona pattern in version 1 only, not in versions 2 and 3, which brings me to the conclusion that Mistral is not good at writing prompts. Remember, though, that if you give any of these models more background information, they can also generate better prompts. Summing up so far: Qwen Chat gave strong output; DeepSeek generated good prompt patterns; Mistral produced one good version but lags behind Qwen Chat and DeepSeek. Grok is still taking too much time, so let's refresh and paste the prompt again.
"Your original prompt is clear, but could be refined for tone and specificity," Grok says, and it generates: "Improved version 1 (concise and specific): You are an expert prompt writer specializing in the psychology of human behavior..."; "Improved version 2: As an experienced prompt writer with deep knowledge of women's psychology in marketing..."; "Improved version 3: You are a seasoned prompt writer with a passion for the psychology of humans in marketing...". Now you can see the power of Grok AI in writing effective prompts, compared with DeepSeek and Qwen Chat, which each used the act-as/role pattern in only two of three versions, and Mistral, which used it only in the first. Grok generated all three versions with the role-assignment pattern: "You are an expert...", "As an experienced prompt writer...", "You are a seasoned prompt writer...". For writing effective prompts, then, Grok comes out ahead of DeepSeek, Qwen Chat, and Mistral AI. My conclusion: I will use Grok AI and ChatGPT together to enhance my prompt writing. And this evaluation method is not limited to text prompts; you can run the same comparison for image prompts, video prompts, and anything else.
Take any task you like, run that same task across the different AI models, evaluate the results, and choose the better LLM for it. Different LLMs have different capabilities and functions, and the whole point of this testing is to pick the model that gets the best out of AI for your task. I hope this class was useful; let's start the next session.

52. 5.6.1 Prompt Engineering Tools - OpenAI Playground Parameters Part 1:

Welcome to a new lecture: prompt engineering tools. Earlier we discussed prompt patterns, such as question refinement and cognitive verifier, that improve our prompt writing for any task. In this lecture we look at prompt engineering tools. There are many tools online that enhance your basic prompt, but most are built on top of the OpenAI Playground or other language models underneath, so I think the OpenAI Playground is enough; in fact, for improving prompt-writing skills alone, ChatGPT by itself is enough, as we saw earlier. But with ChatGPT alone you cannot build applications. If you learn the OpenAI Playground, you can build applications with prompt engineering: you can test different models such as GPT-4 and GPT-3.5 and compare outputs as you change the playground's parameters. We will see all of this by jumping into the OpenAI platform directly. So what is the OpenAI Playground? OpenAI provides API keys.
With those API keys we can integrate an AI chatbot or AI assistant directly into our website or apps, improving the user experience, and we can build specific applications by writing specific prompts with the prompt patterns we learned in previous classes. The OpenAI Playground itself is a simple, user-friendly interface where we can test models with different parameters, inputs, and prompting techniques, and see how the output changes as we adjust parameter values and switch models (GPT-3.5, GPT-4, GPT-4o mini, and so on). The playground exposes several parameters: temperature, maximum tokens, top-p (nucleus sampling), frequency penalty, presence penalty, and stop sequences. Don't worry, we will go through each one clearly. First, some basics on temperature. Temperature ranges from 0 to 2, and the output depends on it: a low value such as 0.2 makes the output focused and specific. If your task is solving a math problem, a low temperature helps, because math usually has only one solution, maybe two, and you want a focused response. If you raise the temperature toward 0.8, 1, or above, the model becomes more creative and generates more varied solutions, which can read as less coherent.
A focused response is good, but it depends on your task and requirements; we will see it in the playground directly, along with maximum tokens. Let's jump into the OpenAI platform. I am signing in directly; if you have a ChatGPT account, that is enough: just go to the OpenAI platform and sign in with your ChatGPT email. If you are new to the platform, I recommend a quick YouTube search to learn its basics. The platform has several modes, such as Chat. In Chat there is a system message field, where we can steer the AI model by writing a system prompt; the role-assignment and system-instruction prompt types we discussed earlier all live here, so it is important. There are several models available, including the newer realtime and Assistants offerings. Assistants are for specific purposes: if you want to build a customer-support assistant, a product-recommendation assistant, a mental-health assistant, or a specialist nutritionist, you write the system instructions here and the model will work only within them. For example: "You are an experienced HR professional tasked with conducting an interview based solely on the job description." That system instruction trains the AI for one specific purpose, to work like an HR interviewer. We can change all of these parameters, and I will explain them in a moment. If you ask any question here, the model answers within the system instructions only.
Now this model acts only as an HR interviewer, not as a general assistant the way ChatGPT does. There is also a text-to-speech model in the playground, so you can interact with the AI by voice. There is a completions mode too, which used to be the main one but is being retired by OpenAI in favor of the newer chat models. Here is the point: with ChatGPT you can interact with the language model, but you cannot build an application on top of it directly. For developers who want to integrate GPT into websites and applications, or build specific AI applications, OpenAI built the playground: you configure the AI with instructions, then open the code view, copy the generated code, and integrate it into your website. It is that simple, and OpenAI has documentation for every use case (how to use GPT-4o and other models, the APIs, function calling, and so on), plus the API reference and dashboard. For now we will focus on using the playground for prompt engineering and for building a specific application; for everything else, the official docs and online tutorials cover it. Start with the system message: as with the prompt patterns we discussed, write a prompt that pins the model to a specific task, a required format, and a required output shape, based on your requirement.
You write it in plain English. You can also upload an image or link to one, and then choose whether the next message is from the user or the assistant (the roles alternate automatically). It works just like ChatGPT: the system message governs how the assistant solves the user's query. For example, if the user sends "hi", the assistant generates a reply right here; observed from outside, it behaves just like ChatGPT. Assistants, by contrast, are built for one specific job: you configure the AI as, say, an experienced HR professional, an experienced doctor, or a physics teacher, and it answers only questions related to that role, never going off topic or outside its system instructions. In plain chat mode you can ask anything, like ChatGPT; if you constrain the model with a strict system prompt it will mostly stay on task, but there is less guarantee than with a dedicated assistant, which gives you a specific structure for that purpose. Our main goal here is the playground's parameters. The first parameter is the model, the language model; you can use any model.
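The system/user/assistant flow described above maps directly onto the Chat Completions request body. Below is a minimal sketch of that structure, built as a plain dictionary rather than sent over the network; the HR system prompt is the example from the lecture, and the assistant reply is invented for illustration.

```python
# Sketch of a Chat Completions request body: the system message pins the
# model to a role, and user/assistant messages alternate after it.
request_body = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "system",
            "content": (
                "You are an experienced HR professional tasked with "
                "conducting an interview based solely on the job description."
            ),
        },
        {"role": "user", "content": "Hi"},
    ],
}

# The model's reply would come back with role "assistant"; appending it
# keeps the whole conversation inside the system instructions.
request_body["messages"].append(
    {"role": "assistant", "content": "Hello! Shall we begin the interview?"}
)

roles = [m["role"] for m in request_body["messages"]]
print(roles)  # ['system', 'user', 'assistant']
```

This is exactly the structure the playground's code view exports, which is why a session built interactively can be dropped into an application with little change.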
One practical note: if you are new to the platform, you may get free credit after signing up (around $5 for new accounts), which you can use to test and learn; once it runs out, you have to pay. For testing and learning I recommend choosing the cheapest model available, such as GPT-3.5 Turbo; it is enough and keeps your credit costs down. Next is the response format, which controls what the output looks like: JSON object or text; we will use text. Functions are an advanced topic, so first learn the simple model configuration. Our first parameter is temperature. Hover over it and the playground explains: temperature controls randomness; lowering it results in less random completions, and as it approaches zero the model becomes deterministic and repetitive. Let's try it. First write the system message; I have written "You are a helpful assistant and your task is to solve the user's question." I set temperature low, to 0.2, as discussed, and ask: "Suggest the best name for my coffee shop." Run it, and it shows a few suggestions: Brew Haven, Java Junction, and so on. Now I change the temperature to a high value, around 1.3, clear the previous answer, and run the same question. The output changes: this time it suggests names like Coffee Heaven or The Blissful Cafe.
Comparing the two runs, the low-temperature output is more focused and repeatable, while the high-temperature output is more varied. I hope temperature is clear; tune it to your requirement by analyzing the output. Next parameter: maximum tokens.

53. 5.6.2 OpenAI Playground Parameters Part 2:

Our next parameter is maximum tokens. Tokens are the chunks of text the model processes, covering words, punctuation, and spaces. Look at the output of ChatGPT or any language model: it contains words, quotation marks, commas, and spaces, and all of it gets tokenized. As a rule of thumb, four characters is roughly one token (characters here include punctuation and spaces, not just letters), or about three-quarters of a word per token. Token counts also depend on the model you select: more advanced models are trained on more data and use different tokenizers, so the counts change with the model. For a better understanding, OpenAI has its own tokenizer page, where you can see how many tokens a piece of text uses; search online for "OpenAI tokenizer" to find it. For example, if I paste the paragraph above, it shows 86 tokens and 435 characters, and you can see the chunking illustrated: "OPEN", four characters, equals one token.
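The four-characters-per-token rule of thumb can be turned into a quick estimator. This is only the heuristic from the lecture, not a real tokenizer; for exact, model-specific counts use OpenAI's tokenizer page or a BPE tokenizer library.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate via the ~4 characters per token heuristic.
    Real tokenizers (byte-pair encoding) will differ, and counts
    vary between models."""
    return max(1, round(len(text) / 4))

paragraph = ("Tokens are chunks of text that the model processes, "
             "including words, punctuation and spaces.")
print(len(paragraph), "chars ~", estimate_tokens(paragraph), "tokens")
```

An estimator like this is handy for budgeting prompts before you send them, even though the exact count from the real tokenizer will drift a few tokens either way.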
Then "AI", then the other chunks, each highlighted as one token; add them up and you get roughly 86 tokens. As I said, it depends on the model you select: with GPT-3.5 or GPT-4, both relatively advanced, you get one count; switch to GPT-3 and it changes to 88 tokens, because its tokenizer is more basic than the two advanced models'. Knowing this, we can shape our output to reduce our OpenAI API cost; you can also search online for best practices for reducing it. That is what the maximum tokens parameter is about. If you set a maximum-tokens value, the output is capped at that many tokens: even if the model would produce a long answer to a short question, the output is cut to the specified maximum, whether that is 20 tokens or anything else you set. This is how we control API cost: we can set a budget, see how many tokens the AI uses to generate output, analyze our API spend on the OpenAI platform, and optimize it. If I set maximum tokens to 200, the output uses at most 200 tokens; even a long answer gets squeezed into two or three lines, because we capped it at 200. The output is in our control; we decide how much should be generated, and with that we manage our maximum tokens.
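Capping output with `max_tokens`, and the cost math behind it, can be sketched as below. The per-token price here is a made-up placeholder, not a real OpenAI rate; check the current pricing page before budgeting.

```python
def completion_cost(max_tokens: int, price_per_1k_tokens: float) -> float:
    """Worst-case output cost if the model spends the full max_tokens budget."""
    return max_tokens / 1000 * price_per_1k_tokens

# Request sketch: generation stops once 200 output tokens are produced,
# however long the full answer would have been.
request_body = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Explain AI to me."}],
    "max_tokens": 200,
}

# Hypothetical price of $0.002 per 1K output tokens (placeholder value).
cap = completion_cost(request_body["max_tokens"], 0.002)
print(f"worst-case output cost: ${cap:.4f}")
```

Because `max_tokens` bounds the worst case, it doubles as both a formatting control and a spending limit per request.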
You can set it for whatever assistant or AI application you are building, according to your requirements, and experiment to find what suits you.

54. 5.6.3 OpenAI Playground Parameters Part 3:

Top-p, also called nucleus sampling. Top-p controls the response by restricting which candidate tokens the model considers when choosing each word; it ranges from 0 to 1. When set to 1, the model considers all possible word options; at lower values such as 0.2 or 0.3, it focuses on only the few most likely words, reducing randomness. In other words, within the maximum-token budget you set, top-p governs the word selection in the generated output. For example: I keep the same helpful-assistant system message, set maximum tokens to a low value of 269, leave temperature at the default of 1, set top-p to 1, and ask "Explain AI to me." The answer comes back within the 269-token budget; if I increased maximum tokens, the output could grow as well, since that cap is what limits it.
Here is the result. The words present in this output were chosen under the control of the Top P value. With Top P at 1, you can see a clean answer: "Artificial intelligence is a branch of computer science that focuses on creating intelligent machines." Now, if I lower the Top P value, the randomness of the word choice changes. For example, let me set it to 0.3 or 0.2 and ask the same question, "Explain me about AI." You can see the message "maximum tokens limit reached, response terminated." The AI wanted to generate more output, but we set Maximum Tokens to 269, so it stopped there: there was more response to come, but the token limit was reached. That is why the maximum tokens should be decided based on your requirement and on the application you are building. Think about the output you need first, then decide the maximum tokens; otherwise the user experience can suffer. Coming back to Top P: when I set Top P to 1, the output was specific, well controlled, and complete within the 269-token limit. When I decreased Top P to a low value, the response ran past the token limit instead of finishing cleanly like the one above. With Top P at the maximum, the word selection was tight enough that the whole answer fit in the specified maximum tokens; with a low value, the model generated more random word choices and hit the token ceiling, meaning there was more information left to generate.
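The word-selection idea behind Top P can be sketched as nucleus sampling: keep only the most likely next words until their combined probability reaches the Top P threshold, and sample from that shortlist. This is a toy illustration with a made-up probability table, not the real decoder.

```python
def nucleus_candidates(next_word_probs, top_p):
    """Keep the most likely words until their cumulative probability reaches top_p."""
    ranked = sorted(next_word_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for word, prob in ranked:
        kept.append(word)
        cumulative += prob
        if cumulative >= top_p:
            break
    return kept

# Toy distribution for the next word after "...creating intelligent":
probs = {"machines": 0.5, "systems": 0.3, "robots": 0.15, "toasters": 0.05}
print(nucleus_candidates(probs, 1.0))   # all four words stay in play
print(nucleus_candidates(probs, 0.75))  # only the top two survive
```

At Top P = 1 every candidate word is in play; at 0.75 only the two most likely words can be chosen, which is the "reducing randomness" behaviour described above.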
So with a low Top P there was more output still to come after the cutoff, but it stopped because we set 269 tokens. Compared with the previous run, which also had 269 tokens, the maximum Top P value controlled the whole output down to specific words, while the low Top P value produced more random word choices, and the output kept growing until it reached the specified maximum tokens. That is why the AI showed the error message "maximum tokens limit reached": the words were not being chosen as efficiently. So keeping Top P at a high value can help generate better output within the specified maximum tokens. I hope you understand the Top P value. Let's see another parameter. 55. 5.6.4 OpenAI Playground Parameters Part 4: Frequency Penalty. What is the frequency penalty? The frequency penalty discourages the model from repeating the same words too often within a response. Its range is 0 to 2, and higher values reduce word repetition. Sometimes the output contains repeated words: "artificial intelligence" might appear two or three times wherever it is needed, and there is nothing wrong with that, so usually there is no need to change the frequency penalty or presence penalty. But if your application requires output that does not repeat the same word again and again, you can change it here: the higher the value, the more repetition is reduced. Be careful, though, because even small words like "the" count as tokens and get penalized too.
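OpenAI's documentation describes both penalties roughly as a score adjustment applied to each candidate word before sampling. Here is a simplified sketch of that idea; the numbers are invented and the real scale is in logits.

```python
def penalized_logit(logit, count, frequency_penalty=0.0, presence_penalty=0.0):
    """Lower a word's score based on how often it has already appeared.

    frequency_penalty grows with every repetition of the word;
    presence_penalty is a one-off hit once the word has appeared at all.
    """
    repeat_hit = count * frequency_penalty
    presence_hit = presence_penalty if count > 0 else 0.0
    return logit - repeat_hit - presence_hit

# Suppose "intelligence" already appeared 3 times in the draft output:
penalized_logit(2.0, 3, frequency_penalty=0.5)  # 2.0 - 3*0.5 = 0.5
penalized_logit(2.0, 3, presence_penalty=1.0)   # 2.0 - 1.0 = 1.0
penalized_logit(2.0, 0, frequency_penalty=2.0)  # unused word: stays 2.0
```

The frequency penalty keeps punishing a word each time it repeats, while the presence penalty only nudges the model toward words and concepts it has not used yet.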
If you push the frequency penalty to a high value, the intent, grammar, or sentence formation of the output can change, which can ruin the whole output. For that reason I recommend leaving the frequency penalty alone unless your output really is generating repetitive information; only then adjust this parameter according to your requirement. Let's see another parameter: presence penalty. The presence penalty encourages the model to introduce new concepts that haven't been mentioned in the text yet. In other words, it pushes the model toward topics you did not cover in the system message. If a user asks a question unrelated to your task and the presence penalty is raised above zero, the AI becomes more willing to answer that unrelated question. Usually we do not want that: we are building an application for one specific purpose, so we do not increase the presence penalty. But if you are building a general AI application like ChatGPT itself, which can answer anything based on the user's requirement, you can set the presence penalty according to your needs. You can click the parameter's label here to see a short description of what the presence penalty does. Now let's see how the stop sequence works. This parameter is used to stop the output at a particular point. For example, take a simple query as the prompt: "generate three productivity tips." It generates three productivity tips, as you can see. But suppose I want to stop after the second tip only; I do not want the third one.
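What a stop sequence does to the raw text can be pictured as simple trimming: generation halts as soon as the stop string appears, and the string itself is not included. This is my own illustration of the behaviour, not OpenAI's implementation.

```python
def apply_stop_sequences(text, stop_sequences):
    """Cut the text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

tips = "1. Plan your day.\n2. Take breaks.\n3. Sleep well."
two_tips = apply_stop_sequences(tips, ["3."])  # drops the third tip
one_tip = apply_stop_sequences(tips, ["2."])   # drops the second and third
```

So with "3." as the stop sequence, only the first two tips survive, exactly like the Playground demo that follows.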
For that, instead of changing the prompt itself, I write "3." in the stop sequence field, telling the AI to stop the output at the third point. What happens? It never generates the third productivity tip. Let's run the example again: you can see it generates only two productivity tips, even though I asked the AI for three. That is the stop sequence. If I enter "2." instead, it generates only one productivity tip, as you can see. That is all about stop sequences, where we tell the AI to stop at a specific point, and that covers all the parameters of the OpenAI Playground. To go deeper on these parameters, experiment in the Playground: write prompts, check the output, and analyze it. This is all about the prompt parameters we have. 56. 6.1 The Future of Prompt Engineering: Hello, guys, welcome to this module. If you have followed all the previous modules and practiced well what I explained, then congratulations: you have learned solid prompt engineering and can start pursuing opportunities as a prompt engineer. Up to now we have learned the skills, the prompt techniques, and all the topics related to prompt engineering. Now we will look at the future trends of prompt engineering and the different opportunities you have as a prompt engineer. In this module we will also explore Gen AI. It is an advanced area; if you are interested in Gen AI, you can move into it after building your prompt engineering skill, and as a prompt engineer you should know what Gen AI is about. It is quite approachable, but you will also have to learn some technical skills. We will explore all of this in the next few minutes, including your main role as a prompt engineer in a Gen AI team.
Most companies hire prompt engineers in one of two ways: as a prompt engineer for a specific role, or as part of a Gen AI role. Companies hire Gen AI specialists for whom prompt engineering is one part of the skill set; alongside prompt engineering you need some extra skills, namely coding and other technical skills. Let's dive into this module in detail. First, what is the future of prompt engineering? As I said, AI is becoming more advanced and will spread everywhere in the coming years, so what are the emerging trends a prompt engineer should know? There are three areas to look at. First, multimodal models: AI systems are moving beyond text to include images and more. If you use Gemini, in the chat you can upload an image or any document, write text at the same time, and even add your voice. ChatGPT and the other major language models have their own multimodal capabilities too. A system that takes every kind of input from a user — text, image, voice, or document — is a multimodal LLM. So what is the role of prompt engineering here? Prompt engineering will soon involve creating inputs for these mediums. Language models generate output based on input, so you have to train the AI by writing the input and, at the same time, what the output should look like for that input. You train the AI model with your prompt-writing skill on both sides: the prompt and the expected output.
Multimodal models are very important and will matter even more in the future, so that is one emerging trend. What is the next one? Fine-tuned models. What is the difference between a general-purpose model and a fine-tuned one? Models like ChatGPT and Gemini are trained on enormous amounts of data, so they give output for every question, not just one domain: you can ask ChatGPT anything and it will generate an answer, and the same goes for Claude, Perplexity, and the rest. These are general-purpose models. Fine-tuned models, by contrast, are specific. Businesses have their own data, and if a company wants to integrate AI into its workflow to improve employee efficiency, it can take a base model that AI companies provide, such as BERT or GPT-3, and fine-tune it with its own custom data. For example, if a company wants to build its own chatbot for its employees — to improve efficiency, to guide them, or to handle training — it takes its own data and trains a base model like BERT or GPT-3 on it. The resulting models are what we call fine-tuned models. That is fine-tuning.
You can see here: businesses are training custom models for specific industries, requiring prompts tailored to those specialized systems. While fine-tuning, the model does not yet know how to react or what output to generate for your kind of input. So what do we do? We train it by writing prompts: we prepare prompt-and-response data for the AI. How does this work in practice? On many websites there is a chatbot in the bottom corner of the page; you click it, ask a question about the business, and it answers. Behind the scenes an AI is doing that work. These fine-tuned models are used, for example, to handle customer queries 24/7. The business trains the AI model on all its pricing information and FAQs, so the AI learns from the business data. Then, when a user asks a question — the user's message is the prompt — the AI checks that business data and answers from it, just as we taught it when fine-tuning a base model like BERT or GPT-3. And to generate an answer, the model needs prompts, and those prompts need to be written by prompt engineers. So to train an AI model, we need prompt engineers who can write the best prompts to fine-tune models simply and at low cost. You can go deeper into fine-tuning by searching online or on YouTube. These are the two kinds of models you will see.
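As a rough illustration of what that prompt-and-response training data can look like, here is a sketch in the JSONL chat format OpenAI documents for fine-tuning. The business name and answers are entirely invented.

```python
import json

# Each JSONL line pairs a user prompt with the response we want the
# fine-tuned model to learn. "Acme Corp" and its answers are made up.
examples = [
    {"messages": [
        {"role": "system", "content": "You are the support assistant for Acme Corp."},
        {"role": "user", "content": "What are your support hours?"},
        {"role": "assistant", "content": "Acme support is available 24/7 right here in this chat."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are the support assistant for Acme Corp."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings, then Security, and choose 'Reset password'."},
    ]},
]

training_jsonl = "\n".join(json.dumps(example) for example in examples)
```

Writing the user prompts and the assistant responses in pairs like this is exactly the prompt engineer's contribution to a fine-tuning project.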
Models like ChatGPT and Gemini can also be fine-tuned by specific businesses for their customer care or any other specific application. In the coming years, every business will use AI in its workflows by providing a chatbot that resolves customer queries from the business's own data, 24/7. That is fine-tuned models. Another emerging trend and opportunity is integration with automation. As I said, an AI chatbot like ChatGPT answers everything; with automation integration, we build a chatbot that handles user queries and also triggers automation tools like Zapier and Make.com, which we can use to automate repetitive tasks such as bookings. For example, say I visit a doctor's website that has a chatbot. I describe my problem — a bad stomach ache. The chatbot gives an answer and some suggestions, maybe recommends some tablets for my stomach. If that does not help, I can book a meeting with the doctor right in the chatbot. The chatbot shows a booking flow, asks when I am available to meet the doctor, and the AI handles all of it: it collects the details from me and passes them to the automation tools.
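The handoff from chatbot to automation tool is usually just an HTTP request. Here is a hypothetical sketch of a chatbot preparing a booking payload for a Zapier-style catch-hook URL; the URL, field names, and patient details are all placeholders, not a real integration.

```python
import json
import urllib.request

def build_booking_request(hook_url, patient, slot):
    """Prepare a JSON POST that a Zapier/Make webhook could receive."""
    payload = {"event": "book_appointment", "patient": patient, "slot": slot}
    return urllib.request.Request(
        hook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_booking_request(
    "https://hooks.zapier.com/hooks/catch/EXAMPLE", "Asha", "2025-03-01T10:00"
)
# urllib.request.urlopen(req) would actually fire the Zap; not called here.
```

Once the webhook receives this, the automation platform takes over — creating the calendar event, sending the confirmation, and so on — without any extra code in the chatbot.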
It triggers the automation tools we have connected — Zapier or Make.com — and the meeting is booked automatically in Zoom or Google Meet: a meeting is created for the specific date, all on its own. That is integration with automation through Zapier and Make, and that is how it helps. So where does prompting come in? While building these chatbots, if you go into the technical side, you have to write the prompts; the chatbot is an AI, after all, so to tell the AI what to do and when, you have to write the prompt. That is where prompt engineering is required. There are a lot of opportunities: wherever there is AI, a prompt engineer is needed to train that specific AI model effectively. That is why prompt engineering is a strong career path if you learn it now in an effective, thorough way. You can even use low-code tools: without writing a single line of code, with drag-and-drop functionality, you can build a chatbot yourself. There are many such tools online that you can find and learn, along with automation. Automation is one of the best skills for the coming years — automating a business's repetitive work using Zapier and Make.com and building a chatbot for a specific business — so these skills are very important for a prompt engineer to learn. Next, let's see how to stay updated in this field. AI is changing year by year, day by day, because it keeps learning from our data and keeps advancing, so as a prompt engineer you should know how the latest language models work.
You should also know which new tools are arriving on the market that can help you become a better prompt engineer, and you should connect with other prompt engineers to learn from their techniques and experience. How? First, stay adaptable: experiment with new models and tools, and never limit yourself to what this course covers. There may be more advanced prompt patterns than the techniques I explained, and new prompt techniques will keep emerging, so you should stay up to date with new prompt patterns. You can join the communities of the AI companies — OpenAI, Google (Gemini), Anthropic (Claude) — and you can search social media like Instagram, Twitter, and Facebook for community groups around OpenAI and the others, and join them. It is a simple step. As the slide says: follow AI communities, engage with forums, research papers, and updates from companies like OpenAI and Google DeepMind. It is the best way to stay current in prompt engineering. Second, adaptability in practice: experiment with new tools and models and with your own prompt patterns for specific use cases. This skill is all about how you interact with AI, and it only develops through practice and trying new things; that is how you become a good prompt engineer. Third, keep learning. Prompt engineering is not a fixed subject; it will keep growing, because AI keeps growing. AI has no limit, so prompt engineering has no limit either.
So prompt engineering will also grow day by day, with new models coming to market and new techniques and prompt patterns, and you have to keep learning to stay current in this field. I hope this topic is clear. Now, what are some prompt engineering opportunities? 57. 6.2.1 Prompt Engineering Opportunities: Welcome back, guys. What prompt engineering opportunities are out there in the market? As I said, there is demand for prompt engineers now and growing demand ahead. Over the last couple of months I have seen a rise in prompt engineering jobs, especially in education, marketing, and entertainment — story writing, for example — but it is not limited to those. Going forward, AI will be used by every industry, because it is a fast and reliable way to get things done. Whether it is education, healthcare, or marketing, every industry needs content as fast as possible to make its services better, so there is growing demand for prompt engineers across industries. As we saw in the applications of prompt engineering — education, healthcare, marketing — wherever AI and LLMs are used, the company needs a prompt engineer to manage the LLM and get the best content from the AI. Wherever LLMs are used, a prompt engineer is required. I hope you understand this part.
As AI becomes part of every industry's systems, demand for prompt engineers rises with it, so learning this skill gives you a future-proof skill. Beyond the applications we have seen, other industries are also looking to hire prompt engineers; finding those jobs online is something we will cover shortly. Now for the best tip. Suppose you have learned prompt engineering well and can write an effective prompt for any scenario and get the best output from the AI. To stand out from the competition, you need to go specific. For example, imagine I run a marketing company and I am hiring a prompt engineer. I look at the best prompt engineers available: many can write prompts for every industry, but one person has specific expertise in writing prompts for marketing only. As a marketing company, I will hire that person — the one with experience writing prompts for marketing — rather than the generalist who writes prompts for anything. That is why I recommend: once you can write an effective prompt for any scenario, always build expertise in a specific area. It could be education, healthcare, or marketing. Take marketing: learn the fundamentals first, and note there are many types — digital marketing, offline marketing, internet marketing, and more.
Then practice writing the best prompts for specific tasks within it — ad copy, content creation, email marketing, cold email — each a specific topic inside marketing. Craft prompts for one industry until you have real expertise in that area. When an industry comes to hire a prompt engineer — say a marketing company — it will choose the person with specific knowledge of writing marketing prompts over the generalist who writes prompts for anything. I hope you understand this point. It applies whether you are offering freelance services or looking for a job: go specific, grow there, and build as much expertise as you can. Wherever AI is used, there is a prompt engineering opportunity; the main task is to build expertise in a specific area where you can grow and make a real impact in that market. The industries I listed are just examples: you can go into coding if you have coding knowledge, or into other industries — there are many more you can research online. 58. 6.2.2 Career Opportunities in Prompt Engineering: Next, what are the career opportunities in prompt engineering? I have listed some of the common job roles a prompt engineer can take in the AI era. The list is not exhaustive — there are other roles — but these cover most of the work. First, you can become a prompt engineer, writing prompts for a specific area or industry, as I said a moment ago.
That role comes under the prompt engineer. Another job role is conversational AI designer, also called AI trainer. What does a conversational AI designer or AI trainer actually do? It relates to fine-tuning, where we train an AI model on our own custom data for a specific application. For example, suppose you want to build a ChatGPT-style chatbot dedicated to math solving: the user asks a mathematics question and it automatically generates the solution, like ChatGPT but in that specific domain. In that case you work as a conversational AI designer. That means you write the prompt and you also write the response: you provide an example prompt, and you provide the response showing what the answer should look like for that type of prompt. You are training the AI with both sides of the conversation — that is what makes you an AI trainer or AI tutor. To build a ChatGPT-like chatbot in a specific domain as a conversational AI designer, you need two essential skills: first, advanced English; second, specific expertise in the subject. Since you are training the AI model in a specific area to provide accurate solutions, you need good knowledge of that subject so you can design prompts and responses with real expertise. Why advanced English? Because whatever chatbot you take, it primarily generates in English.
Chatbots can generate in other regional languages too, but the global language is English, and the AI is largely trained in English. If you are not good at writing English, the AI will learn exactly what you give it, mistakes included. If your English is poor, the AI learns those mistakes, and ultimately it generates output with the same grammatical and sentence-formation errors, because you trained it on badly formed sentences. So the companies and agencies that hire prompt engineers to train AI models look for two things: expertise in a specific subject and advanced English writing. For regional languages it is the same: if the chatbot is being trained in Spanish, for example, they will look for advanced Spanish skills, written and spoken, along with the specific subject knowledge. Language plus subject expertise is what it takes to become an AI trainer or conversational AI designer building ChatGPT-like chatbots for a specific domain. To recap the difference: a prompt engineer writes the best prompts for an already-trained model — ChatGPT, for example, was already trained on many prompts and responses by the company's own prompt engineers — to get the best output from it. A conversational AI designer, AI trainer, or AI tutor writes both the prompts and the responses used to train an AI model for a specific purpose, depending on the client's requirements. I hope these two roles are clear. Next: AI chatbot developer.
This role builds on everything we have covered, including automation integration. As a chatbot developer, there are two ways to build a chatbot. You can use a coding framework such as LangChain, which means learning some technical skills. Or, if you do not have coding skills, you can use no-code tools, where you drag and drop, connect the pieces, and build a conversational flow, integrating the AI system without writing a single line of code. There are many such tools on the market; find one, learn it, and you can become an AI chatbot developer on top of your prompt engineering skill. The next role is AI content specialist. Any language model — ChatGPT, Claude — can generate good output for any content-creation task, but the raw content is often not engaging enough, so as a human you need to edit that AI content. For example, Google's search policies mean you cannot rank copied or obviously AI-generated content at the top of the search pages. So as an AI content specialist, you write the prompt to generate content for a specific purpose, then proofread it and make it read as human-written, so that it passes AI-content-detector tools. Only then can you write articles that rank at the top of Google's search pages. As an AI content specialist, you have to generate the content.
You then have to proofread it and turn it into human-quality writing; that is the AI content specialist role. Then there is the advanced one: Gen AI consultant. What is a Gen AI consultant? Gen AI means generative AI; models like ChatGPT and Gemini all come under generative AI. As a Gen AI consultant you build specific chatbots or specific applications for businesses or for particular use cases yourself. Gen AI requires a bigger skill set — more skills are needed to become a Gen AI consultant — and we will explore Gen AI in a few minutes. So those are the job roles, and that is what a prompt engineer can do. 59. 6.2.3 How to Find Jobs & Freelancing Sites for Prompt Engineering: So where can we find these jobs? You can freelance or you can pursue job opportunities; many businesses outsource prompt creation for specific projects. For freelancing and job opportunities, just search online for freelancing sites and Google will show you a list: Fiverr, Freelancer, Guru.com, PeoplePerHour, Upwork, Toptal, Behance, FlexJobs, 99designs, and more. I recommend Fiverr, Freelancer, Guru.com, Toptal, Upwork, PeoplePerHour, and LinkedIn: these platforms have the strongest markets right now. Focus on LinkedIn above all — it is the main one today — and on Fiverr among the freelancing websites. For freelancing, consulting, and finding jobs, LinkedIn alone can be enough, because HR teams and companies now hire candidates through LinkedIn; it has great features like posting your expertise and your portfolio link.
You can go to YouTube and learn LinkedIn profile optimization, and you can also go to Fiverr and browse the gigs there to see what clients are asking for, including prompt engineering gigs. When you make a Fiverr profile as a prompt engineer, go into a specific niche, like "prompt engineer for marketing," because it makes you stand out from the competition and you can get projects faster. Likewise, go to YouTube and learn how to make the best profile on each of these freelancing sites: Freelancer, Fiverr, Guru.com, Upwork, and LinkedIn. These are the best platforms to connect you to businesses and clients, and you can find both freelancing and job opportunities there. Okay?

As I said, prompt engineering is not limited. If you keep up your interest and focus, this skill can open your mind and change the way you think. It creates more opportunities than just freelancing and jobs: you can become an entrepreneur by building an application for a specific area, solving a specific problem that people have right now, a problem the market wants solved and that you can solve with AI. You can build AI-powered tools, apps, web apps, Android apps, and iOS apps using prompt engineering and generative AI. And as I said, there are also low-code tools with which you can build your own apps without writing a single line of code. You just need a simple app idea. Use AI and your prompt writing skill: write your instructions, what the app has to do, what it has to solve and when, and get an API key from OpenAI. You can also use Gemini or other providers' models; there are a lot of models, but OpenAI's playground is a good platform for tuning a specific model, as we discussed earlier in module five, right?
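To make the "get an API key and integrate it" step concrete, here is a minimal Python sketch of the request body an OpenAI-style chat completion expects. The model name and the marketing system prompt are placeholder assumptions, not recommendations; actually sending the request would need the official `openai` client and a real API key, which is why the network call is left out.

```python
import json

# Hypothetical app instruction: what our assistant should do for this use case.
SYSTEM_PROMPT = "You are a marketing assistant. Write short, engaging ad copy."

def build_chat_request(user_message: str, model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completion request body in the shape OpenAI's API expects.

    Sending it is a separate step, e.g. client.chat.completions.create(**payload)
    with the official `openai` package and an API key.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("Write a tagline for a fitness app.")
print(json.dumps(payload, indent=2))
```

The point is that "integrating AI into your app" mostly means assembling this prompt-plus-message structure from your app's data, which is exactly where the prompt writing skill applies.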
You can get the code directly from there and integrate it into your app, and the low-code tools we have today also come with good documentation. Search online for the best no-code app builders, get the list, choose according to your needs and pricing, learn one, create your own AI-powered tools and apps, and launch them. That is how you get opportunities: if your AI idea works in the market, more opportunities follow. Investors may approach you, there is funding, and you can go from zero to one like that. What you have to do is find a problem. The world needs problem solvers. AI is now advanced enough to solve problems; what we lack is finding the problem. Finding the problem is the problem. Find the problem first, then you can solve it with AI easily using your prompt writing skills. Simple. I hope you understand. So you can become an entrepreneur too. You can even become a content creator, using your prompt writing skills to get ideas from AI, make videos, and put them on YouTube. There is a lot of opportunity if you learn to use AI effectively, and that is prompt engineering.

One tip I almost forgot: build a portfolio. Before approaching any client or business as a freelancer, or applying for a job at a company, please build a portfolio of your prompts, because you have to showcase the work you have done. If you have already worked with companies, build the portfolio from your past projects, such as the prompts you crafted for a specific company and its use cases. Okay?
If you don't have any previous experience, fill the portfolio with sample projects you have done yourself, work that shows your prompt writing ability across different LLMs. That gives you a competitive edge and makes you stand out from the crowd; as a prompt engineer with a portfolio, you will be preferred over others who don't have one. So please make a portfolio. Even if you write the smallest prompt, keep it in there. It will help you when talking to clients or looking for job opportunities. I hope you understand. That is all about career opportunities in prompt engineering, along with some tips.

Oh, I missed something. Beyond prompt engineer, you can also become an AI trainer. You can find all these roles on freelancing sites like Fiverr, but for AI training specifically, certain companies are looking for AI trainers and tutors. For that, I suggest Outlier. This company hires AI prompt writers and AI trainers who have advanced writing skill plus specific subject knowledge. For example, to train AI models in math, you should have expertise in math. And whatever language they are training the model in, you should know it at an advanced level: if they are training AI models on math in English, you need advanced English; if in another language, you need that language at an advanced level. Outlier is a good option; they hire AI trainers and the pay per hour can be high, roughly $10 to $30 depending on the requirement.
There are other companies looking for AI trainers too. The best way is to go to LinkedIn and search jobs directly. Or just Google "AI training jobs" and you will see many platforms hiring AI trainers: Outlier is the first option, RWS the second, Pareto AI, and more. If you have advanced English (or whatever language they are training models in) plus specific subject knowledge, you can easily get hired by these platforms to train AI models. You can also see the pay is high, roughly $13.25 to $27.50 per hour, and it can increase with your expertise and experience. So there is real opportunity for prompt engineers with strong writing and specific knowledge. I hope you understand. Go to LinkedIn and search; you will find most of the AI training jobs there. You can likewise build an AI chatbot developer profile, an AI content specialist profile, or a Gen AI consultant profile and search for those roles. Okay? There is a lot more to say, but you can learn it on your own. Just go and find it online; it will improve your researching skill. My honest tip: I learned prompt engineering through research alone. Nobody guided me when AI was rising; in 2023 I learned how to use ChatGPT, and from there I came into prompt engineering. It all comes down to research capability. If you can research anything online, there is no limit for you. Even if a recession hits or a company lets you go, there is no limitation on what you can do if you can research online.
So research by yourself; research your requirements online instead of asking others. The Internet has more than enough data to show you new paths. You can find all these kinds of job roles online. Our next topic is how to prepare for these opportunities.

60. 6.2.4 How to Prepare for Future Opportunities as a Prompt Engineer:

So, how do we prepare for these opportunities? The first step, as I said, is to stay updated with new LLMs and tools: new models, new tools, and new prompt writing patterns. The second step is to develop a specialization: become good at a specific industry such as healthcare, marketing, or education, or at specific use cases. If you develop expertise at writing prompts for one industry, you will get hired by that industry faster than prompt engineers who write prompts for any industry. The third step is very, very important: build a network in AI communities to find projects and collaborations. The great way to build a network is LinkedIn. LinkedIn has more than one billion users all over the world, which means the companies are there: AI companies, marketing companies, HR and recruitment teams, all on LinkedIn.
If you build your profile with AI skills like prompt engineering and Gen AI, put up your projects and portfolio website, and consistently showcase your expertise on LinkedIn in the form of video, audio, or documents, you will build a strong network. Companies will connect with you; HRs, entrepreneurs, and AI learners will follow you. From that you get more opportunities to work with clients and companies, and you can even sell your own courses once you have your own community. There are many opportunities if you build a network in AI communities or on LinkedIn: you can find projects, and companies will reach out to you to work with them. This happens on LinkedIn; I have tried it myself and it works for me. Most of the companies that come to me to do prompt engineering or build a chatbot for their use cases found me there. You can also join AI communities directly; OpenAI, for example, has its own community. But LinkedIn is the best option for you. So learn the skill, get expertise in a specific area, and post all your learnings and projects on LinkedIn. Make a well-optimized profile, so that when a company or client searches on LinkedIn, you rank at the top in front of them and they reach out to you directly. For that, showcase your expertise and share your learnings on LinkedIn itself through posts and videos. I hope you understand these points. That is all about how to prepare for these opportunities.

61. 6.2.5 Basics of Fine-Tuning and RAG:

Welcome back, guys. In this lecture, we are going to see what fine-tuning and RAG are. Earlier we learned prompt engineering techniques and patterns, and we have also seen the prompt engineering opportunities in this AI era. So what are fine-tuning and RAG? They are also applications of prompt engineering. In prompt engineering, we write prompts for language models to get the best output; in fine-tuning, we train an AI model with our own data to do a particular, specific task. Let's see the details. In this lecture we cover the basics of fine-tuning and RAG, the differences between them, and some examples.

Let's start with the first one: what is fine-tuning? Fine-tuning means training a pre-trained AI model on a specific dataset to specialize it for a particular task. What is a pre-trained AI model? It is a language model that has already been trained on a large amount of data. Take any model: before ChatGPT ran on GPT-4, GPT-4o, or GPT-3.5, it had a base model, GPT-3. That is the pre-trained model. After further training on data over time, it evolved into the later versions, GPT-3.5, GPT-4, GPT-4o. So "pre-trained" means the model has already been trained on large datasets, but only at the foundational level. Fine-tuning takes such a pre-trained model and trains it on a specific dataset to specialize it for a particular task; we train the AI model with our own data.
The goal is to solve one particular task only, not to be a general-purpose model like ChatGPT that tries to solve everything. Instead, we fine-tune a base model to do a particular task, for example "ChatGPT for marketing" only. That's the idea. So how does it work? First, start with a base model; here we take GPT-3 as the pre-trained AI model. Next, provide domain-specific or task-specific data, for example medical transcripts, legal documents, or whatever your task is about. We feed this data to the GPT-3 model, and the model learns it; it becomes, say, a helpful assistant for legal documents. The fine-tuned model then focuses on that specific data only, such as the medical transcripts or legal documents, rather than everything else. Next, train the model to improve its performance on that task. How do we train it? Training here also means writing prompts. The model already has the knowledge of our domain from the documents; what is missing is the link between questions, that is, prompts, and the right outputs. We write prompts so the model learns to fetch and show the output relevant to each prompt. As a prompt engineer, that is your work; that's why fine-tuning is also an application of prompt engineering. Earlier we discussed more elaborate prompt techniques, but here even writing a simple question, a question relevant to the documents, is the prompt that helps the model match the output.
The model matches the output to our prompt, learns the best-matching output for each prompt, and thereby automatically improves its performance. That is fine-tuning. What is the relation to prompt engineering? As I said, as a prompt engineer you write simple prompts, essentially questions the model can answer from the knowledge in the documents it was trained on for that particular task. Look at the example here. For a general model, say ChatGPT, to get a summary you write the full prompt: "Summarize this news article for a teenager." For a fine-tuned model, the model is already tuned to create summaries for teens, so I just write "Summarize." Why? Because the fine-tuned model is already trained to summarize articles for teens: it is already trained to generate teen-friendly responses, and the news articles are part of its training data. Look at the workflow again: "provide domain or task specific data" here means the news article information was fed to the base AI model, so it creates teen summaries of articles. I just write the prompt "Summarize" and it automatically creates a summary for teens. I hope you understand. That is all about fine-tuned models: fine-tuning is nothing but training an AI model with our own data to do a particular task.
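As a concrete sketch of the "train with your own data" step, the snippet below assembles a tiny training file for the teen-summary example. The JSONL layout, one `{"messages": [...]}` object per line, follows the chat-model format OpenAI's fine-tuning documentation describes, but treat the details as an assumption to verify; the article texts and summaries are placeholders, not real data.

```python
import json

# Toy training examples for a hypothetical "summarize for teens" fine-tune.
# Each record pairs a prompt with the desired assistant output, so the model
# learns the prompt-to-output mapping described above.
examples = [
    {"messages": [
        {"role": "system", "content": "Summarize news articles for teenagers."},
        {"role": "user", "content": "Summarize: <article text 1>"},
        {"role": "assistant", "content": "<teen-friendly summary 1>"},
    ]},
    {"messages": [
        {"role": "system", "content": "Summarize news articles for teenagers."},
        {"role": "user", "content": "Summarize: <article text 2>"},
        {"role": "assistant", "content": "<teen-friendly summary 2>"},
    ]},
]

# Serialize to JSONL: one JSON object per line, ready to upload as a
# fine-tuning training file.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl.splitlines()[0])
```

A real fine-tune would need many such examples; the sketch only shows why the prompt engineer's job here is writing good question/answer pairs rather than elaborate prompts.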
Fine-tuning has many use cases, and we will see that different industries want to fine-tune their own AI models so they can use them in their workflows and improve efficiency among their employees. Every industry has its own data, and with this fine-tuning technique they can train their own AI model on that data.

62. 6.2.6 What is Retrieval Augmented Generation (RAG):

So what is the second technique, RAG? RAG means retrieval-augmented generation. Retrieval means retrieving data from other sources. Look at the definition: RAG combines a retrieval system, something that takes information from external sources such as a database, a search engine, or online websites, with a generative model that generates a response based on our prompt using the retrieved information. The retrieved data can come from different sources: online, a search engine, or any document we provide to the AI model. In detail: RAG combines a retrieval system (for example a database or search engine) with a generative model to provide accurate and up-to-date information. The retrieval system takes information relevant to our prompt from external sources such as websites, forums, or social media, and the generative AI model uses it to produce the response. These two combine to provide accurate, up-to-date information. The best example is Perplexity.ai.
Let's jump into Perplexity; it is a simple RAG and the best example here. Ask it any question. What happens is: I ask a question, which is the prompt, and Perplexity.ai retrieves information from different sources, online websites, and generates the response for me. The retrieval system handles the fetching, and the generative AI model handles generating the response; these two combine to provide accurate, up-to-date information. That is RAG. This way you get real-time data and up-to-date, accurate information, unlike other language models that only produce output from their own training data. Here the model takes real-time data from external sources: external APIs, external knowledge documents, PDFs, docs, whatever we provide. I hope you understand this clearly. So how does it work? As I said, the retrieval system fetches relevant documents based on a query; the source can be documents, a search engine, a database, anything external. The generative model then uses the retrieved information to generate a response, just as we described Perplexity working. What is the relation to prompt engineering? Simple: writing the prompt is prompt engineering, and prompts guide both the retrieval process and the generation process. In Perplexity.ai, for example, what I typed is also a prompt; only then does the retrieval process start.
That is the retrieval step: it takes data from external sources. You can even add a PDF there, or it will automatically pull data from external sources like online websites. After that, the generative AI model generates the response according to the retrieved data and the prompt. All of this happens only when we provide a prompt; that's why writing prompts for RAG is also an application of prompt engineering. Prompt engineering is useful with any technique or language model you use. In any language model there are only two things, prompt and response, and the response is generated only when the prompt is written. The art of prompt writing is prompt engineering; that's why it is such a powerful skill: if you learn how to use it, you can build something impactful with language models in the market. That's why prompt engineering is related to RAG. Here is an example workflow. Retrieval prompt: "Search the latest research on climate change." "Search" means we are guiding the AI to check a search engine or other online websites and retrieve data from external sources. Then comes the generation prompt: "Summarize the retrieved documents in three sentences." First we tell the AI to search for the latest research, the retrieval prompt; then we tell it to summarize the retrieved research on climate change in three sentences, the generation prompt. Combining the retrieval system and the generation process like this forms a RAG application. I hope you understand this clearly.
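The retrieval-then-generation flow above can be sketched end to end in a few lines. This toy version is an illustration only: it uses simple word-overlap scoring over an in-memory list of documents instead of a real search engine or vector database, and a string template instead of a real generative model, so it shows the two-stage shape of RAG and nothing more.

```python
# Minimal RAG sketch: retrieval step + generation step, both stubbed.
DOCS = [
    "Recent trials suggest new Alzheimer's treatments slow memory decline.",
    "Climate change research links warming oceans to stronger storms.",
    "A study on sleep found teenagers need nine hours per night.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Score each document by word overlap with the query; return the top k.
    A real system would use embeddings, a vector store, or a search API."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for the generative model: stitch the retrieved context
    into the prompt that an LLM would actually answer."""
    return f"Answer to '{query}' using: " + " ".join(context)

hits = retrieve("latest research on Alzheimer's treatments", DOCS)
print(generate("latest research on Alzheimer's treatments", hits))
```

Notice that both stages are driven by the same user prompt: it selects which documents are retrieved and it shapes the final generation, which is exactly why prompt writing matters in RAG.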
Let's see the differences between these two techniques, fine-tuning and RAG.

63. 6.2.7 Fine Tuning vs RAG:

We'll compare them on a few aspects: purpose, data dependency, prompt usage, and real-time updates. Purpose: fine-tuning means training an AI model for a specific task; it specializes the model for that task. RAG means integrating external knowledge, which may be a database or external documents we provide, so the model can retrieve information from that knowledge and generate a response that is accurate and up to date. A fine-tuned model is fixed: it generates responses only from what it was trained on, not from current information. Data dependency: fine-tuning is static; it responds based on its training data and the prompt only, and never searches external sources for up-to-date information. Training the model requires curated datasets, because we feed the model its knowledge as datasets. With RAG, by contrast, we don't supply curated datasets; we supply search APIs, legal or other documents, or a database that already holds data. So with RAG we can get a working system much faster, because it retrieves data from an existing database, through search APIs, or from different online sources, whereas with fine-tuning we have to provide each and every piece of data ourselves to shape the output. That is the main drawback there.
Still, both techniques have their own uniqueness, purposes, and applications. Prompt usage: in fine-tuning, we write simple prompts, essentially just questions, to get answers from the fine-tuned model. In RAG, we can write any prompt in any format; you can ask any question about your query, and it will search online and generate the answer from any source according to the prompt. RAG is not fixed to one task: you can ask a RAG application anything, and it will use its external database or search APIs to generate a correct, up-to-date, accurate answer. With fine-tuning, you can only get information the model was trained on; it never goes outside its training data. That's why we say fine-tuning simplifies prompts while RAG enhances prompt flexibility: there is no limitation on the type of prompt in RAG, whereas fine-tuning is specific and RAG is dynamic. Real-time updates: as I said, a fine-tuned model is fixed; there is no current, up-to-date information. It generates responses based only on its training data and has no capability to bring in current information. That's why it is static knowledge. RAG, on the other hand, is dynamic and up to date: it retrieves information from real-time data providers, such as a search engine, Google, online websites, or YouTube, takes that up-to-date information, and generates a response based on our prompt.
That's why RAG suits most general use cases, while fine-tuning suits specific ones. As we discussed, Perplexity.ai is RAG-based: it retrieves data from different sources and generates the response for our prompt. So that is fine-tuning versus RAG. Let's see an example of each. Fine-tuning means training an AI model on a specific dataset; as the domain I have taken legal contracts. If I ask a general generative AI model, like ChatGPT or Gemini, I write: "Summarize this contract in plain English for a client," and it summarizes the contract. But for a fine-tuned model, I just write "Summarize." Why? Because the fine-tuned model is already trained on legal contracts to summarize them in plain English for a client. I hope you understand. For the general model I have to write my whole requirement to get the specific task done; the fine-tuned model is already trained to do that particular task, and I only have to give the command to proceed, like "Summarize it." That's the difference: because the fine-tuned model is already trained on the legal contracts domain, I just provide a simple command. That is fine-tuning. Now the RAG example. As with Perplexity.ai, where we can get updated information, I take the domain of medical research and write the prompt: "Retrieve recent articles on Alzheimer's treatments and summarize the findings." As I said, RAG is the combination of a retrieval system and a generation process.
You can see it here: retrieving the recent articles on Alzheimer's treatments is the retrieval system, which takes data from external knowledge (documents, a search engine, online websites, YouTube, social media, and so on) according to the prompt. The other part is the generative system, which summarizes the findings. I hope you understand this clearly; that is the difference between fine-tuning and RAG. To summarize: fine-tuning means training an AI model for specific use cases, where we need simple prompt writing, asking questions related to the documents or the specific task the model was trained on, to improve its performance. RAG combines two systems, retrieval and generation: the retrieval part takes data from external sources (a database, a search engine, online websites, or documents we provide), and the generation part produces output for our requirement that is accurate and up to date. Both techniques come under prompt engineering. Why are we using prompts in both? To enhance AI performance. In fine-tuning, we write prompts so the model learns to fetch and generate output relevant to each prompt; that prompt writing is done by a prompt engineer, which is why fine-tuning is part of prompt engineering. It is a different technique, but its prompts are also written by prompt engineers, and since they are simple questions asked of the documents or the fine-tuned model, no special technical writing skill is required.
RAG also involves prompt writing skill, which helps the AI model retrieve information clearly and effectively to generate the output. Whatever model you take, the output depends on the prompt; that's why the prompt engineer comes into the picture to write the best prompts for any type of model: a generative model, a fine-tuned model, or a RAG application. That's why prompt engineering is always a valuable skill if you learn how to use these AI language models; you can make wonders in this market, in this AI era. That is all about fine-tuning and RAG; I hope you understand it well. These are the basics; you can go deeper if you want, since these are excellent techniques for different use cases, and you can learn more from other online sources. To implement them practically, you need some technical knowledge: coding in Python, frameworks, and some machine learning concepts. You don't have to master the algorithms, but there are specific technical skills you need to acquire to implement fine-tuning and RAG in practice. You can get help from language models: use ChatGPT, or for coding purposes use Claude, which generates especially good code output compared to other language models. That is all about fine-tuning and RAG. Okay, I hope you understand. Let's move to our last session, an overview of generative AI, and dive into it now.

64. 6.3.1 Overview of GenAI:

So we're going to see what generative AI is. In this lecture we will cover the basics of Gen AI, how Gen AI works, some real-world applications, and the trends and future of Gen AI,
We will also see the role of the prompt engineer in GenAI, plus some final thoughts. This is the last section of the course, and it is very important after learning the prompt engineering skill. Let's start with the basics of GenAI. Put simply, multimodal models like ChatGPT and Gemini come under GenAI. You can see the basic definition here: generative AI refers to models that create new content, text, images, code, music, based on inputs or prompts. If you take any image generation tool like Leonardo AI or Midjourney, or video generation tools like Sora or InVideo AI, they generate an image or video based on our input prompts. All of those tools are called GenAI. Even ChatGPT is GenAI, because it generates text output, content, and ideas based on the prompt. Every model, ChatGPT, Gemini, Claude, comes under GenAI. You can see here: unlike traditional AI, which focuses on recognition or prediction, GenAI focuses on creation. That is the most important point. For example, there are AI systems in self-driving cars: Tesla cars can run without a driver. The AI recognizes the road and all the driving parameters, when to turn, when to stop, where to stop, at what speed the car should go. That recognition and prediction is traditional AI. But GenAI, the name itself says "generative", focuses on creation.
Creation can be anything: content creation, image creation, video creation. Any AI that creates something based on our prompts is called GenAI, while traditional AI focuses on recognition or prediction, like the self-driving car that recognizes real-world scenarios and decides when the car should take a right turn. GenAI focuses only on creation, content, images, video, based on our requirements. The best examples are language models like ChatGPT and Gemini, image generation tools like Leonardo AI, and video generation tools like Sora. They all come under GenAI applications. I hope you understand the basic definition of GenAI. What are the examples? ChatGPT generating essays and answers, Midjourney creating art, Copilot helping developers write code, and video creation tools like Sora by OpenAI. Those are the basics of GenAI. So let's see the second topic: how does GenAI work? It is simple: it uses large-scale machine learning models trained on vast datasets to predict and generate content. If you know how ChatGPT was developed, it was trained with a huge amount of data; that is exactly how GenAI works. You take a base model that starts with no knowledge of anything, then train it on large datasets so it can predict and generate content, images, and videos based on our requirements. All such models come under GenAI. ChatGPT, Gemini, and the other models we use today all work this same way.
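To make "trained on data to predict and generate" concrete, here is a toy sketch: a bigram counter "trained" on a tiny corpus, which then generates text by repeatedly predicting the most likely next word. Real GenAI models do this with neural networks over billions of tokens; the corpus and function names here are purely illustrative.

```python
# Toy next-word model: "training" counts which word follows which,
# "generation" repeatedly picks the most likely next word. This is a
# teaching sketch of predict-and-generate, not how GPT-4 is built.
from collections import Counter, defaultdict

corpus = "prompt engineering improves ai output . good prompts improve ai output quality ."

def train(text):
    """Training step: count the followers of every word in the data."""
    model = defaultdict(Counter)
    words = text.split()
    for w, nxt in zip(words, words[1:]):
        model[w][nxt] += 1
    return model

def generate(model, start, length=4):
    """Generation step: extend the text one predicted word at a time."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train(corpus)
sample = generate(model, "ai")
```

The key idea carries over: the model has no hand-written rules, only statistics learned from its training data, and generation is just repeated next-word prediction.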
You can see the key models in GenAI: text-based models like GPT-4 and Claude; image-based models like DALL·E and Stable Diffusion; and multimodal models like Gemini and GPT-4 Vision. We have seen these already in previous lectures. GenAI means models trained on large amounts of data to generate content, solve a user query, or generate ideas, images, and videos. That is how it works. Now let's see some real-world applications. As I said, AI is used everywhere, and so prompt engineers are needed everywhere, because wherever AI is used today, that AI is mostly GenAI. For example, education companies use OpenAI's ChatGPT, and ChatGPT is GenAI; where there is GenAI, there needs to be a prompt engineer. A prompt engineer is required to get the best output from the AI, and at the same time a prompt engineer is also required to build generative AI applications like ChatGPT. A real prompt engineer who knows how to write prompts to get the best output from AI also knows how to train an AI model with prompt patterns. GenAI applications are used in every industry: education, business, creative fields, healthcare, all of it. And here is the most important thing, an ethical reminder: while GenAI is powerful, it is the user's responsibility to avoid generating misinformation or bias. As I said, AI is not 100% accurate; it can make mistakes and produce inaccuracies in the output.
Because the output can contain mistakes, wrong data, and misinformation, we cannot blindly trust AI output. We need specific knowledge of the area we are asking the AI about. That is why companies hire people who have both prompt-writing skill and domain knowledge: when the AI generates output, the prompt engineer should be able to verify and correct it, and only then will a company hire them. For that reason, I again recommend learning prompt engineering within a specific area, education only, or marketing only, where you can easily analyze the output. As we said in earlier classes, prompt engineering is not only writing prompts; it has several steps: analyzing the output, refining, and optimizing. All of that is only possible when you have knowledge of the particular task you are asking the AI to solve. When the output comes from the AI, you need to analyze whether it is right or wrong; then you can go to the next step of optimization and refinement. If you don't know the domain, there is little worth in becoming a prompt engineer. That's why I recommend building expertise in a specific area, business only, or education only, writing prompts in a domain where you can easily analyze, optimize, and refine through all the prompt engineering steps. In a few minutes we will see the roles and responsibilities of a prompt engineer in GenAI and in other specific areas. For this ethical reminder: since AI makes mistakes, the prompt engineer's responsibility is to analyze, refine, and optimize the output to get accurate answers from the AI.
For that, you need specific knowledge of whatever you are using AI to solve. So what is the future of GenAI? As I said, in the coming years AI will reach every industry and every aspect of life. Many GenAI applications are already in the market, and many more will arise in the coming years and decades. What are the expected directions? You can see: more personalized AI, with tailored responses and outputs based on user profiles; increased multimodal capabilities, seamlessly combining text, image, and audio, and the best examples right now are ChatGPT, Gemini, and Claude, where we can input images, documents, text, and voice right in the chat itself; and democratization, meaning tools becoming more accessible to individuals and small businesses. As AI becomes part of our daily life, everyone will use it, and there are many individuals with specific domain knowledge for whom we can develop GenAI applications for specific use cases, separate GenAI tools for nurses, for doctors, and so on. There are many more opportunities to build GenAI applications; that is the future. So let's talk about the role of the prompt engineer in GenAI. 65. 6.3.2 Role of Prompt Engineer in GenAI: Right, let's see the role of the prompt engineer in GenAI. As I said earlier, prompt engineering plays a crucial role in building generative AI applications, because we have to train an AI model using prompt-and-response writing skill. Only then can we train the model effectively; we write prompts and responses to train the AI, just like we interact with ChatGPT and other models.
As we have already discussed fine-tuning models and creating conversational AI training data, all of that comes under GenAI. As a prompt engineer on a GenAI team, there are several roles and responsibilities. In this lecture we will explore the core responsibilities of a prompt engineer, applications of prompt engineering in GenAI, skills needed by prompt engineers, challenges and ethical considerations, and the impact of prompt engineers on GenAI success. Let's start with the first one, the core responsibilities of a prompt engineer. Keep these five points in mind to become a professional prompt engineer. First, designing effective prompts, which we discussed many times in previous lectures: you write the best prompt or prompt pattern for your requirement in an effective manner. Second, testing and refining. Testing means you set up an initial prompt, the AI generates an output, and you analyze that output: is it correct, does it contain mistakes, does it match my requirement? You can only analyze it properly when you have knowledge of the task, which is why I recommended building prompt-writing skill in a specific area like marketing or education, whatever your choice. After analyzing the output, you refine the prompt: you write the second prompt informed by the previous output, so the same mistakes are not repeated.
You rewrite the previous prompt as a more advanced, more detailed second prompt. We already covered how to test and refine prompts in detail in a previous lecture. So the second step is testing and refining: by analyzing the previous output, we write the prompt again, keeping the previous output's mistakes in mind so they are avoided the second time. The third step is model-specific optimization, and this one is crucial. Optimization has several parts, including matching the right LLM to the specific task. Optimization means putting your requirements alongside the AI-generated output and comparing the two. If the generated output matches your requirements, the model-specific optimization is done. Note that it is not the output you optimize directly; it is the prompt. You rewrite the prompt in a way that makes the output match your requirements. So you compare your requirements against the AI's output to decide how to optimize your prompts. I hope you understand this step. The fourth step is exploring prompting techniques. We already learned the specialized techniques of prompt engineering in module five, where we covered the capabilities, pros, and cons of different LLMs, how to write prompts for each of them, and which LLM is best suited to solve a particular task.
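The test-analyze-refine-optimize loop described in steps two and three can be sketched in code. Everything here is a hypothetical stand-in: `fake_llm` plays the role of a real model call (in practice, an API request to ChatGPT or another LLM), and the requirement checks are simplified examples.

```python
# Sketch of the iterative loop: send a prompt, analyze the output
# against your requirements, and refine the prompt until it passes.

def fake_llm(prompt):
    """Stand-in model: echoes back whichever requirements the prompt names."""
    reply = "Summary:"
    if "bullet points" in prompt:
        reply += "\n- point one\n- point two"
    if "word count" in prompt:
        reply += "\n(120 words)"
    return reply

def find_missing(output, requirements):
    """Analysis step: which requirements does the output fail to satisfy?"""
    return [r for r in requirements if r["check"] not in output]

def refine(prompt, missing):
    """Refinement step: add explicit instructions for what was missing."""
    extra = " ".join(r["instruction"] for r in missing)
    return f"{prompt} {extra}"

requirements = [
    {"check": "- point", "instruction": "Format the answer as bullet points."},
    {"check": "words", "instruction": "State the word count at the end."},
]

prompt = "Summarize this article."
for _ in range(3):  # bounded number of refinement rounds
    output = fake_llm(prompt)
    missing = find_missing(output, requirements)
    if not missing:
        break        # output now matches the requirements
    prompt = refine(prompt, missing)
```

The first prompt fails both checks, the refined prompt names both requirements explicitly, and the second output passes, which is exactly the analyze-then-rewrite cycle a prompt engineer performs by hand.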
That all comes under prompting techniques. We have learned the prompt patterns, techniques, and tools needed to write better prompts, and you should keep exploring them. There are also prompting tools like the OpenAI Playground, where you write a prompt and get back an improved one, and we have seen three different methods of using LLMs themselves to write effective prompts. These techniques also include writing one prompt and testing it across all the other LLMs, then choosing the best LLM by analyzing which output matches your requirement; once you find the match, you go deeper with that particular LLM. The last responsibility is documentation and reporting. You have to document everything: how you obtained the output, how you wrote the prompt, how you chose the particular LLM to solve the task, how you analyzed the output, and what tools and prompt techniques you used to optimize it. You document all of this to show your team and the leads or officials on the GenAI team you work with, and you report your prompts and responses to your team members or whoever runs your team. That is all about the core responsibilities of a prompt engineer. The work is somewhat different when you are building a generative AI application: there you write both prompts and responses, using different prompt pattern techniques, compared to the other type of job where you are simply using an existing LLM to get output from the AI.
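The idea of testing the same prompt across several LLMs and picking the best match can also be sketched. The two model functions below are stubs standing in for real API calls to different services, and the scoring rule is a deliberately simple example.

```python
# Sketch of model selection: run one prompt through several candidate
# models, score each output against the task requirements, and pick
# the winner. Both "models" here are hypothetical stubs.

def model_a(prompt):
    """Stub for a model that tends to answer in prose."""
    return "Here is a fluent narrative summary of the topic."

def model_b(prompt):
    """Stub for a model that tends to answer in code."""
    return "def solution():\n    return 42  # code-style answer"

def score(output, required_keywords):
    """Count how many required features the output actually contains."""
    return sum(1 for kw in required_keywords if kw in output)

candidates = {"model_a": model_a, "model_b": model_b}
required = ["def ", "return"]  # this particular task needs code

prompt = "Write a Python function that returns 42."
results = {name: fn(prompt) for name, fn in candidates.items()}
best = max(results, key=lambda name: score(results[name], required))
```

For this code-writing task, the code-oriented stub wins; with a different requirement list (say, narrative keywords), the other model would be selected, which mirrors how you match the LLM to the task.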
In that case the steps change somewhat: the steps remain the same, but the work within each step differs. For example, if you work as a prompt engineer on a GenAI team, meaning you are building a GenAI application like ChatGPT for a specific use case, or an image-generation app, then your role is to write prompts and responses like an AI trainer. But if you work as a prompt engineer in an end-user industry like education or marketing, you write prompts for an LLM like ChatGPT to get the best output matching your requirements, following all the steps we learned earlier. If you work as a prompt engineer in GenAI companies developing chatbots, your responsibilities change again: you write prompt-response pairs to train the AI model. I hope you understand the difference. Those are the five most important steps you must perform when working as a prompt engineer. 66. 6.3.3 Applications GenAI Prompt Engineering: On the applications of prompt engineering in GenAI: whether you are building GenAI for customer support, education, healthcare, or automation, the prompt engineering work remains the same, writing prompts and responses for it, when you are working on the development side. So what are the skills needed by prompt engineers?
Let me tell you about the prompt engineer's skills. What we have learned up to now is enough for the technical part: how to write effective prompts, how to analyze output, how to use LLMs to write effective prompts for our requirements, and how to choose the best LLM to solve a task. But beyond the technical part, some other basic skills are needed to become an advanced, professional prompt engineer. First, understanding of GenAI models, which we already have: knowing the different capabilities of LLMs like ChatGPT. Second, linguistic skill, which is important here: the ability to write clear, concise, and unambiguous prompts, so the AI can easily understand your intent. Linguistic skill is a required skill when you are working on a GenAI team. Why? Because you are going to train an AI model in a specific language; if you can't write that language well, the training will go wrong. Companies hire people with advanced English, or whatever language they are training the model in, and they may test you on writing, thinking, speaking, and listening, so certifications like TOEFL exams can help. Third, problem solving, which is very important. AI can help you do almost anything, but the main point is the world needs problem solvers: you find the problem, then take AI's help to solve it, simple. For that, you need problem-solving skill.
As a prompt engineer you should have it; only then can you become a valuable team member at a company. With problem-solving skill you can even become an entrepreneur, by building tools and apps that solve those problems. Problem solving matters especially if you want expertise in coding as a prompt engineer, for analytical thinking and for debugging and optimizing prompts. Fourth, adaptability: staying updated with evolving AI tools and techniques. Some prompt patterns work very well for today's language models, but AI keeps advancing, and prompt patterns advance with it, so we have to stay up to date. If you are not adaptable enough to learn the new prompt patterns in this field, you cannot write the best prompts for the latest models. To stay current, connect with company forums and follow their social media accounts, LinkedIn, Instagram, Facebook, YouTube, forums like the OpenAI forum, and take online courses on the latest advanced prompt engineering skills. Simple. Fifth, as I said, domain expertise: tailoring prompts for specific industries or use cases. This is the most important one. You need to build expertise in a specific area; only then can you become a perfect, professional prompt engineer. Prompt engineering is not only writing prompts; it includes analyzing the output, refining the prompt, matching the LLM to your specific task, understanding different capabilities, and using LLMs to their full potential. If you have the domain knowledge, you can analyze and optimize the output without mistakes.
That's why writing prompts for a specific industry is so important; this skill is genuinely needed by prompt engineers. These are not technical skills as such, but they are required for the role. So what is the impact of prompt engineers on GenAI success? If you are a prompt engineer working on a GenAI team, the project is not only the prompt engineering part; there are other technical parts, writing code in Python with frameworks, cloud functions, and cloud storage, using Amazon, Azure, or OpenAI APIs to build the GenAI application. But as a prompt engineer you play a crucial role, because you are training the AI model. The people writing the code build the application once; the real ongoing work happens with the prompt engineer, because the AI generates responses based on its training data: how the model was trained, with which patterns, in which language, in which way. As a prompt engineer you play a major role in that, working as the AI trainer on the GenAI team. If you are not good at writing prompts, what is the value of everything else the team does, writing the code, building the user interface? The crucial part is the prompt engineer. That is why you need great ability to write the best prompts in advanced English, or whatever language the model is being trained in. You can see the first point here: enhancing the productivity and accuracy of GenAI applications.
A skilled prompt engineer enhances the productivity and accuracy of GenAI applications. As I said, others on the team can write the code, but the accuracy and productivity of the AI model depend on your side as the prompt engineer. Why? The main purpose of generative AI is to create something based on user input: input means the prompt, output means the response you trained into the AI. Second, as a prompt engineer you save time and resources by reducing trial-and-error cycles. Third, you enable businesses and individuals to unlock the full potential of GenAI tools: if you train the AI model with the best prompt patterns and responses, the end users, businesses or individuals using the GenAI application you and your team developed, can unlock its full potential and get the most accurate results. Why? Because as a prompt engineer you trained the model with accurate data and strong command of the target language. I hope you understand these points clearly. That is all about the role of the prompt engineer in GenAI success. 67. 6.3.4 Impact of Prompt Engineers on GenAI Success: Now let's look at what companies actually require for GenAI and prompt engineer roles. Companies today are often looking for people with the full set of GenAI skills. For example, go and search for GenAI jobs; but before that, let's look at prompt engineering jobs. Go to Google and just type "prompt engineer jobs", and we will look at the results directly.
You can also go to LinkedIn directly. You can see here an AWS Prompt Engineer contract role; let's take this one. This is the job description about AWS and prompt engineering: "We are looking for a highly skilled, innovative prompt engineer and AWS developer with expertise in solving real problems using effective prompt writing and cloud-based solutions." So you need some knowledge of AWS and MongoDB, plus technical skills like Python and JavaScript. Under key responsibilities you see prompt engineering and designing effective prompts, all things we have learned in this course. But AWS and the rest we haven't covered; those come under GenAI development. The prompt engineering part is a small part; the company is hiring not just a prompt engineer but a skilled person with technical knowledge of the cloud, programming languages like Python and JavaScript, frameworks like LangChain, and database management with MongoDB (which is a database, not a framework). If you already have a software engineering or coding background, you can learn prompt engineering and combine it with those skills to work in GenAI companies as a prompt engineer and GenAI specialist. Most of these companies want prompt engineering plus technical skills: Python scripting or JavaScript, PyTorch or TensorFlow, whatever frameworks and libraries in Python, LangChain, and some machine learning knowledge. Some companies, though, hire only a prompt engineer, for example in education, where they don't need to build an application: an education company or a university wants AI to generate educational content for their students.
For that, they use AI and hire a prompt engineer who can write prompts for their specific requirements to get educational content from the AI; no coding languages are needed, prompt-writing skill is enough for them. But on the development side, building GenAI applications, you need all the required skills: prompt engineering, coding skills like Python, libraries like PyTorch, frameworks such as Hugging Face Transformers, databases, and cloud, with good practical knowledge of each. Only then can you be hired as a GenAI specialist at a company. So that is what companies require; they are looking for different kinds of prompt engineers. As I said earlier, there are three types: first, writing prompts for an LLM to get specific output based on a client's or company's requirements; second, conversational AI designer or AI trainer, where you train AI models with your subject knowledge and language expertise; and third, building GenAI applications, where you train an AI model with your prompt-writing skill plus coding in Python or JavaScript, databases, and cloud. These three types of jobs are available for prompt engineers right now. Choose one of them, build a profile on top of that specific job, and you are good to go; you can find the clients and companies looking for it. You can see this listing here: a prompt engineer wanted. If you are looking for prompt engineer jobs in the USA, you can go straight to Google and it will show you openings.
You can see, for example, a remote prompt engineering and evaluation role. It says you do not need experience to apply since they provide training, and many people find the work quite engaging and repeatable; you have to be fluent in English and detail-oriented, among other items. So you can see the job requirements, qualifications, and benefits for each company looking for prompt engineers. Check them out, and based on those requirements and responsibilities, build your profile and learn accordingly. Simple. Here, for example, the qualifications ask for proven experience working with LLMs and GPT-based models, Azure cloud functions, and frameworks like TensorFlow and PyTorch; that all comes under GenAI development, which is why they ask for Azure, frameworks, and coding. Another company is hiring a prompt engineer to develop and optimize prompts for language models; that is the first type of job, a prompt engineer who writes and optimizes prompts to get the best output from AI. The development-side roles use all your coding plus prompt engineering skills, as we said earlier: development side means using your prompt engineering skill to train an AI model, with coding, or to build a GenAI application for specific use cases; end-user side means writing prompts to get the best output from AI. Another type is the AI trainer.
You can find AI trainer roles at companies like Outlier, where you train an AI model with the subject knowledge and linguistic skill you have, such as advanced English or whatever specific language they want the model trained in. So these three types of prompt engineer categories exist in the market right now. Please pick one job category. Across all three, if you learn advanced English, you will both interact with AI well and train AI well. For the first two categories, AI prompt engineer and AI trainer, you are ready to go once you have specific domain knowledge and advanced writing skill. If you want to go into GenAI development, you need to learn some extra skills: coding (Python), frameworks like TensorFlow and PyTorch, cloud (Amazon or Google Cloud), and database management. You have to learn that whole technical side to become a GenAI specialist. As for finding roles: go to LinkedIn, make a profile targeted at your chosen job category, learn the required skills, and showcase them by posting videos and articles on LinkedIn and building your connections; then you are good to go and will unlock more opportunities in this AI era. You could even build your own application; there is no limit once you have learned to use AI at its full potential. I have just shown you at a basic level how to find jobs. And remember one thing: before learning any skill, always go first.
and see the actual requirements companies are looking for in the candidates they hire. For example, suppose I want to go into prompt engineering for generative AI. I will come to Google and search for prompt engineering jobs in the USA, or anywhere. Then I will look at the qualifications and requirements that companies want in a candidate. What I will do is take those requirements. You can even take help from ChatGPT or another AI language model to learn them, but I recommend you simply copy the whole list of qualifications and requirements from any job posting. Most companies looking for prompt engineers list similar qualifications and requirements on the development side. So before learning any skill, search online for jobs in that particular skill and look at the companies' requirements. Take those requirements and learn only the related topics. Okay? Don't just ask on YouTube what the required skills are to become a prompt engineer; people there will answer only up to their own knowledge. So what is the purpose of learning the skill? Whether it is to build a solution or to get a job, for career switching, the end goal is to make money. For that, we have to learn the skill according to the companies' requirements. Instead of learning everything, focus on what companies are asking for. Learn only those topics, build a portfolio on them, and you will be ready for the interview process. You will get hired easily and fast. I hope you understand these tips and tricks.
You can also find online, on YouTube, how to find a job, how to optimize LinkedIn, and how to build a portfolio. 68. Final Thoughts: Understanding generative AI's capabilities and limitations will help you use it to its full potential in your work as a prompt engineer. When you work in generative AI, after you build a generative AI application, you will learn its capabilities and limitations. Why? Because as a prompt engineer, you train the generative AI application, so you easily know the capabilities of the application you developed, and you also know its limitations. Since you trained the AI model, you automatically know the capabilities and limitations of that particular application, and you will know how to use this generative AI, developed by yourself or your team members, to its full potential. That is all about generative AI and the role of the prompt engineer. I hope you found this whole part of the course easy to understand, and if it was valuable for you, it can help you get a good job in the AI market, which is a very interesting and growing field. At each step you will learn something new by using your prompt engineering skills. So that is all for this prompt engineering course; with this, our course has ended. Now, if you followed all the lectures and practiced all the prompt patterns with my techniques, I will congratulate you: you are now a prompt engineer.
So from now on, try things by yourself with different examples and use cases, build a portfolio, build new connections, and make a great profile on LinkedIn and other freelancing websites to unlock more opportunities in this upcoming AI era. I hope you are doing well and that you will make something big in the market in the future. So bye bye, guys, thank you for joining this course, and we will connect in another course very shortly. Thank you, bye bye.