10x Product Manager with Generative AI & ChatGPT

Anna Kolenkina, Product Builder, Entrepreneur

Lessons in This Class

  1. Welcome to the course! (1:17)
  2. AI landscape of today (8:29)
  3. Introducing Generative AI (part 1) (6:00)
  4. Introducing Generative AI (part 2) (7:32)
  5. Who stands to benefit the most from Generative AI? (10:23)
  6. How Generative AI Can Impact Product Manager’s Productivity (9:25)
  7. Follow-along: Let's build your PM AI assistant! (5:32)
  8. Follow-along: getting your ChatGPT account ready and exploring the GPT store (4:14)
  9. Follow-along: Creating custom GPTs (11:16)
  10. Follow-along: Using knowledge files for custom GPTs (7:55)
  11. Follow-along: Sharing your GPT (0:55)

Community Generated

The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected.

9 Students
2 Projects

About This Class

Product managers who understand generative AI have a significant advantage in today's market—not because it's a trendy buzzword, but because it's becoming a fundamental part of how products are designed, built, tested, and improved.

Much like how data analytics evolved from a nice-to-have to a core product management skill, generative AI is quickly becoming essential for staying effective and competitive in the role.

Whether you're streamlining your daily workflow or evaluating AI features for your product, this knowledge directly impacts your ability to make informed decisions.

By enrolling in the class, you will learn:

  • An overview of the AI landscape as it stands today—a lecture for those who want to explore the broader AI landscape beyond just generative AI.
  • What generative AI technology is and how businesses can benefit from integrating it into their products or services.
  • How generative AI will impact a product manager’s productivity.
  • How to build your own AI assistant and automate essential product management tasks.

By enrolling in the course, you will also get unlimited free access to 2+ Product Manager AI assistants I built for this course, helping you brainstorm new product ideas, get feedback on your product manager resume, and much more!

Meet Your Teacher


Anna Kolenkina

Product Builder, Entrepreneur


I help professionals and fresh graduates to learn digital skills, start new careers and advance in their roles.

I started my journey in the IT industry and software product management 15 years ago as an IT and management consultant, then transitioned to a full-on startup Product Manager and Product Director. I've built products from scratch for different industries - commodities trading, logistics, natural language processing, and e-learning - and also for different markets, from Europe to Asia. I have a Master's Degree in Applied Informatics and an MBA from the National University of Singapore.

Before joining online education, I shared my expertise and knowledge with only a limited number of people - my co-workers and mentees. With Skillshare, I'd like to s...

Level: All Levels

Transcripts

1. Welcome to the course!: Hi everyone, and welcome. Here, we are going to talk about generative AI technology, a term you've probably heard even more than blockchain, DeFi, or NFTs, as it's now the hottest topic in the tech landscape. We'll begin with an overview of the AI landscape as it stands today, an optional lecture for those who want to explore the broader AI landscape beyond just generative AI. Next, we'll have lectures introducing generative AI technology and how businesses can benefit from integrating it into their products or services. Of course, since this is a product management course, we'll dive into how generative AI will impact a product manager's productivity. After that, you'll pick a task where you'd like assistance from generative AI and we will build your own AI assistant, which you will be able to use right away. I hope you enjoy learning and find the next lectures both engaging and insightful as we explore all the possibilities of generative AI together.

2. AI landscape of today: Hi everyone, welcome back. In this lecture, we will go through an overview of the AI landscape as of today. First of all, let's define what AI is. In simple terms, AI is the ability of machines to learn, understand, reason, and interact in ways similar to us humans. This allows machines to solve new sets of problems they could not before. For example, AI powers voice assistants like Siri, recommends movies on Netflix, and helps doctors diagnose diseases. AI encompasses a range of technologies, from simple automated rules in everyday gadgets to advanced systems that learn and adapt. While AI can perform specific tasks at or above the human level, at the moment of recording this video it does not possess general intelligence or consciousness. Recently, AI has also made significant progress in creative fields, generating art, music, and literature. Okay, now that you understand what AI is, let's discuss how machines actually learn. At its core, machine learning, a key component of AI, involves teaching computers to recognize patterns and make decisions based on data. This process is somewhat similar to how humans learn from experience, but instead of learning from life experiences, machines learn from data. Machines learn in different ways, mainly categorized into three types: supervised learning, unsupervised learning, and reinforcement learning. These are what we call the foundational learning methodologies. Each of these methodologies has its own approach to learning and is used for different kinds of tasks. Supervised learning involves training AI models on labeled data. Labels are identifiers associated with input data. For example, they can be textual: in a dataset of animal photos, each photo (the input) would be labeled with the name of the animal (the output), like cat, dog, et cetera. Another example is numerical labels that can be used to predict house prices based on features. Supervised learning is essential for applications where the model learns to predict outcomes based on provided examples. This includes speech recognition, image classification, and expert systems, AI systems that mimic the decision making abilities of a human expert in a specific domain. Unsupervised learning focuses on finding patterns or structures in unlabeled data. In other words, it discovers the underlying patterns in the data without explicit guidance. Unsupervised learning is pivotal in domains like recommender systems, systems that predict user preferences and suggest relevant items accordingly.
It is also used in certain aspects of computer vision that focuses on enabling machines to interpret and respond to visual information from the surrounding environment. Third methodology is reinforcement learning. It focuses on training models to make decisions through trial and error, receiving feedback from the environment and learning optimal actions through rewards. It is key in robotics, autonomous vehicles, and some planning and scheduling tasks like resource management and automated scheduling systems. Please note that most application areas rely on a combination of different learning methodologies to leverage the strengths of each. This approach often gets better performance and more robust solutions. For instance, many modern recommender systems integrate all three methodologies to leverage their strengths. Supervised learning provides accuracy based on historical data like predicting and recommending new movies or products a user might like based on historical data with user preferences or ratings. On the other hand, unsupervised learning offers insights into users which might not be apparent through ratings alone. Clustering algorithms, a type of unsupervised learning technique that organizes data into clusters or groups based on similarities. Might find that certain groups of users tend to watch similar genres of movies even without explicit ratings and recommend movies based on these clusters. Finally, in case we want the recommendation engine to be dynamic and adapt the recommendations based on how users interact with different content. For example, by browsing, watching trailers, selecting and watching movies, reinforcement learning comes into play. The system will learn by interacting with users over time and adjust its recommendations based on user engagement and feedback. All right. Our overview of the AI application areas won't be complete without the other two that also leverages all three foundational learning methodologies. These application areas are natural language processing or NLP and generative AI. NLP implies understanding, interpreting, and generating human language, and is used in such applications as language translation, sentiment analysis, chat boards, and voice assistance. And finally, generative AI, the term that has become extremely popular in 2023 and that you probably have heard of before. It is an umbrella term that includes various techniques focused on creating new original content that never existed before, like images or text that mimics or is inspired by real world examples. Our next lecture will focus on learning more about generative AI technology. But before we begin, let's sum up what we've learned in this lecture. AI is the ability of machines to learn, understand, reason, and interact in ways similar to us humans. A key component of AI, machine learning involves teaching computers to recognize patterns and make decisions based on data. Machines learn in different ways, mainly categorized into three types or foundational learning methodologies, supervised, unsupervised, and reinforcement learning. Supervised learning teaches AI with labeled data. Unsupervised learning finds data patterns without guidance, and reinforcement learning involves learning via feedback. Most application areas rely on a combination of these learning methodologies to leverage the strength of each. Generative AI is an umbrella term that includes various techniques focused on creating new content that never existed before, inspired by real world examples. All right. 
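The difference between these learning methodologies is easy to see in code. The sketch below is purely illustrative and not part of the class materials: it assumes Python with scikit-learn installed and uses a made-up toy dataset of viewers. A supervised classifier learns from labeled examples, while a clustering algorithm groups the same rows with no labels at all; a reinforcement learning example is omitted because it would also require an environment and a reward signal.

```python
# Illustrative sketch of supervised vs. unsupervised learning (assumes scikit-learn).
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Toy data: [average rating given, hours watched per week] for six viewers.
X = [[4.5, 10], [4.0, 12], [2.0, 1], [1.5, 2], [3.0, 6], [4.8, 14]]
y = ["fan", "fan", "casual", "casual", "casual", "fan"]  # labels -> supervised

# Supervised learning: learn the mapping from features to labels, then predict.
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([[4.2, 11]]))  # e.g. ['fan']

# Unsupervised learning: group the same viewers without using any labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster index assigned to each viewer
```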
And that's it for the lecture, and we'll see you in the next video.

3. Introducing Generative AI (part 1): Hello, everyone. If you watched the previous lecture, you already have an initial idea of what generative AI is. Since that lecture was optional, let me recap the definition for those of you who decided to skip it. Generative AI refers to algorithms that can create new content, ideas, or predictions based on the data they've been trained on. Unlike traditional AI, which focuses on identifying patterns and making decisions, generative AI has the ability to produce new data, whether it's text, images, music, or even code. It can craft articles, generate business reports, design graphics, and more, all by learning from vast amounts of information. But let's break down this high-level definition and look at the generative AI ecosystem, which can be visualized as a funnel with several layers, each representing a different level of AI infrastructure. At the top of the funnel, we have AI applications and agents. These are the tools and platforms that end users interact with directly. The most prominent example here is ChatGPT, a text-generating chatbot developed by OpenAI that reached 1 million users within just five days of its launch, making it the fastest-growing app ever. Its out-of-the-box accessibility makes generative AI different from all AI that came before it. Users don't need a degree in machine learning to interact with it or see its value; nearly anyone who can ask questions can use it. Another famous example of a generative AI product is Midjourney. It generates unique visual content based on text descriptions or prompts provided by users, showcasing the creative capabilities of AI in generating new and original outputs. There is also a growing trend for companies to change their product roadmaps by incorporating generative AI features into their existing products to enhance functionality, improve user experience, and provide innovative solutions. Here are just a few examples. Microsoft introduced Microsoft 365 Copilot in November 2023, a set of generative AI capabilities integrated directly into the suite of Microsoft Office applications like Word, Excel, PowerPoint, Outlook, and others. It uses advanced AI models, such as those developed by OpenAI, to provide features that help users generate text, summarize documents, create and analyze data, design presentations, automate email drafting, and more. LinkedIn then launched a suite of OpenAI-powered tools in October 2023, adding reading and writing tools one month later, as well as tools to help with writing profiles, recruitment ads, and company pages. Adobe, a software company that provides its users with digital marketing and media solutions, launched the generative AI application Firefly in March 2023 and introduced Firefly-powered features to its flagship products like Photoshop and Illustrator. Okay, I think that's enough examples for now. If you have your own favorite generative AI product or feature, please don't forget to share its name in the Q&A section for this lecture. And let's continue exploring the levels of the AI ecosystem. Beneath the AI apps and agents, we find foundational models. Think of them as the engines behind the creativity and intelligence of generative AI-powered apps and features.
A foundational model is a large-scale AI model trained on vast and diverse datasets taken from many different sources, including books, articles, websites, images, and other digital content, allowing the model to learn from a rich variety of information. Because foundational models are trained on such massive datasets, they can capture a broad spectrum of knowledge, making them highly versatile and capable of being adapted for numerous tasks. For instance, a foundational model can quickly summarize a lengthy research paper on climate change, write a customer service script for an online retailer, and suggest different meditation techniques based on a person's stress level. The downside to this wide-ranging capability is that, for now, generative AI can sometimes provide less accurate results, highlighting the importance of careful AI oversight and risk management. Foundational models can be of different types, including large language models, image generation models, video generation models, and multimodal models. These different types of foundational models are all built on similar principles of large-scale data training, but are optimized for different outputs and use cases. Let's go through some examples.

4. Introducing Generative AI (part 2): Large language models are advanced machine learning models specifically designed to understand, generate, and manipulate human language. These models are trained on vast amounts of text data, allowing them to predict the next word in a sequence, generate coherent text, translate languages, answer questions, summarize documents, and even conduct reasoning. Examples of large language models include OpenAI's GPT series, Claude developed by a company called Anthropic, models from Mistral developed by the company also called Mistral, Llama from Meta, and others. The existing capabilities of large language models are truly impressive, to say the least. For example, GPT-4, the latest large language model from OpenAI, exhibits human-level performance on the majority of professional and academic exams. Notably, it passes a simulated version of the Uniform Bar Examination, a qualification test for lawyers, with a score in the top 10% of test takers. GPT-4 also shows human-level image understanding capabilities, as well as humor understanding and explanation. Large language models can understand physical objects, including their size, shape, and physical properties. Finally, large language models have also been evaluated on theory of mind tasks. Theory of mind is a cognitive and psychological concept that refers to the ability to attribute mental states such as beliefs, desires, intentions, emotions, and knowledge to oneself and others. It is fundamental for human social cognition, allowing individuals to interpret and predict the behavior of others, leading to more nuanced and effective interpersonal communication and relationships. Theory of mind is typically assessed through various tasks and tests. Surprisingly, GPT-4 solved nearly all the tasks, 95% to be exact. These findings suggest that theory-of-mind-like ability, thus far considered to be uniquely human, may have spontaneously emerged as a byproduct of language models improving their language skills. All right, let's stop here for now. The format of these lectures does not allow me to go through all the research papers at length, but I'll leave links in the resources section of this video for your further reference. Okay, coming back to the AI ecosystem levels.
Moving down the funnel, we come across AI cloud software and infrastructure. This layer includes the platforms and tools that support the training, deployment, and scaling of AI models. Examples include cloud services from providers like AWS, Azure, and Google Cloud, which offer the computational power and frameworks necessary for running AI applications. This layer is critical for ensuring that your generative AI applications can scale and perform reliably. At the core of AI cloud infrastructure are specialized chips, such as GPUs, and supercomputers. These chips are designed to handle the intensive computations required for training and running AI models. Without powerful chips, running complex AI models at scale would be impossible. Finally, at the base of the funnel is electricity. It might seem basic, but electricity powers everything in the AI ecosystem, from data centers housing AI infrastructure to the devices users interact with. Electricity is the foundation that supports the entire generative AI stack. Chances are the last several levels of the ecosystem are not something you think about when considering generative AI, but it is important to recognize that the scalability and efficiency of generative AI depend heavily on these underlying resources. As AI models grow more sophisticated and widespread, the demand for advanced chips and reliable electricity sources will increase, potentially creating bottlenecks that could slow down progress and innovation in the field. Okay. And that's it for this lecture. Let's sum up what we've just covered here. Generative AI refers to algorithms that can create new content, ideas, or predictions based on the data they've been trained on. The generative AI ecosystem consists of five layers. The first layer is AI applications and agents, which includes user-facing tools like ChatGPT and Midjourney. The second layer, foundational models, consists of large-scale AI models trained on vast and diverse datasets taken from many different sources like text, images, and others. Foundational models can craft articles, generate business reports, design graphics, and more, all by learning from vast amounts of information. Foundational models can be of different types, including large language models, image generation models, video generation models, and multimodal models. Models like GPT-4 already exhibit advanced skills such as reasoning and solving theory of mind tasks. The third layer is AI cloud software and infrastructure, which is critical for training and deploying AI models and is supported by platforms like AWS and Azure. The fourth and fifth layers include specialized chips such as GPUs and supercomputers, which handle intensive computations, and electricity, which powers all aspects of the AI ecosystem. Last but not least, the future development of generative AI may face bottlenecks due to increased demand for advanced hardware and reliable power sources. And that's it for this lecture, I'll see you in the next one.

5. Who stands to benefit the most from Generative AI?: Hi everyone, welcome back. Now that you know what generative AI is and what the technology is capable of, let's explore how generative AI can transform the way we work and what value it can bring to industries and businesses. Let's get started.
Generative AI is likely to have the biggest impact on knowledge work: tasks and activities that primarily involve cognitive functions like processing, handling, and generating information and knowledge, which are typically performed by knowledge workers. Specifically, this includes activities involving decision making and collaboration, which previously had the lowest potential for automation. McKinsey estimates that the technical potential to automate the application of expertise jumped 34 percentage points, while the potential to automate managing and developing people increased from 16% in 2017 to 49% in 2023. Generative AI's ability to understand and use natural language for a variety of activities and tasks largely explains why automation potential has risen so steeply. Now, let's look at which business areas stand to gain the most from generative AI. I'll also refer to McKinsey's research, which predicts that about 75% of the value that generative AI use cases could deliver falls across four areas: customer operations, marketing and sales, software engineering, and R&D. Let's look at some examples of how generative AI can transform each of these areas in more detail. For customer operations, generative AI-powered chatbots and agents can provide instant and personalized responses to complex customer requests, regardless of the language or location of the customer. For example, the customer service platform Zendesk has integrated generative AI into its customer support platform to automatically detect what customers want and how they feel, responding like human agents would. Their AI agents can also carry out full tasks like refunds, changing passwords, or cancellations. It is estimated that applying generative AI to customer care functions could increase productivity by a value ranging from 30 to 45% of current function costs. In marketing, generative AI could significantly reduce the time required for ideation and content drafting, saving valuable time and effort. For example, Coca-Cola uses OpenAI's generative models to create engaging advertising content and social media posts. This allows the company to maintain a consistent brand voice and style across different platforms while quickly adapting content for various audiences. In sales, generative AI can help nurture leads and automate repetitive tasks. Salesforce integrates generative AI into its CRM platform to assist sales representatives in crafting personalized outreach emails, follow-up messages, and sales pitches. For example, the AI can generate customized messages based on a lead's previous interactions, preferences, and engagement history. Additionally, companies like Outreach.io use generative AI to automate follow-ups and maintain continuous engagement with potential customers until they are ready for a direct conversation with the sales representative. Generative AI has the potential to significantly impact software engineering by treating computer languages as natural languages. According to McKinsey analysis, the direct impact of AI on the productivity of software engineering could range from 20 to 45% of current annual spending on the function. This value would arise primarily from reducing time spent on activities such as generating initial code drafts, code correction and refactoring, and root cause analysis.
An internal McKinsey empirical study of software engineering teams found that those who were trained to use generative AI tools rapidly reduced the time needed to generate and refactor code, and engineers also reported a better work experience with improvements in happiness and fulfillment. Generative AI has significant potential to enhance R&D productivity, delivering value estimated at 10 to 15% of overall R&D costs in industries like life sciences and chemicals. Generative AI is already being used for generative design, where it can accelerate the development of new drugs and materials by generating candidate molecules. For instance, Insilico Medicine, a biotech company, uses generative AI models to identify novel drug candidates more efficiently by analyzing vast datasets and generating potential molecular structures. Okay, moving on to which industries will benefit the most from generative AI. The good news is that virtually every sector stands to gain. For example, in the banking sector, adopting generative AI could add an extra $200 billion to $340 billion annually by building on the efficiencies already achieved by artificial intelligence. This would be done by automating lower-value tasks in risk management, such as generating required reports, tracking regulatory updates, and collecting data. In the life sciences industry, generative AI is set to play a major role in advancing drug discovery and development by predicting molecular structures, generating patient reports, and even simulating clinical trials. This dramatically reduces time to market for new treatments and enhances personalized medicine. So as we can see, businesses have real opportunities to enhance their performance and increase their revenues through the strategic implementation of generative AI. And by implementing generative AI, we don't necessarily mean developing brand new generative AI products. A large portion of the use of generative AI within an organization will come from employees using features integrated into the software they already use. For instance, email platforms might offer options to draft initial messages, productivity tools could create presentation outlines based on brief descriptions, and CRM systems could suggest strategies for engaging with customers. These capabilities have the potential to significantly boost the productivity of every knowledge worker. In the following lecture, we will speak in more detail about how implementing generative AI can impact the work of product teams and product managers. And for now, let's sum up the lecture. Generative AI is likely to have the biggest impact on knowledge work, tasks and activities that primarily involve cognitive functions. According to McKinsey research, about 75% of the value that generative AI use cases could deliver falls across four areas: customer operations, marketing and sales, software engineering, and research and development. Almost every industry, from banking to healthcare, can benefit from generative AI through increased efficiency and cost reductions. Integrating generative AI into existing software can significantly boost productivity without the need to develop entirely new generative AI products. That's it for now, I'll see you in the next video.

6. How Generative AI Can Impact Product Manager's Productivity: Hi everyone, welcome back. Since you are enrolled in a product management course, we cannot miss discussing the topic of how generative AI will impact product managers' work and productivity, given that product managers are knowledge workers.
Those who will be most impacted by generative AI. As the technology is new and evolving fast, many product managers and product teams are still exploring which tools to choose, how to make the most of them and which use case to start with, McKinzy conducted interesting research to understand and measure the impact of generative AI on product management I found the results worth exploring. So let me share more about the research. The company recruited 40 product managers with different levels of experience from the US, Canada, Europe, and Latin America to participate in a study. Before participating in the study, the PMs attended a brief training workshop to familiarize themselves with generative AI tools. Research participants were then asked to play the PM role for a fictional company and to work individually through five activities at their own pace. The activities simulated the real life work of a PM across three phases of the product management process, discovery, validation, and development, and required PMs to create deliverables, such as market research document a press release with frequently asked questions, Product one pager, a document shared with internal stakeholders to align on the why behind the product initiative, its value proposition and how success will look. The participants also needed to produce product requirements document and a product backlog. Participants were divided into three groups, each with access to different generative AI tools, one group had access to task specific tools such as copy.ai. Another had access to chat GPT only. And the third group did not have access to any generative AI tools. Each group rotated and start and end times were recorded to measure the time spent on each task. PMs who used generative AI tools, either generic tools like CHAD GPT or task specific tools took less time on average to complete activities than PMs who did not use them, accelerating the products time to market by about 5% over a six months product development life cycle. The time savings were driven by using generative AI to synthesize user research and write press releases in the discovery phase, develop product requirement documents in the validation phase, and create product backlogs in the development phase. Another insight from the research is that product managers reported a significant improvement in their experience when using generative AI tools. 100% of the participants said that access to generative AI improved their product management experience. All but one of the PMs reported that the tools were helpful with the tasks and that they would highly or somewhat likely to use these tools in their work after the study ended. Three out of four believed that quality of their deliverables was either largely or somewhat improved compared to what they achieved without them. PMs perceived the tools as automating their mundane routine tasks and enabling them to focus on more strategic activities, such as defining the product vision, creating a strategic roadmap. And engaging in customer facing activities. Okay, let's move to the third research insight. Generative AI tools had almost twice as much positive impact on content heavy tasks, synthesizing information, creating and polishing content, and brainstorming as on content light tasks, such as data gathering and visualization. Specifically, PM productivity with content heavy tasks improved by 40%. How amazing is that? 
General-purpose tools like ChatGPT were more readily used by PMs than task-specific tools, allowing PMs to iterate with flexibility and use the tools as partners in solving problems. This outcome is likely because general tools are more familiar to PMs and easier to use than specialized tools. In addition, some specific generative AI tools are designed to address more nuanced use cases and require custom input and text instructions (prompts) that PMs are not accustomed to. Now, how about the quality of the deliverables produced using generative AI tools? According to the survey results, on average, generative AI tools helped PMs produce more accurate and complete outputs. However, the impact of generative AI generally varied based on the experience level of the PMs using them. More experienced PMs maintained a high quality of output, while junior PMs gained productivity but at the expense of quality. From this research finding, we can hypothesize that seasoned product managers can provide better instructions to generative AI and perform more effective reviews of the output, given their experience and stronger product sense. More junior PMs, on the other hand, are still learning how to create high-quality deliverables and cannot yet write complete instructions to generative AI or effectively review the outputs. The final conclusion that the researchers are making is that while generative AI cannot replace the foundational skills needed to be a product manager, it can help PMs develop those skills. I tend to agree with this conclusion. What do you think? Please share your thoughts in the Q&A section. So as we can see from this and the previous lectures, generative AI is not just a passing trend, but a powerful tool that can greatly enhance our work. We can finally get an extra pair of hands and delegate routine repetitive tasks, giving us the much-needed extra time to focus on strategic thinking and creative problem solving. If you are hesitant to include generative AI in your daily work routine or are unsure which tool to use, you're in the right place. The next series of lectures will be very hands-on. You will learn how to create your own AI assistant to handle content-heavy product manager tasks. As always, I'll share my experience working with these tools and demonstrate how to create one of the AI assistants without any coding involved. Let's get into practice. See you in the next video.

7. Follow-along: Let's build your PM AI assistant!: Hi everyone, and welcome back. So far, our discussion has been mostly theoretical, so I suggest we switch things up and try to automate one of our product management tasks. This is my favorite part because, to be honest, I use generative AI quite extensively to support many of my tasks, from brainstorming and writing interview scripts or product descriptions to assisting with spelling and grammar checks. In the previous lecture, we mentioned that generative AI tools are most effective for content-heavy tasks, which include the following. Generating content: for example, writing a problem statement, user interview questions, discussion guides, survey questions, product requirements documents, and so on; I'd also include brainstorming ideas here. Analysis and research: this includes tasks like analyzing customer interviews and support ticket information, conducting market and competitive research, and other similar tasks. Getting feedback: this is a category we haven't discussed yet.
For example, you might ask for feedback on your resume before a job interview or request advice on what questions you might be asked. You could upload your product portfolio and ask for suggestions on how to improve it. There are countless examples where you might want feedback. Personally, this is one of my favorite use cases for generative AI. However, you should be cautious about data privacy, especially when dealing with documents under NDA. Always check the data usage policy to see how your data is handled and if it will be shared with third parties. If in doubt, avoid submitting the entire document and instead upload only a part that does not contain sensitive information, or describe in your own words what you want feedback on. After watching this lecture, your task will be to decide which type of task, content generation, analysis, or feedback, you'd like to automate with generative AI. When making your selection, I strongly recommend choosing a task you are already familiar with, one that you've done several times. As you will soon see, you will need to provide detailed instructions for the model, so having prior experience will help when writing those instructions. For the tutorials, I will demonstrate two projects: a side project idea generator app to help aspiring product managers brainstorm side project ideas, and a product manager CV review to help improve a product manager CV for the first or next PM job application. To build these apps, we will use one of OpenAI's foundational models, GPT-4, to create custom GPTs. GPTs are custom versions of ChatGPT that users can tailor for specific tasks or topics. They can range from answering frequently asked questions to performing detailed data analysis, generating creative content, or even interacting with third-party applications to automate workflows. In real-world situations, however, choosing the right foundational model for your use case can be a challenge. It turns out that using the largest model isn't always the best choice, as it can be more expensive, harder to manage, and may produce inconsistent results across different tasks. A smaller, more focused model might be a better fit for certain use cases. But how do you decide which model is right? I found a six-step framework from IBM very helpful, and it is something I use for my projects. I'll leave a link to the framework in the resources section so you can dive deeper into it when you are tasked with selecting the right foundational model. Okay, back to the hands-on part of the lecture. Now it's your turn to choose which task you'd like to automate. Please share your decision in the Q&A section, and I'll see you in the next lecture.

8. Follow-along: getting your ChatGPT account ready and exploring the GPT store: Hi everyone, welcome back. The first thing we need to do to build our AI assistant is to create an account with ChatGPT. You will need access to the paid subscription option to be able to create custom GPTs. However, if you don't want to select a paid plan yet, you can sign up for the free tier and still follow along with the tutorials. The difference is that you won't be able to save your instructions within your GPT. Instead, you will need to create a new chat and paste the instructions whenever you need AI assistance for that task. I'll provide more details in the upcoming tutorials. After creating your account, the next step is to set custom instructions for ChatGPT.
This feature allows you to customize hat GPTs responses based on your preferences, and you can modify or remove these settings anytime for future conversations. From the main screen, click on your account icon. The top right corner and then select customize chat GPT. The first question you'll answer is, what would you like hat GPT to know about you to provide better responses? Here, provide information about your background, where you currently work, and what you do. You can explain this in simple terms as if you were writing an essay about yourself. The second question is, how would you like Chad GPT to respond? Here, provide any details that would help Chad GPT structure its responses. For example, I said, I prefer responses framed in conversational language without using formal words or cliches. Since many of my students are non technical, I asked it to use language that can be easily understood by non technical people who are not experts in the topic I'm teaching. You can also choose what capabilities you plan to use most of the time. Take some time to think about what information you'd like to provide to chat GPT. When you are done, click Save. Another setting worth exploring is data controls. You need to decide if you want to allow your content to be used to train Open AI models. You can toggle this setting on or off. The last step is optional, but I recommend doing it if it is your first time customizing GPTs. Go to Explore GPTs and browse through the apps already available in the store. You can search by categories or keywords. For instance, let's search for product manager and see what comes up. Here is a list of relevant custom GPTs along with the short description and the number of conversations each GPT has been used for. Click on the GPT of your choice and explore how it works. Check out the conversation starters and see what happens when you click on one of them. By exploring existing GPTs and seeing how they work, you can get a good idea of how to design your own GPT. Plus, you might find a useful app you can use for your own tasks rather than creating one from scratch. In the Q&A section for this video, please share which custom GPTs you've discovered and ILCA in the next tutorial. 9. Follow-along: Creating custom GPTs: Everyone. Welcome back. Let's create our first custom GPT. To get started, go to account settings. My GPT is create a GPT. On the first step, you'll see a GPT builder that uses a conversational interface to help you create your GPT without having to manually fill out all the required fields. The configured tab allows you to provide more detailed instructions for your GPT. I usually prefer to start with the configured tap right away, and that's what we'll do for this tutorial. Begin by defining the name and description for your GPT. Next, you can either upload a logo for the GPT or create one using Dali. Open Ayes text to image model. Let's keep the instructions and conversation starter fields for now and explore the three sections at the bottom of the page. The knowledge feature allows you to provide additional content for your GPT to reference. You can upload one or several documents here for your GPT to access while performing tasks. For the ID generation GPT, we won't be using any additional content, so we will leave this section empty. The capabilities section allows you to enable web browsing, DL image generation, and advanced data analysis. If you want your GPT to perform additional functions. 
For my GPT, I'll choose web browsing and DALL-E image generation. Custom actions are commands or scripts that the GPT can trigger to perform a variety of functions, such as interacting with APIs, manipulating data, or triggering workflows. Essentially, they extend the functionality of GPT models beyond text generation. For instance, if a user asks for the current weather, a custom action could be set up to fetch real-time weather data and return that information. Custom actions require technical knowledge, so we won't be including them in the idea generation GPT. Now, let's return to the instructions section, which describes the core logic behind how the custom GPT will work. There are certain guidelines to follow when writing instructions to get the best results. Let's go over them. These guidelines are applicable not only for custom GPTs but also for any individual chat you create with ChatGPT. If you are on the free plan, you won't have the same interface to write and save instructions, as customizing GPTs is not included in the plan. As a workaround, I recommend saving the instruction text in a Google Doc so you can access it later. When you're ready to test or use the instructions, just open a new chat and copy-paste them. You can always access the chat history through the left-hand side menu. Now let's cover how to write the instructions. Start by describing the purpose and use case for your custom GPT. Explain what kinds of questions or tasks it should help with and what outcomes you expect. This helps the model stay focused on delivering relevant responses. For example, for the idea generation GPT, we have the following instructions. The full script of the instructions used to create this GPT is available in the resources section, so don't forget to check it out. Next, identify the target audience for your GPT. This includes their skill level, interests, and any specific needs or preferences. Third, describe the tone you want the GPT to have. This could be friendly, professional, casual, or humorous, depending on your target audience. Specify if you want the GPT to use conversational language or maintain a more formal style. You can also provide behavioral instructions for how the GPT should handle different types of interactions, such as questions it cannot answer, handling sensitive topics, or when to redirect users to other resources. The next set of instructions for the GPT we are building will depend on which conversation starter is chosen. Conversation starters are example prompts users can use to begin the interaction. For the idea generation GPT, we have two conversation starters. Our instructions will vary depending on which one the user selects. Here is how we handle this logic. First, we write: if a user selects "Give me ten ideas for my side project" as the conversation starter, proceed to steps one to four below. When writing instructions, it is important to break down multi-step tasks into smaller, more manageable steps to ensure the model can follow them accurately. Be as detailed as possible, especially when multiple actions are required within a single step. For example, in step one, we ask the user to provide the following information about themselves. We then list the questions we want the GPT to ask one by one. We also include a behavioral instruction to ask each question sequentially, waiting for the user's response before moving to the next question. In step two, we instruct the GPT to generate ten side project ideas.
These ideas must intersect across all four areas we've defined. We capitalize all to emphasize the instructions. In the step three, we specify the information that needs to be provided for each idea. Notice that we structure the information into a list to improve clarity. It is also a good practice to include one or more examples to reduce variability in output. Here is an example included in the idea generation GPT instructions for step three. Lastly, step four asks the user if they would like to refine or further develop the generated ideas. We've just covered the instructions for the conversation starter. Give me ten ideas for my side project. Instructions for the second conversation starter. How can I build my side project are much simpler. We ask GPT to provide a link along with following text. Great. Now let's test our GPT in action. Mm. We've got some great ideas that we can either start working on immediately or provide additional instructions for how they should be refined. Of course, one test won't be enough to finalize your GPT, so you will need to iterate several times refining and adjusting your instructions based on the responses you observe. All right. That wraps up the tutorial on creating your first GPT. The ID generation GPT is available for you to test and explore. You will find the link to the app along with the instructions used to customize the GPT in the resources section. Take your time and review the instructions you want your GPT to follow. In the next tutorial, we will cover how to implement a scenario where the model requires knowledge beyond it has been trained on S there. 10. Follow-along: Using knowledge files for custom GPTs: Everyone, welcome back. Let's continue exploring how to create a custom GPT. You might have a use case when the model requires knowledge beyond what it has been trained on. Imagine you are building custom GPT to help derive insights on your customer's problems and product improvement opportunities. While GPT four can offer general advice on how to conduct product discovery, it does not have access to specific details about your customers and products, such as customer interview scripts, customer survey results, support tickets, and other relevant sources. Solution here is to give the GPT access to these data sources so that it can retrieve relevant information and help generate product improvement ideas. To achieve this, the GPT needs a mechanism to fetch and integrate specific up to date information from your internal tools into its responses, which is where retrieval augmented generation comes in. Retrieval augmented generation is the process of retrieving relevant contextual information from a data source and passing that information to a large language model alongside the user's prompt. This retrieved data augments the model's base knowledge to improve the accuracy and relevance of its output. To implement retrieval augmented generation, you can either connect your GPT to live data sources, such as your ticketing system or customer database, or use the Knowledge upload feature where files containing additional context are indexed and used in responses. PTs then retrieve this data dynamically to provide more relevant insights based on user prompts. For this tutorial, we will learn how to use the Knowledge upload feature. Let's dive into the details. I have created a second custom GPT designed to help product managers improve their CV for the first or next product manager role application. 
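Before going further with the CV review GPT, a quick aside on the retrieval-augmented generation pattern described above: it does not have to go through the knowledge upload feature, and the same idea can be sketched directly against the OpenAI API. The example below is an illustration only, not the method used in the course. It assumes the openai Python package with an OPENAI_API_KEY set in the environment, "retrieves" by naive keyword overlap over a few invented notes, and uses gpt-4o-mini as a placeholder model name; a real system would retrieve with embeddings and a vector store.

```python
# Minimal retrieval-augmented generation sketch (assumes the openai package >= 1.0
# and an OPENAI_API_KEY environment variable). Retrieval is naive keyword overlap.
from openai import OpenAI

client = OpenAI()

documents = [
    "Ticket #1042: users report the export button is hidden on small screens.",
    "Interview note: power users want keyboard shortcuts for bulk editing.",
    "Survey result: 62% of churned users cited slow page loads as the main reason.",
]

def retrieve(query: str) -> str:
    """Return the stored note sharing the most words with the query."""
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

question = "What product improvement would reduce churn?"
context = retrieve(question)

# The retrieved note is passed to the model alongside the user's prompt.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```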
For this GPT, we have two conversation starters. When you choose "Please review my CV for a product manager role", you will be asked to provide several pieces of information: your CV or the parts you want feedback on, a description of the job you're interested in applying for, and any other relevant information about your background or professional goals that would help in understanding your profile. Once this information is provided, you will receive feedback, including an overall review of your resume, feedback on your work experience, education, formatting and style, and other recommendations. In addition, the custom GPT will highlight your profile strengths and areas for improvement. Now let's look at the Configure tab for the GPT. Let's go straight to the knowledge feature. Here I uploaded six documents for the GPT to reference when making its recommendations. Five of these documents include information on how to craft a CV specifically for a product manager role. The last document contains examples of job requirements for product manager roles, which I collected from LinkedIn jobs across three regions: the US, EMEA, and APAC. I want the GPT to use these documents when reviewing the submitted CV and to pull information on how the CV should be structured and what content should be included based on current job market expectations. Let's see how I reference the documents in the instructions section. In this section, I included a paragraph describing how the GPT should use the knowledge files. I wrote: to provide recommendations, refer to the knowledge section of this GPT. Then I listed the document names, followed by a brief description of what each document contains. And that's it. I didn't provide additional details on when the GPT should refer to each specific file or what exact information it should extract from them. Instead, I wrote the following: refer to the job description the user wants to apply for and has provided, assess the given resume against this job description, and, for each of the seven points listed above (here I refer to the output format the user will see), highlight how the resume can be improved to maximize the chances of being shortlisted for the interview. Initially, I tested more detailed instructions on when the GPT should access each of the knowledge files. For example, in an earlier version I wrote: when reviewing the work experience section of the submitted CV, ensure it is structured according to the recommendations described in this file. I did this for each item in the output list. However, when testing that version, I found that the recommendations were not as clear and tended to be somewhat ambiguous. I realized I had placed too many restrictions on the GPT, making it difficult for the model to respond naturally and helpfully. That's why I modified the instructions to the version you see now, which led to much better structured, concise, and accurate results. You're welcome to test this GPT yourself and share your thoughts in the Q&A section for the video. You'll find the link to the GPT in the resources section. I've also uploaded the full text of the instructions I wrote for this GPT so that you can reference them as well. And by the way, if you're looking for more examples of how to write instructions, I recommend checking out the GPT Builder from OpenAI. The GPT Builder itself is a custom GPT with instructions and actions.
The full text of the instructions written for the GPT Builder is available on the OpenAI support page, and I found it really helpful to review this before writing my own. I'll include a link to this page in the resources section as well. And that's it for this tutorial, I'll see you in the next one.

11. Follow-along: Sharing your GPT: Thank you.