Hands-on Text Mining & NLP in Python Using TextBlob | Natural Language Processing Made Easy | Nileg Production | Skillshare

Hands-on Text Mining & NLP in Python Using TextBlob | Natural Language Processing Made Easy

Nileg Production, A Developer & A Teacher

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more

Lessons in This Class

26 Lessons (2h 3m)
  • 1. What is TextBlob? (2:00)
  • 2. Environment Preparation For Development (7:50)
  • 3. Blob Creation and Tokenization (4:43)
  • 4. Part Of Speech Tagging (3:30)
  • 5. Noun Phrases (4:05)
  • 6. Sentiment Analysis (6:11)
  • 7. Words Inflection (6:49)
  • 8. Advanced Sentiment Analysis (3:51)
  • 9. Spellcheck (4:58)
  • 10. Words and Noun Phrase Frequency (5:48)
  • 11. N-grams (5:00)
  • 12. TextBlob acts like Python Strings (4:49)
  • 13. Start and End Indices (3:34)
  • 14. Text Classification System (3:56)
  • 15. NaiveBayesClassifier (8:03)
  • 16. Text Classification (6:02)
  • 17. TextBlob Classification (4:18)
  • 18. Checking Model Accuracy and Model Updating (3:44)
  • 19. Sentiment Analyzer (7:30)
  • 20. Tokenizer (5:21)
  • 21. Noun Phrase Extractor (3:28)
  • 22. POS Tagger (3:17)
  • 23. Blobber (6:10)
  • 24. Language Detection and Translation (3:10)
  • 25. HTTP Error (3:54)
  • 26. Class Project (1:16)

Community Generated

The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected.

7 Students

-- Projects

About This Class

In the NLP Boot-camp: Hands-on Text Mining in Python Using TextBlob for Beginners course, you will learn text mining, sentiment analysis, tokenization, noun phrase extraction, N-grams, and many other new things. I start from a very basic level and assume that everyone is an absolute beginner with no prior knowledge of machine learning, artificial intelligence, or natural language processing, so I explain everything in a very easy manner. I begin with tokenization, part-of-speech tagging, noun phrase extraction, sentiment analysis, spell checking, word inflection, lemmatization, word and noun phrase frequency, and N-grams. Then I move to the intermediate level, where I explain how to develop your own text classification system: what a Naive Bayes classifier is, how to create a model, and how to train and test it. In the advanced, final level, I explain model accuracy and then revisit the tokenizer, sentiment analyzer, part-of-speech tagger, and noun phrase extractor. For all of these projects I use one of the easiest Python libraries for natural language processing: TextBlob. The course is designed so that, after completing it, learning other natural language processing libraries such as NLTK, TensorFlow, and Keras will no longer be difficult for you.

Students Will Learn:

  1. Strong concepts of Natural Language Processing
  2. Complete command of Tokenization, Tagging, and Noun Phrase Extraction
  3. Complete knowledge of Sentiment Analysis, Words Inflection, and Lemmatization
  4. Full familiarity with N-grams, Frequency, and Text Classification Systems
  5. Complete knowledge of Naive Bayes Classifiers and Model Accuracy
  6. Complete knowledge of Language Detection, Translation, and Spell Checking
  7. In addition, students will learn how to use NLTK classes in TextBlob

Prerequisites:

  1. Basic Python Programming.

Meet Your Teacher

Nileg Production

A Developer & A Teacher

Hello, I'm Umair.


Class Ratings

Expectations Met?
  • Exceeded!: 0%
  • Yes: 0%
  • Somewhat: 0%
  • Not really: 0%
Reviews Archive

In October 2018, we updated our review system to improve the way we collect feedback. Below are the reviews written before that update.


Transcripts

1. What is TextBlob?: Hello everyone. In this tutorial we will talk about TextBlob, so let's begin the lecture. TextBlob is basically a combination of two words, "text" and "blob". As you know very well, text can be a word, a sentence, a paragraph, or even a whole essay. And what is a blob? A blob is basically a small spot or lump of something. So in this case, what will our blob be? Our blob is a blob of text, a piece of text. I hope that makes sense. By definition, TextBlob is a Python library for processing textual data; keep in mind that it is a library for processing textual data. It provides simple methods and functions, or, put differently, simple APIs, for natural language processing. Now I have a question, and I hope the same question is in your mind as well: what kinds of processing? This processing includes part-of-speech tagging, noun phrase extraction, sentiment analysis, classification, translation, and much more, and we will go through all of these things very deeply in the coming tutorials, so keep watching and just follow the instructions. In the next tutorial we will see how to prepare the environment for TextBlob. I will see you in the next tutorial.
2. Environment Preparation For Development: Hello everyone. In this tutorial we will see how to prepare the environment for natural language processing using the TextBlob library. So let's begin the lecture. First of all, you need Python. Just go to python.org and download the latest stable version, which at the time of recording is 3.9.7. Hit the download button and make sure you download the installer for your operating system: if you are using Windows, download it for Windows; if you are using Linux, click the Linux/Unix link; and if you are using macOS, download it for macOS. After downloading Python, double-click the setup and you will see the installer window. Make sure you tick the little checkbox that says "Add Python 3.9 to PATH"; this adds Python to your environment variables. This step is very important, otherwise you will need to add the Python path to the environment variables manually. After downloading and installing Python, go to jetbrains.com/pycharm. What is PyCharm? PyCharm is an IDE, an integrated development environment, in which we will actually write our code. The reason I am using PyCharm is that, firstly, its interface is quite good, and secondly, it is made for Python development. There are two editions of PyCharm, Professional and Community. For pure Python development the Community edition is all we need, so download it, double-click the setup, hit Next, Next, Install, and PyCharm will be installed. After installation you will get an interface in which it asks you for the project location and a few other things; we will go through all of them. First, the location: we need to create a new project and set a location for it. I want to save my project on the desktop, so I created a separate folder named "textblob", and inside it another folder named "project_one", in which I will save all my files; you can create a new folder with the new-folder button in the dialog. After creating everything, click OK. Now we need to create a virtual environment. What is a virtual environment? It is basically an isolated environment in which we do our development: whatever we do inside the virtual environment does not interact with our main system, and whatever we do on the main system does not interact with the virtual environment. That is the essential difference between the main system and a virtual environment. We need to define a location for the virtual environment as well, so I click the folder icon, go into the "textblob" folder, and create another folder there, for example "project_one_venv", in which all of the virtual environment data will be saved. Click OK, keep everything else as it is, and click Create; PyCharm will load the components and create the virtual environment in a few seconds. Once the virtual environment is ready you will see the project_one folder with a main.py file. Don't worry about the sample code inside it; just select everything and press Backspace, because we do not need that code. So now we have installed Python and PyCharm, and the third step is to install TextBlob. It is very simple, but do not install it using the system CMD: because we created a virtual environment, there is no interaction between the system-wide packages and the virtual environment, so a program written inside PyCharm would not find the library and would give an error. Instead, open the terminal inside PyCharm and type "pip install textblob", then hit Enter; the installation will take a few seconds. After installing TextBlob we also need to download the necessary NLTK corpora. Let me clarify that TextBlob is based on the NLTK library, so we need to download the corpora it requires. In the terminal, type "python -m textblob.download_corpora" and hit Enter. Now we have the necessary corpora as well, and that was all about preparing the environment. In the next tutorial we will jump into the development, so I will see you in that tutorial. Until then, bye-bye.
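
For reference, the two terminal commands used in this lesson are collected below. Run them inside PyCharm's built-in terminal so they apply to the project's virtual environment; these are the standard TextBlob install commands, not a verbatim capture of the instructor's screen.

    pip install textblob
    python -m textblob.download_corpora
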
3. Blob Creation and Tokenization: Hello everyone. In this tutorial we will jump into practical work: we will see how to create blobs of text in TextBlob, and then how to perform tokenization, which is one of the most basic NLP operations. So let's begin the lecture. Previously we created our virtual environment, and this is our main.py file. First things first, import TextBlob: just write "from textblob import TextBlob". We have imported the TextBlob class, and with the help of this class we will actually create our blob of text. Let me clarify that performing anything in TextBlob consists of two steps: in the first step we create a blob of text, and in the second step we call a certain function for a specific task. I hope that makes sense. Now let's come to step number one, creating a blob of text. I write blob = TextBlob(...) and inside the parentheses I pass the required string; for example, in this case I will go with "Hello, this is a natural language processing course." We have created our blob, and if I write print(blob) it will print it. If I run it, there is our string. Awesome, you have completed step number one, the creation of a blob. Now we jump to step number two, calling a function for a specific task, and in this case our specific task is tokenization. Before I jump into it, you need to understand what tokenization is: tokenization basically refers to dividing a text or a sentence into a sequence of tokens, and these tokens roughly correspond to words. So in simple terms, we need to divide this sentence into words, and it is a very easy task: just call blob.words, and it will divide the sentence into small tokens, or small words. If I run it now, you will see nothing, because the words property has created the tokens but we still need to print them. So write print(blob.words), and now we see the tokens of the sentence: "Hello", "this", "is", "natural", "language", "processing", "course". This is basically a list, and you can loop through all of these words as well: for example, "for w in blob.words: print(w)" prints each word separately, and if I run it, there are my words. So this is how you can perform tokenization in TextBlob with just a single line. That is why I chose the TextBlob library for natural language processing: it is very good, very easy, and very powerful. In the next tutorial we will see something new. Until then, bye-bye.
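
A minimal, self-contained sketch of this lesson's two steps, blob creation and tokenization; the example sentence and variable names are just illustrative.

    from textblob import TextBlob

    # Step 1: create a blob of text
    blob = TextBlob("Hello, this is a natural language processing course.")
    print(blob)

    # Step 2: call a function for a specific task - here, tokenization
    print(blob.words)       # WordList of tokens (roughly, the words)

    # You can also loop over the tokens one by one
    for w in blob.words:
        print(w)
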
4. Part Of Speech Tagging: Hello everyone. In this tutorial we will see how to find the different parts of speech in a sentence; if I say we will do part-of-speech tagging, it makes even more sense. So let's begin the lecture. Previously we did tokenization, and I hope you understood it; now we will do part-of-speech tagging. It is very simple. Just keep in mind that performing any type of task in TextBlob consists of two steps: creating a blob, and then performing a certain operation on that blob. So step number one is our blob, "Hello, this is a natural language processing course", and now I have to apply part-of-speech tagging to this blob. It is very, very simple: blob.tags returns the tags, and I will save all of these tags in a variable, say pos, for part of speech, and then print that variable. If I run it, this should work fine, and here is my output: each word together with an abbreviation, for example "Hello" tagged as a noun and "this" tagged as DT. These are all abbreviations for the different parts of speech; I have attached a list, so you can download it and go through all of the abbreviations, for example NN represents a noun. So this is how you can perform tagging in TextBlob. Now, what if I want to print only the nouns? That is very simple as well: loop through the list with "for word, tag in pos". Why am I writing "word, tag" here? Because each item that tags returns contains two things, one is the word and the other is the tag, so I save the word in the first variable and its tag in the second. Then I check: if my tag equals "NN", I just print that word. This will print only the nouns, and if I run it, here are my nouns: hello, language, processing, course. I hope you have understood this part, and in the next tutorial we will see something new. Until then, bye-bye.
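
Here is a short sketch of the tagging code described above, again with an illustrative sentence; the exact tags you see depend on the tagger's output.

    from textblob import TextBlob

    blob = TextBlob("Hello, this is a natural language processing course.")

    pos = blob.tags        # list of (word, tag) tuples, e.g. ('course', 'NN')
    print(pos)

    # Print only the nouns (singular nouns are tagged 'NN')
    for word, tag in pos:
        if tag == "NN":
            print(word)
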
5. Noun Phrases: Hello everyone. In this tutorial we will see how we can perform noun phrase extraction in TextBlob. So let's begin the lecture. Previously we performed part-of-speech tagging, and in this tutorial we will see how we can extract noun phrases. But before I go to noun phrase extraction, let's make the text longer, for example: "Hello, this is a natural language processing course. In this course you will learn TextBlob for NLP from the beginning. You will learn some basic concepts." I think that is quite good, and now we will apply noun phrase extraction. We have our blob, and the operation is very, very simple: just write blob.noun_phrases. If you need to print it you can write print(blob.noun_phrases), or you can assign blob.noun_phrases to a separate variable and then print that variable. If I run it, here are my noun phrases, one, two, three, four of them: things like "language processing", "text blob", and "basic concepts". That is quite good. You can even find how many there are: if I write len(blob.noun_phrases), save that length in a variable, and print it, it returns 4, because there are four noun phrases in this paragraph. I don't know why I didn't mention it earlier, but you can also find the total number of words in the paragraph using the phenomenon we covered before, tokenization. It is very simple: n = len(blob.words), and then print(n), and it returns the total number of words this paragraph has. Here is my output: this paragraph contains 22 words. You could count them by hand, but that is quite a time-consuming task; this is how you can count the total number of words in a paragraph. I hope you have enjoyed this tutorial, and in the next one we will see something new. Until then, bye-bye.
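
A small sketch of noun phrase extraction and the two counts mentioned here; the longer example text is paraphrased from the transcript, so your exact phrases and word count may differ.

    from textblob import TextBlob

    text = ("Hello, this is a natural language processing course. "
            "In this course you will learn TextBlob for NLP from the beginning. "
            "You will learn some basic concepts.")
    blob = TextBlob(text)

    print(blob.noun_phrases)        # WordList of extracted noun phrases
    print(len(blob.noun_phrases))   # how many noun phrases were found
    print(len(blob.words))          # total number of words, via tokenization
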
6. Sentiment Analysis: Hello everyone. In this tutorial we will see how to perform sentiment analysis in TextBlob. So let's begin the lecture. First of all, before I jump into coding, it is very important to understand what sentiment analysis is. Sentiment analysis is basically the process of determining the attitude or the emotion of the writer: whether it is positive, negative, or neutral. The sentiment property of TextBlob, which enables us to determine the attitude or emotion of the writer, returns two properties: the polarity and the subjectivity. What is polarity? Polarity is a float value that lies in the range of minus one to one. If we get a polarity between minus one and zero, this means the attitude of the writer is negative; if we get minus one, the sentence is very negative; and if we get a value between zero and one, the attitude of the writer is positive. So polarity basically tells us about the attitude of the writer. Subjectivity relates to opinion: subjective sentences generally refer to personal opinion, emotion, or judgment, whereas objective sentences refer to factual information. Subjectivity is also a float value, and it lies in the range of zero to one. So polarity lies in the range minus one to one, while subjectivity lies in the range zero to one. I hope you have understood what sentiment analysis is; now let's jump into some practical work. I am in PyCharm, so remove everything and change the text, for example to "YouTube is one of the best platform to learn new skills." Now we will analyze this sentence, and it is very, very easy: just write print(blob.sentiment) and run it. Here is my output: the polarity is about 0.5, which means the sentence is positive, and the subjectivity is about 0.3, which means it leans more towards factual information than pure personal opinion. That is quite good. Now I want to print these values as percentages, which is also very easy. The sentiment property basically returns two things, the polarity and the subjectivity, so I can unpack them into two variables: p, s = blob.sentiment. How can I convert the polarity to a percentage? Just multiply the value by 100 and convert it to an int, for example p_part = int(p * 100), and do the same for the subjectivity, s_part = int(s * 100). Then print them: print("Polarity:", p_part, "%") and print("Subjectivity:", s_part, "%"). If I run it, everything is shown in percentages: my polarity is 56 percent while my subjectivity is 37 percent. So this is how you can perform sentiment analysis in TextBlob. I hope you have enjoyed this tutorial, and in the next one we will see something new. Until then, bye-bye.
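
A compact sketch of the sentiment code in this lesson; the sentence and variable names follow the transcript, and the exact numbers printed depend on TextBlob's lexicon.

    from textblob import TextBlob

    blob = TextBlob("YouTube is one of the best platform to learn new skills.")

    print(blob.sentiment)        # Sentiment(polarity=..., subjectivity=...)

    # Unpack the two values and show them as rounded percentages
    p, s = blob.sentiment
    p_part = int(p * 100)
    s_part = int(s * 100)
    print("Polarity:", p_part, "%")
    print("Subjectivity:", s_part, "%")
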
7. Words Inflection: Hello everyone. In this tutorial we will see what word inflection is and how to perform it in TextBlob. So let's begin the lecture. First of all, you need to understand what word inflection is. Inflection is a process of word formation in which characters are added to the base form of a word to express grammatical meaning. For example, if I have a word in its singular form and I want to convert it into its plural form, that is called inflection; and if I have a plural word and I want to convert it into its singular form, that is also called inflection. I hope you have understood what word inflection is, so now let's do some practical work. I am in PyCharm; previously we saw how to perform sentiment analysis in TextBlob, and now we will see how to perform word inflection. It is very simple. First of all I need a word. You can tokenize the sentence and then select any of the words by defining its index, so first we will go with that approach. For example, I want to convert the word "platform" into its plural form. First, tokenize the sentence: wd = blob.words saves the list in wd. Now we need to access that particular word: counting 0, 1, 2, 3, 4, 5, 6, it is at index number 6, so if I print wd[6] it prints "platform". Now I need to convert this word to its plural form, which is very simple: I create another variable, for example pl = wd[6].pluralize(). The pluralize function basically converts the singular form into the plural form, and when I print pl you will notice that we get the plural of "platform", which is "platforms". Awesome, you have done word inflection in TextBlob. Now, suppose I do not have a sentence and I need to perform the inflection operation on a single word, entered by the user in the terminal. That is very easy as well. First create the input: wd = input("Enter your word: ") saves the word in wd. To inflect a single word we need to move it into a particular class, and that class is Word, so import it: from textblob import Word. Then write w_result = Word(wd), and now I can perform my operation on this variable: print(w_result.pluralize()). Everything works fine: if I enter a word, for example "apple", then "apples" is the plural of that word. Now I want to convert a plural word into its singular form, which is just as simple: copy the same code, change the variable names, and instead of pluralize use singularize. If I run it, first I need to enter a plural word so that it can convert it to its singular form; for example, I enter "apples", and here is my singular, "apple". So what is your homework? Your homework is to check whether the user has entered a plural word or a singular word: if it is plural, convert it into its singular form, and if it is singular, convert it into its plural form. Try your best to do the homework, and I will see you in the next tutorial. Until then, bye-bye.
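
The inflection steps above, as a runnable sketch; the sentence, the index 6, and the prompt text are just examples taken from the lesson.

    from textblob import TextBlob, Word

    blob = TextBlob("YouTube is one of the best platform to learn new skills.")

    # Inflect a word selected from the tokenized sentence
    wd = blob.words
    print(wd[6])                 # 'platform'
    print(wd[6].pluralize())     # 'platforms'

    # Inflect a single word entered by the user
    word = Word(input("Enter your word: "))
    print(word.pluralize())      # e.g. 'apple'  -> 'apples'
    print(word.singularize())    # e.g. 'apples' -> 'apple'
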
8. Advanced Sentiment Analysis: Hello everyone. In this tutorial we will see how lemmatization works in TextBlob. So let's begin the lecture. First, before I jump into coding, it is very important to understand what lemmatization is, and in order to understand lemmatization you need to understand what a lemma is. According to the Cambridge dictionary, a lemma is a form of a word that appears as an entry in a dictionary and is used to represent all the other possible forms. For example, "build" is a word that represents "builds", "building", and "built", so "build" is basically the lemma, the entry point, for "builds", "building", and "built". I hope you have understood what a lemma is and what lemmatization is, so now let's jump into some coding. I am in PyCharm; previously we saw how to perform word inflection in TextBlob, and now we will see how lemmatization works. Remove everything; what I need now is to convert a word into its entry point, its lemma. I will again take the word as user input and then convert it: wd = input("Enter your word: "), and then w = Word(wd). Now I need to convert this into its lemma form, so I write print(w.lemmatize(...)), and inside the parentheses I pass the part of speech for which I want the lemma; in this case I will go with "v", which represents a verb. Now if I run it, I need to enter my word; for example, I will go with "building", and "build" is the lemma of "building". If I run it again and this time enter another inflected verb in its past form, it performs its operation and returns that word's lemma as well. So this is how lemmatization works in TextBlob, and this is how simple it is: just a single lemmatize function converts a whole word into its lemma form. I hope you have understood, and in the next tutorial we will see something else. Until then, bye-bye.
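
A tiny sketch of the lemmatization call; the prompt string is illustrative, and "v" tells TextBlob to treat the word as a verb.

    from textblob import Word

    wd = input("Enter your word: ")
    w = Word(wd)

    # Lemmatize as a verb: 'building' -> 'build', 'went' -> 'go', ...
    print(w.lemmatize("v"))
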
9. Spellcheck: Hello everyone. In this tutorial we will see how to check for spelling mistakes in a sentence using the TextBlob library. So let's begin the lecture. First of all, let's write a wrong sentence. Again, keep in mind that performing any task in TextBlob consists of two things: creating your blob, and then performing an operation on it. So first let's create a wrong sentence here; in this case I will go with something like "I havv goood speling". Now I need to perform the spelling-correction operation on this sentence, which is very simple: just write blob.correct(), save the result in a variable, for example sent, and then print that variable. If I run it, here is my corrected sentence: "I have good spelling". So this is how you can check for spelling mistakes in TextBlob. You can also get the list of suggested words and their confidence levels. Instead of correct, I write a new variable, lst = blob.spellcheck(), and print it. If I run this, we get an error: we need to call the spellcheck method on the particular word we want to check, not on the whole blob. For example, I want to check the third word and see its list of suggested words and its confidence; counting 0, 1, 2, it is at position 2, so I write blob.words[2].spellcheck(), save that in the lst variable, and print it. Now it works fine: "good" comes back with a confidence of 1.0, which is 100 percent. Now, for example, let's change that word to a worse misspelling, something like "graet", and check it again. This time here is my list of all the possible words and their confidence levels: one suggestion at around 0.5, meaning 50 percent, another at around 30 percent, another at 10 percent, and one at just 2 percent. And if I then run the correct operation, you will notice that it picks the suggestion whose confidence level is highest on the list. So this is how you can check the suggested words and their confidence levels. I hope you have enjoyed this tutorial; just play with these things. For example, convert the confidence value into a percentage, or build your own app like Grammarly, in which you check for spelling mistakes, noun phrases, sentiment, word inflection, and lemmatization; you can create your own Grammarly-style app as well. In the next tutorial we will see something new. Until then, bye-bye.
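
A short sketch of both spell-checking calls; the misspelled sentence is a stand-in, since the exact one typed in the video is hard to make out from the transcript.

    from textblob import TextBlob

    blob = TextBlob("I havv goood speling")

    # Correct the whole sentence
    print(blob.correct())                # 'I have good spelling'

    # Suggestions and confidence for a single word (index 2 -> 'goood')
    print(blob.words[2].spellcheck())    # e.g. [('good', 1.0)]
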
10. Words and Noun Phrase Frequency: Hello everyone. In this tutorial we will see how to check the frequency of a certain word in a sentence, and we will also see how to check the frequency of a noun phrase. So let's begin the lecture. Previously we saw how to check spelling mistakes in a sentence, and now we will see how frequently a given word is present in a sentence. There are two methods: the first is to use a dictionary, and the second is to use a method. First we will go with the dictionary. Remove everything, and in place of the old sentence I have pasted a longer one in which the word we want to check appears several times, with different capitalization. For the dictionary approach, just write blob.word_counts and pass the particular word in square brackets; this is basically a dictionary, and the word is the key. If I print it and run, you see that this certain word is present three times, so the answer is 3. The next method is to use the built-in count method: blob.words.count(word), and print that as well; it also returns 3. One more feature this method provides is a case-sensitive count: pass case_sensitive=True as a second argument, and now it returns just 2, because with that exact capitalization the word is present only two times. The first time I ran this I got an error because I had put the brackets in the wrong place; with the parentheses fixed it prints 2. So case_sensitive is basically a Boolean, and if I change the capitalization of one more occurrence, the case-sensitive count drops to 1. So this is how you can check how frequently a word is present in a sentence. Next, noun phrase frequency, which is just as simple: blob.noun_phrases.count(phrase), and print it. The first time I ran it I got 0, because the string I passed was not actually one of the blob's noun phrases, so first print blob.noun_phrases to see which noun phrases are there, and then count one of them. For example, if I repeat that particular noun phrase in the sentence a couple more times and count it again, it returns 3, because that noun phrase is now present three times in the sentence. So this is how you can check noun phrase frequency and word frequency. I hope you have enjoyed this tutorial, and in the next one we will see something new. Until then, bye-bye.
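
Since the exact sentence and word used in the video don't survive the transcript, here is an equivalent sketch with a made-up sentence showing the same three calls.

    from textblob import TextBlob

    blob = TextBlob("Python is fun. I like python because Python is simple. "
                    "Natural language processing in python is fun too.")

    # 1) dictionary-style count (keys are lowercased, so case-insensitive)
    print(blob.word_counts["python"])                        # 4

    # 2) method-style count, optionally case-sensitive
    print(blob.words.count("python"))                        # 4
    print(blob.words.count("python", case_sensitive=True))   # 2

    # Noun phrase frequency works the same way: print the phrases first,
    # then count one of the phrases that actually appears in that list
    print(blob.noun_phrases)
    print(blob.noun_phrases.count("natural language processing"))
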
11. N-grams: Hello everyone. In this tutorial we will see how to use the ngrams method in TextBlob. So let's begin the lecture. First of all, instead of jumping directly into the code, you need to understand what n-grams are. N-grams are one of the most important concepts in data science; they are very important and help us in many scenarios. Basically, the ngrams method lets us divide a sentence into groups of n words: it divides the whole sentence into lists, and each list contains the number of words we define. In real-world problems n-grams help us a lot; for example, according to research, 3-gram or 4-gram models help to filter spam emails, in many other cases 2-grams help in data science, and we use n-grams in probability as well. I will not go deeply into this, because I am assuming you are an absolute beginner and all of those details would only confuse you, but I will show you how to perform the n-gram operation in TextBlob so that you get a better understanding. First remove the previous code; I will perform the n-gram operation on this blob. Just write blob.ngrams(...), and now we need to define the value of n, which represents the number of words per gram. Make sure the value is an integer, not a float or anything else. For now I will go with 3, save the result in a variable, for example g = blob.ngrams(3), and print g. We get a main list, and inside that list we get sublists, like a multi-dimensional list. If I run it, here is my main list, and inside it are my sublists: list number one, list number two, list number three, and so on. The 3 basically represents our grams, and you will notice that it has divided the whole sentence into a list of lists, where each sublist holds three words, sliding over the sentence: the first three words, then the next window of three, and so on. If I replace the 3 with 2, you will notice that each sublist now has only two words. This is what helps us filter spam or work out probabilities. You can actually play with these lists: find the total length, print each list separately, or perform other operations on them. I hope you have understood what n-grams are. I know you still have many questions about them, but keep those questions for your future research and stay curious, because this concept really helps in real-world scenarios. I will make a course on machine learning and artificial intelligence in the future in which I will explain all of the basic concepts of machine learning, deep learning, neural networks, and artificial intelligence, and I will try my best to explain them in a very solid manner there. Until then, I hope you have understood everything, and I will see you in the next tutorial.
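
A minimal sketch of the ngrams call; the sentence is illustrative.

    from textblob import TextBlob

    blob = TextBlob("Now is better than never")

    print(blob.ngrams(3))
    # [WordList(['Now', 'is', 'better']),
    #  WordList(['is', 'better', 'than']),
    #  WordList(['better', 'than', 'never'])]

    print(blob.ngrams(2))   # same idea, two words per sublist
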
12. TextBlob acts like Python Strings: Hello everyone. In this tutorial we will see how a TextBlob behaves like a Python string. So let's begin the lecture. First of all, remove those lines, remove the old sentence, and write a simple sentence, for example "This is my simple sentence." This is my text blob, and now we will see how this blob acts like a Python string. It is very, very simple. For example, suppose I want to convert this whole sentence into capital letters: I just write blob.upper(), save the result in a variable, for example up, and print it. You will notice that this upper method converts my whole sentence into uppercase letters. Awesome. Now, what else can I do with this? I can actually find a particular word as well. For example, I want to find the position of "simple", so I write blob.find("simple") and, for now, just print it. You will notice that we get an integer value here, and let me tell you why: it says 11, which means the word "simple" starts at position 11. How did TextBlob calculate that? It counts the characters, including the spaces. At first I counted starting from one, but in Python everything starts from zero, so counting 0, 1, 2, and so on, including the spaces, you land on the "s" of "simple" at index 11; that is why we are getting 11. Make sure you count from zero, otherwise you will get the wrong answer. You can also compare two strings. For example, let's have some fun with a comparison: I have two blobs, a = TextBlob("apple") and b = TextBlob("banana"), and I want to compare them: if b is greater than a, I print b. The comparison works just like Python string comparison, which is alphabetical, so "banana" is greater than "apple" because "b" comes after "a", and if I run it, you will notice that it prints "banana" here. So this is how you can compare two strings in TextBlob as well; a TextBlob has Python string properties. This was a short tutorial on how TextBlob behaves like a Python string. I hope you have understood it, and in the next tutorial we will see something new. Until then, bye-bye.
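
The string-like behaviour from this lesson, as a short sketch; the sentence and variable names are illustrative.

    from textblob import TextBlob

    blob = TextBlob("This is my simple sentence.")

    print(blob.upper())          # THIS IS MY SIMPLE SENTENCE.
    print(blob.find("simple"))   # 11 - character index, counted from 0

    # Comparisons work like Python string comparisons (alphabetical order)
    a = TextBlob("apple")
    b = TextBlob("banana")
    if b > a:
        print(b)                 # banana
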
13. Start and End Indices: Hello everyone. In this tutorial we will see how to get the start and end indices of the sentences in a text blob. So let's begin the lecture. First remove everything and increase the length of the text: "This is my simple sentence. Beautiful is better than ugly. Simple is better than complex." So I have these three sentences, and I want to get the indices of all three: the start indices and the end indices. First of all I need to split the blob into sentences, which is really simple: just write blob.sentences, and save the list in a variable, for example sent. If you print that variable you will see that we get a list; if I print it, here is my list: "This is my simple sentence.", "Beautiful is better than ugly.", and "Simple is better than complex." That is awesome. Now I need to loop through each element of this list, so I write "for s in sent". Each sentence s has some other properties as well, such as s.start and s.end, so I save the start of the sentence in a variable st and the end in a variable ed, and then print them: print the sentence first, then "starts at" st, then "ends at" ed. Now let's run it: the first sentence starts at 0 and ends at index 27, the next one starts at 28 and ends a bit further along, and the last one starts at 59, each with its own end index. So this is how you can find the start and end indices of the sentences using the TextBlob library. I hope you have enjoyed this tutorial, and in the next one we will see something new. Until then, bye-bye.
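
A small sketch of the loop described here; the three sentences follow the lesson, and the printed indices depend on the exact text.

    from textblob import TextBlob

    blob = TextBlob("This is my simple sentence. "
                    "Beautiful is better than ugly. "
                    "Simple is better than complex.")

    for s in blob.sentences:
        st = s.start   # index where this sentence starts in the original text
        ed = s.end     # index where it ends
        print(s, "starts at", st, "ends at", ed)
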
14. Text Classification System: Hello everyone. Now we are going to start the development of our own text classification system. You must be thinking: why do we need our own text classification system when we have been doing text classification since tutorial number one? That is a very valid question, and I really appreciate it if it is in your mind. We will develop this system ourselves for two reasons. First, developing this text classification system will really help you understand some basic concepts of machine learning within natural language processing, and it will help you in further machine learning, because you will learn what a model is, how to train a model, and how to test a model. So you will learn many basic machine-learning concepts during the development of this text classification system. The next question is how we will develop it: we will use the textblob.classifiers module, and it will really help us build our own text classification system. Now you may be wondering what a model is, since I used that word at the beginning of the lecture. A model is basically a file that helps us recognize certain patterns after it has been trained on data. In simple words, a model is just like a brain: first we train the brain, and then we test the brain. At level zero, our model is just like the brain of a child: it knows nothing about anything, nothing about how to perform operations, how to check for certain conditions, or anything else. So first we train the model by providing data along with the answers; for example, we will provide a sentence and the answer that the sentence is positive, then provide another sentence and say that it is negative, just like the sentiment analysis we did before; in fact we will develop our own sentiment analysis system. The more data we provide to the system, the more accurate it becomes. Then we test it: we ask the user to enter any sentence, and the model decides whether that sentence is positive or negative. So this is how a model works: it is like an artificial brain, and it actually makes the decision itself; we do not need to explicitly tell the system that this is positive and that is negative, the decisions are made by the model. I hope you have understood what a model is. The next question is: what will you learn? You will basically learn how to load data from files, how to create a model, how to check the accuracy of the model, and how to update a model. I hope you have understood the basic concepts behind our text classification system, and from the next tutorial we will actually start the development. I will see you in the next tutorial. Until then, bye-bye.
15. NaiveBayesClassifier: Hello everyone. In this tutorial we will start the development of our own text classification system: we will see how to load data, and then we will see how to create a classifier. So let's begin the lecture. First of all we need to create a classifier. In this case I will use a built-in TextBlob classifier, and that is the NaiveBayesClassifier. Now you are thinking: what is a Naive Bayes classifier? It is basically a probabilistic machine learning model that is based on Bayes' theorem. If you already know Bayes' theorem, well and good, but if you don't, I will not explain it here, because I don't want to confuse you; I am assuming you are an absolute beginner with no knowledge of machine learning, and that is why I will not go into the theorems. Instead, I will make a separate course on all of these basic machine-learning concepts and explain the theorems there, so I will go directly to the practical work. First of all we need to import our classifier, so I write: from textblob.classifiers import NaiveBayesClassifier. Now we have imported the NaiveBayesClassifier and we will create a model; for example, I will call it model = NaiveBayesClassifier(...), and inside this classifier we need to pass a dataset, because we need to train the model so that it is able to make correct, accurate decisions. So now we need to load our data; let me add a comment here, "creating our classifier", and another one, "loading data". We need two datasets: the first will be our training dataset, which will train our model, and then we need a second dataset, our testing dataset, and with the help of that testing dataset we will test our model. So I write train = [...]; you can call the variable anything, but you must make sure your data is in the form of a list. Inside this list I pass my sentences together with their answers, for example ("I love sandwiches", "pos") as a positive example, and then a negative one, ("I do not like burgers", "neg"); put each sentence and its answer together in round brackets. So this is how you create a training dataset. Now we need to create our testing dataset: write test = [...], and your testing data should be a list as well, with items such as ("I like birds", "pos") and ("I hate lollipops", "neg"). Now you are thinking: why am I writing "pos" and "neg" in the testing list too, why do I need answers there? Because with the help of this testing list we will check the accuracy of our model: when we want to test the accuracy, we pass this test list, the model makes its decisions on these sentences, compares its decisions with the answers we have written here, and tells us whether it is, say, 70 percent accurate, 90 percent accurate, or 100 percent correct. I have written only a couple of sentences here, but as I mentioned in the previous lecture, the more data you provide to the model, the more it learns and the more accurate it becomes, so I have copied in slightly larger training and testing datasets and pasted them here. Now we pass our training dataset to the model: NaiveBayesClassifier(train), and the model will train whenever we run it. If I run it, everything processes and you will see nothing, because we are printing nothing. If I add print(model), let's see what we get: it says that our classifier is trained on a number of instances, and counting them, there are ten instances. Awesome, you have trained your model. In the next tutorial we will see something new. Until then, bye-bye.
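
A condensed sketch of the training setup from this lesson. The instructor's full ten-item dataset isn't recoverable from the transcript, so the lists below only mirror the (sentence, label) pattern with the examples that are mentioned.

    from textblob.classifiers import NaiveBayesClassifier

    # Training data: (sentence, label) pairs
    train = [
        ("I love sandwiches", "pos"),
        ("This is an amazing library", "pos"),
        ("I do not like burgers", "neg"),
        ("I hate lollipops", "neg"),
    ]

    # Testing data: also labelled, so accuracy can be measured later
    test = [
        ("I like birds", "pos"),
        ("I do not like bugs", "neg"),
    ]

    model = NaiveBayesClassifier(train)   # training happens here
    print(model)   # e.g. <NaiveBayesClassifier trained on 4 instances>
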
Okay, so it is saying that our classifier has been trained on n instances, and these are my instances: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, so ten instances. Awesome, you have trained your model. In the next tutorial we will see something new. Till that point, bye-bye. 16. Text Classification: Hello everyone. So in this tutorial, we will see how to classify text using our own classifier. So let's begin the lecture. Previously we created our model using this NaiveBayesClassifier and passed it our dataset, and now we are going to see how our model behaves, whether it is accurate or not. It's very simple. For example, I will pass a sentence to my model and store the output in a result variable: result = model.classify(...). We need to classify some text, and here I will write my sentence, for example "This is an amazing library", and we want to check whether this sentence is positive or negative. So just print the result. We know very well that this sentence is positive, so let's see whether our model is accurate or not, and yes, of course, it says positive. This means that our model is trained quite well. Okay, so let's try another sentence, or you can actually get input from the user as well: sent = input("Enter your sentence: "). Instead of passing a fixed sentence, I pass my sent variable, and now let's run it. Enter your sentence, for example "I do not like bugs"; it should print negative because it is a negative sentence. Let's run it, and yes, here is my answer: negative. This means our model is trained quite well, and the more data you provide, the more accurate it will become. You can also check the probability, or the confidence level. For example, let's write prob = model.prob_classify(sent), because I want to check the probability for my sentence, and now I need to print it: print(prob.prob(...)). What is this prob.prob? Basically, prob_classify returns a probability distribution over the labels: the positive probability and the negative probability. So this prob variable now holds the positive probability and the negative probability. To read the positive probability, we pass "pos" here, so it will print our positive probability, and if I pass "neg" here, it will print our negative probability. So just remember: call prob_classify and pass your sentence, then through that variable you can call this prob function, and inside that function just pass whether you want the positive probability or the negative probability. Let's try both of them: print(prob.prob("pos")) and then print(prob.prob("neg")). These values are between 0 and 1. So let's run it.
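For reference, here is a condensed sketch of the classification calls just described, assuming the model trained above and the "pos"/"neg" labels used in the training data.

    # classify a fixed sentence
    result = model.classify("This is an amazing library")
    print(result)                       # expected: 'pos'

    # classify whatever the user types in
    sent = input("Enter your sentence: ")
    print(model.classify(sent))

    # probability distribution over the labels
    prob = model.prob_classify(sent)
    print(prob.prob("pos"))             # a float between 0 and 1
    print(prob.prob("neg"))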
Just type your sentence, for example "This is an amazing book", and press enter. Okay, so this is my positive probability and this is my negative probability: 0.9 is my positive probability, which shows that this sentence is positive, and 0.09 is my negative probability. And yes, of course it is positive, because our negative probability is much, much smaller than our positive probability. So this is how you can check the probability and classify your sentences. Awesome, you have made your own sentiment analysis system. You can convert these probabilities to percentages as well: for example, I will assign a variable to this line, then multiply the value by 100, and to avoid these long numbers I will convert the value to int. We have done all of this previously, so you can use the same logic to convert this value to a percentage and print the probability as a percentage for better output. Okay, so I hope you have enjoyed this tutorial, and in the next tutorial we will see something new. Till that point, bye-bye. 17. TextBlob Classification: Hello everyone. So in this tutorial, we will see how to classify the sentences which we have passed to our TextBlob class. So let's begin the lecture. First of all, let's create our TextBlob: blob = TextBlob(...), and I need to import it, so from textblob import TextBlob. This is my TextBlob, and now let's pass some text to this class, for example "I love bananas. I do not like apples." So this is my TextBlob, and now I need to use my model to classify this text. Just write here, after the text, classifier= and then the name of your classifier; in this case my classifier is this model, so I pass model here. Now I need to classify this text: just write blob.classify(), and I will print it. What are you expecting it to return, positive or negative? Because my first sentence is positive but my second sentence is negative. Before I run it, let's comment out the earlier lines so that we only see this output. Okay, and it says this text is negative. Why is it negative? This is because the negativity of the text is higher than its positivity, and that's why we are getting negative. One of the major advantages of TextBlob is that we can classify our sentences separately. If I pass my whole text, or even a whole paragraph, directly to the model's classify function, it will classify the whole paragraph, not the separate sentences of the paragraph. But using TextBlob, we can loop through all the sentences and then classify each of these sentences separately.
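Here is a minimal sketch of classifying a whole blob with the trained model; the text is the same two-sentence example used in the walkthrough.

    from textblob import TextBlob

    text = "I love bananas. I do not like apples."
    blob = TextBlob(text, classifier=model)

    # classifies the text as a whole, not sentence by sentence
    print(blob.classify())              # e.g. 'neg'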
So let's loop through these sentences. I will write here: for sen in blob.sentences. I have looped through the sentences, and first of all I will print the sentence, and then I will tell the user whether that sentence is positive or negative, so just write sen.classify(). If I run it, "I love bananas" is positive, which is cool, and "I do not like apples" is negative. So this is one of the major advantages of this approach: using this TextBlob class, passing our sentences or even a paragraph, looping through the different sentences, and classifying each sentence separately. I hope you have enjoyed this tutorial, and in the next tutorial we will see something new. Till then, bye-bye. 18. Checking Model Accuracy and Model Updating: Hello everyone. So in this tutorial, we are going to see how to check the accuracy of our model and then how to update the model. So let's begin the lecture. First of all, let's remove these things. Now, I hope you are not wondering why we created this test dataset; let me answer that question. We will use this dataset to check the accuracy of our model. So how can we check the accuracy? It's very, very simple: you just need to write model.accuracy(...), and inside this accuracy call we need to pass our test dataset, so just pass test. It will return a value, not an integer value but a float value, and we will store it inside a variable and then print it. But before running the application, let's convert this value to a percentage so that we get a better output: multiply it by 100, convert that value to int, and then print the accuracy. Let's check the accuracy. Okay, so our model is 83 percent accurate. This is very cool, very awesome. So this is how you can check the accuracy: just use this accuracy function, pass the testing dataset, and it will tell you how accurate your model is. And now let's update our model. Just write a comment here, updating, and run the model. Updating our model is very simple: just write model.update(...), and inside this update pass our new dataset. I have copied a new dataset, new_data, which is also a list of sentences and answers, and I pass this new_data here. Now I will check the accuracy of my model again: just write model.accuracy(test), copy the two lines for converting to a percentage as well, and pass the test dataset. I hope this will increase the accuracy of my model, and yes: previously our model was 83 percent accurate, and now the accuracy of our model is 100 percent. Awesome, you have made a really cool text classification system. So I hope you have understood what a model is, how to train a model, how to update a model, and how to check its accuracy.
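To recap these two lessons in code, here is a minimal sketch. The names blob, model, test, and new_data follow the walkthrough; new_data stands for whatever extra labelled (sentence, label) tuples you add.

    # classify each sentence of the blob separately
    for sentence in blob.sentences:
        print(sentence, "->", sentence.classify())

    # accuracy against the labelled testing data (a float between 0 and 1)
    print(int(model.accuracy(test) * 100), "%")

    # feed the model more labelled data, then measure again
    model.update(new_data)
    print(int(model.accuracy(test) * 100), "%")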
Okay, I hope you have enjoyed this tutorial, and in the next tutorial we will see something new. Till that point, bye-bye. 19. Sentiment Analyzer: Hello everyone. So in this tutorial, we will talk about the sentiment analyzer. Now, I know very well that you are thinking: we have already done sentiment analysis, so why do we need a sentiment analyzer? I will answer this question in this tutorial, so let's begin the lecture. You have a very valid question, and this is basically an advanced tutorial. So let me tell you the secret: the textblob.sentiments module provides us with two different types of sentiment analyzer. One is the PatternAnalyzer, which is based on the pattern library, a natural language processing library for Python. The other analyzer in this textblob.sentiments module is the NaiveBayesAnalyzer, which is an NLTK classifier, and the amazing thing about this analyzer is that this classifier, this model, is trained on a corpus of movie reviews, which is an awesome thing about the NaiveBayesAnalyzer. The default analyzer for sentiment analysis, the one we have used previously, is the PatternAnalyzer, so our earlier sentiment analysis was done with the PatternAnalyzer. In this tutorial we will use the NaiveBayesAnalyzer, which is based on the NLTK library. Let's start the coding. First of all, I need to import my TextBlob class, so I will just write here: from textblob import TextBlob. Now we need our sentiment analyzer, and that is: from textblob.sentiments import NaiveBayesAnalyzer. This is again based on Bayes' theorem, which, as you know very well, is a probabilistic theorem. So now I have my TextBlob class and my NaiveBayesAnalyzer, and let's create a blob: blob = TextBlob(...), and here I will pass my sentence, like "This is an amazing book". Now I will override my analyzer. The default analyzer is the PatternAnalyzer, but I want to override it, so I explicitly define my analyzer: just write here analyzer=NaiveBayesAnalyzer() and put the round brackets. Now it's very simple, we have talked about this before: just write blob.sentiment and it will perform the sentiment analysis; just print it. You will notice that the PatternAnalyzer returns two things, but this analyzer basically returns three things. So if I run it, this analyzer returns three things: the classification, the p_pos, and the p_neg. The classification tells us whether the sentence is positive or negative, and this sentence is positive. Then this is the positive score and this is the negative score; actually these are not percentages, they are float values between 0 and 1, and you can convert them. But say, for example, that I do not want these values and I just want to print the classification value. It's very simple.
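A minimal sketch of the analyzer override being described. It assumes the TextBlob/NLTK corpora (including the movie reviews corpus) are already downloaded; the NaiveBayesAnalyzer trains itself on first use, so the first call can take a moment.

    from textblob import TextBlob
    from textblob.sentiments import NaiveBayesAnalyzer

    blob = TextBlob("This is an amazing book", analyzer=NaiveBayesAnalyzer())

    # returns a named tuple: Sentiment(classification='pos', p_pos=..., p_neg=...)
    print(blob.sentiment)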
First of all, just store the value: result = blob.sentiment, and then print result.classification. Make sure that this attribute name matches exactly; there is one more thing I will tell you later, but for now just type this name as it is, or copy it and paste it here. If I run it, it will now print just the value of classification. Here is my output, and that is positive: this sentence is positive. You can print the p_pos value as well, and the p_neg value too. But there is another way of printing all of these values. You have noticed that this call returns three things: the classification, p_pos, and p_neg. So instead of storing all of these values in a single variable, I will create three different variables: result, positive, negative = blob.sentiment. Here result will be my classification variable, for p_pos I will go with the positive variable, and for p_neg I will go with the negative variable. Now, instead of writing result.classification, you just need to write result. So I will print the result, then on a new line I will print "Positive: " with my positive value, and then on the next line "Negative: " with my negative value. You can see that it prints three values here. So this is my output: my result is positive, my positive value is 0.6 and my negative value is 0.37. This is awesome. So I hope you have understood the sentiment analyzer. Just keep in mind that there are two types of analyzer, the PatternAnalyzer and the NaiveBayesAnalyzer; the PatternAnalyzer is the default, and you can override it just by defining an analyzer here, and everything else you already know very well. I hope you have enjoyed this tutorial, and in the next tutorial we will see something new. Till that point, bye-bye. 20. Tokenizer: Hello everyone. So in this tutorial, we will see how we can use NLTK tokenizers in TextBlob. So let's begin the lecture. We have already done a tutorial on tokenization, where we were using the .words property, but in this tutorial we will see how to use NLTK tokenizers in TextBlob. First of all, let's import my TextBlob class: from textblob import TextBlob. Now I need to import tokenizers, different types of tokenizers, from the NLTK library, so just write here: from nltk.tokenize import TabTokenizer. I will use this TabTokenizer to separate, or slice, my text. So let's create a blob: blob = TextBlob(...), and I will pass my text here, for example "This is a sentence\twhich is very simple". This \t represents a tab, so the text is "This is a sentence", then a tab, then "which is very simple".
And now we need to override the default tokenizer, so just write tokenizer= and then define our tokenizer; in this case it is this TabTokenizer. Just write TabTokenizer, and since TabTokenizer is a class, make sure you put round brackets here: TabTokenizer(). Now we have overridden the tokenizer, so let's tokenize the text. I will just print it: print(blob.tokenize()). If I run it, you can see that we get a single list with only two values: one will be "This is a sentence" and the other will be "which is very simple", because we are using the TabTokenizer and it slices my text exactly at the tab. So yes, we have a list and it has only two values: "This is a sentence", and then my next value. This is how you can use NLTK tokenizers for tokenizing text in TextBlob. There are a bunch of other tokenizers; for example, if I start typing the import, you can see this TabTokenizer, a BlanklineTokenizer, a TweetTokenizer, a ToktokTokenizer, a TextTilingTokenizer, and so on, and we use these different types of tokenizers in different scenarios. For example, let's go with the BlanklineTokenizer: just remove the TabTokenizer, write BlanklineTokenizer here, and put the round brackets. And instead of \t I will write \n, and in fact a second \n, because the first \n moves the rest of my text to a new line and the second \n creates the blank line. So make sure you put two \n characters here, and then you will get the required output: yes, "This is a sentence" and "which is very simple". If I remove one \n, you will notice that it returns a list with only one value, and that value will be the whole text. This is because we are using the BlanklineTokenizer and the text does not contain any blank line. So make sure you have put \n\n here; and, for example, if I put \n\n in a couple more places as well, it will now return 1, 2, 3, 4, four values. So if I run it, okay, these are the pieces. So this is how you can use NLTK tokenizers in TextBlob. I hope you have enjoyed this tutorial, and in the next tutorial we will see something new. Till then, bye-bye. 21. Noun Phrase Extractor: Hello everyone. So in this tutorial, we will talk about noun phrase extraction again. Yes, again. So let's begin the lecture. We have talked about noun phrase extraction previously, but in this tutorial we will go a bit further. Basically, the textblob.np_extractors module provides us with two different classes for noun phrase extraction. One is the FastNPExtractor, which is the default extractor for noun phrases, and the other is the ConllExtractor; I hope I am pronouncing it correctly, it is spelled C-o-n-l-l. So we will override the default extractor and use this ConllExtractor.
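Here is a condensed sketch of the two tokenizer examples above. Both tokenizer classes come from NLTK (note the casing of BlanklineTokenizer), and no extra corpora are needed for them.

    from textblob import TextBlob
    from nltk.tokenize import TabTokenizer, BlanklineTokenizer

    # split on tab characters
    blob = TextBlob("This is a sentence\twhich is very simple",
                    tokenizer=TabTokenizer())
    print(blob.tokenize())      # two items: 'This is a sentence' and 'which is very simple'

    # split on blank lines (a blank line needs two newlines in a row)
    blob2 = TextBlob("This is a sentence\n\nwhich is very simple",
                     tokenizer=BlanklineTokenizer())
    print(blob2.tokenize())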
Let's remove everything, okay, just remove it, and write here from textblob import TextBlob to import the TextBlob class. Now we need to import our noun phrase extractor, so just write: from textblob.np_extractors import ConllExtractor. We will use this extractor to extract our noun phrases. So create a blob here, blob = TextBlob(...), and write the text, something like "This is a noun phrase sentence. I like Python very much." Now let's override the default noun phrase extractor: just write np_extractor=ConllExtractor(), and put the round brackets, because this is a class; make sure you put the brackets, otherwise it will throw an error. Now we need to print the noun phrases: blob.noun_phrases. If I run it, "noun phrase sentence" is the only noun phrase in the first sentence, and since I have written Python in the second sentence, Python is also a noun phrase, so it prints python as well. So yes: python is a noun phrase, and "noun phrase sentence" is another noun phrase. This is how you can override the default noun phrase extractor, just by defining np_extractor and then your noun phrase extractor. I hope you have enjoyed this tutorial, and in the next tutorial we will see something new. Till then, bye-bye. 22. POS Tagger: Hi everyone. I hope that everyone watching this tutorial is doing fine, and in this tutorial we will talk about part-of-speech tagging and how to override the default part-of-speech tagger. So let's begin the lecture. First of all, this is my sentence, and TextBlob actually provides two different types of part-of-speech tagger: one is based on the pattern library and the other one is based on the NLTK library. The part-of-speech tagger which is based on the pattern library is called the PatternTagger, and this is the default tagger; the one which is based on the NLTK library is called the NLTKTagger, and this NLTKTagger requires numpy, so we will install numpy. It's very simple: open up the terminal and just type pip install numpy, and if you already have numpy, then you do not need to install it. So the PatternTagger is the default tagger, and we will override it using the NLTKTagger. Just write here: from textblob.taggers import NLTKTagger. And instead of np_extractor, now I will write pos_tagger; POS here stands for parts of speech, it is not the abbreviation of "position", it is actually the abbreviation of parts of speech. So pos_tagger=NLTKTagger(), and put the round brackets here. Now I will write blob.tags, and if I call it with brackets we get an error saying that a list object is not callable; this basically returns a list, so do not put round brackets here. So this is my list: "Python" is tagged as a noun, and "This" is a DT.
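A condensed sketch of the two overrides covered in this lesson and the next, the ConllExtractor and the NLTKTagger. The sample text is a placeholder, and the sketch assumes numpy and the TextBlob/NLTK corpora (such as conll2000 and the NLTK tagger data) are installed.

    from textblob import TextBlob
    from textblob.np_extractors import ConllExtractor
    from textblob.taggers import NLTKTagger

    text = "This is a noun phrase sentence. I like Python very much."

    # noun phrases with the CoNLL-trained extractor instead of the default FastNPExtractor
    blob = TextBlob(text, np_extractor=ConllExtractor())
    print(blob.noun_phrases)

    # part-of-speech tags with the NLTK tagger instead of the default PatternTagger
    blob2 = TextBlob(text, pos_tagger=NLTKTagger())
    print(blob2.tags)           # tags is a property, so no round brackets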
And this is how you can actually override the default tagger using this NLTKTagger. I hope you have enjoyed this tutorial, and in the next tutorial we will see something new; it should be an exciting thing, and I think the next lecture will be the last tutorial of this course. I think so. So I will see you in that tutorial. Till that point, bye-bye. 23. Blobber: Hello everyone. So in this tutorial we will talk about the Blobber. So let's begin the lecture. Before I talk about the Blobber, or before you understand the concept of a Blobber, it is very important to do some practical work. For example, I have this pos_tagger here, and I have added another blob, which I will rename to blob2 = TextBlob("This is a simple sentence."). Then, to give it the same tagger, I need to define the tagger here again, so I copy it and paste it here. Defining the properties for different blobs, for different TextBlobs, is quite a difficult task. In many scenarios you will have your own custom POS tagger, your own custom sentiment analyzer, and so many TextBlobs, so it becomes very difficult to define each of these properties separately on every TextBlob. What we want is a class in which we define our properties once, and then we use that class instead of TextBlob, and for this purpose we use the Blobber. It's very, very easy: just write here from textblob import Blobber. This Blobber allows us to define these properties globally, and whenever we use the variable that we assign to this Blobber, it will automatically apply those settings to all the blobs we create. What does that mean? It means that, for example, I will write custom_tb = Blobber(...); this custom_tb will be our Blobber, and inside this class I will define the properties. For example, I just cut the pos_tagger and paste it here, and I also have my analyzer: analyzer=, so let's import an analyzer, from textblob.sentiments import NaiveBayesAnalyzer, and write analyzer=NaiveBayesAnalyzer(). Now I need to apply these properties to all my TextBlobs, so instead of using TextBlob, I will use my custom_tb. That means we just remove TextBlob and write custom_tb here, and inside this custom_tb we only need to pass our text, not the properties, because we have defined all of these properties globally. Now, how can we check that blob1 and blob2 have the same tagger? We need to confirm it, so that you will believe me when I say that these blobs now have the same properties. It's very simple: I will print blob1.pos_tagger is blob2.pos_tagger. If both of these POS taggers, for both of these blobs, are the same, then this will print True.
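Before running it, here is a minimal sketch of the Blobber setup being described; the two sentences are placeholders, and the sketch assumes the same NLTKTagger and NaiveBayesAnalyzer imports as before.

    from textblob import Blobber
    from textblob.sentiments import NaiveBayesAnalyzer
    from textblob.taggers import NLTKTagger

    # define the shared settings once
    custom_tb = Blobber(pos_tagger=NLTKTagger(), analyzer=NaiveBayesAnalyzer())

    # every blob created through custom_tb picks up those settings
    blob1 = custom_tb("This is a simple sentence.")
    blob2 = custom_tb("This is another simple sentence.")

    print(blob1.pos_tagger is blob2.pos_tagger)   # True: the tagger object is shared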
Otherwise it will print False, and that would make my claim false, the claim that these two blobs now share the same tagger. So let's try it. Okay, we got True, and this means that blob1 and blob2 have the same POS tagger. And if I write blob1.tags, it will print the tags of the sentence with the help of this NLTK tagger. And if I check the analyzer, it has overridden the default analyzer and it will use this NaiveBayesAnalyzer; just print the analyzer here, and this is my output, showing that we are currently using the NaiveBayesAnalyzer. So this is another proof. This is how you can define these properties globally and then use them throughout your whole program. I hope you have understood the concept of the Blobber, which really helps in real-world problems: when you are dealing with tons of data, this Blobber really helps. So I hope you have enjoyed this tutorial, and in the next tutorial we will see how to detect different languages using TextBlob. I will see you in that tutorial. Till that point, bye-bye. 24. Language Detection and Translation: Hello everyone. So in this tutorial, we will see how we can use TextBlob for translation and language detection. So let's begin the lecture. First of all, let's create our TextBlob with some other language: from textblob import TextBlob, and blob = TextBlob(...). Inside this blob I want to write "Hello everyone", or "Hello everybody", in something other than English, and I will use Google Translate because I don't know other languages. For example, I have written the English here and it has been converted from English into French; you can pick any language. So I copied that line and pasted it here, and let's see whether this TextBlob works correctly or not. First of all, I want to detect the language of the sentence, so I will write print(blob.detect_language()), and let's see whether it's correct or not. And yes: "fr", which means French, so it is a sentence in the French language. Now let's convert this sentence to English. First, let's comment out the detection line and write here print(blob.translate(...)). I want to translate it from a given language, and from_lang is an optional argument, so without passing it, TextBlob will automatically detect what language the sentence belongs to. So from_lang="fr", and I want to convert it to English, so to="en". If I run it, you can see that it prints "hello everyone", so "hello everyone" is the English translation of this line; I don't know how to pronounce the French. If I remove this from_lang argument, it will still work fine, because it will automatically detect the language and convert it. So it's awesome: hello everyone. This is how you can use TextBlob for language detection and language translation. I hope you have enjoyed this whole course, and yes, I will see you in the next course, in which I will bring some new things, some easy things. So till that point, bye-bye.
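For reference, a minimal sketch of the two calls used in this lesson. The French text is a placeholder, and both methods call Google's translation endpoint; they have been deprecated in newer TextBlob releases (the project points users to the official Google Translate API instead), so depending on your version they may not be available.

    from textblob import TextBlob

    blob = TextBlob("Bonjour tout le monde")   # placeholder French text

    print(blob.detect_language())              # e.g. 'fr'
    print(blob.translate(to="en"))             # from_lang is optional and auto-detected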
25. HTTP Error: Hi everyone. So in this tutorial, we will solve a specific problem. Recently, one of my students sent me a direct message and asked for the solution to this problem, and let me explain what he is doing: he is basically creating a blob and then detecting the language, but he is getting this error. If you are getting the same error, then I think you should watch this tutorial, because in this tutorial we will solve this problem, and it is quite a serious problem. First of all, to solve this problem, you need to understand how TextBlob detects languages. Basically, TextBlob uses Google's API to detect the language, and you know very well that Google has a very large API system, huge servers, and everything, and Google constantly updates its API system, its servers, and everything else. Recently, Google updated its API system, and that's why you are actually getting this error. So how can we solve this problem? The solution is very simple: you just need to make some changes to the URL inside TextBlob's translate module, which is basically a reference to Google's API, and then you will get your required results. First of all, open up PyCharm; I have this simple program, so let's run it, and it throws an error, and here is my error: a 404 Not Found error, the same error. Now let's solve this problem. First, open the directory where you have saved your project files and your virtual environment files. Go to your virtual environment folder; in this case this is project_1 and this is the virtual environment folder, so double-click on it, and inside this folder go to Lib, then click on site-packages. Inside site-packages you will see all of the packages you have installed. Go to the textblob folder, double-click on it, and here you will find all the TextBlob files. The file we need is this translate.py file, and I need to make some minor changes in it. Double-click on translate.py and it will open in your PyCharm. We need to change this URL; this URL refers to the Google API which actually does the translation. Just comment it out, then write url = and paste the code which I will give in the description down below; you can copy that code and just paste it here. Now everything will work fine. Come back to the program and run it, and yes, everything is working fine: it detects French and prints the translation, everything is very good. Let's run it again, and yes, everything is still working fine. So this is how you can solve this issue, just by making some minor changes in this translate.py file. I hope you have enjoyed this tutorial, and if you have any more issues regarding TextBlob, then feel free to ask me and I will definitely answer your questions. Okay, so bye-bye. 26. Class Project: Hi everyone. I hope that you have enjoyed this series, and now let's talk about the class project. You need to create your own model, and then you need to create your own dataset to train that model.
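If you prefer not to click through the virtual environment folders by hand, here is a small helper for locating the installed TextBlob package (and therefore translate.py) in the active environment. The replacement URL itself is not reproduced here; use the one provided in the class description.

    import os
    import textblob

    # print the folder that contains TextBlob's source files, including translate.py
    print(os.path.dirname(textblob.__file__))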
Then you need to check the accuracy of that model by providing your testing dataset, and after that you need to try your model on some real-world problems, like checking movie reviews, whether those reviews are positive or negative, or checking Amazon comments, whether those comments are positive or negative, et cetera. Now let's talk about the steps you need to follow to accomplish this project. First of all, you need to create a dataset for your model. In the second step, you need to split that dataset into two portions. Third, you need to train your model on the first portion, and fourth, you need to check the accuracy of your model on the second portion. Then, in the fifth step, you need to try your model on some real-world data. These are all the steps you need to follow, and you will easily accomplish this project. I will see you in the next tutorial series. Till then, bye-bye.
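As a starting point for the project, here is a minimal skeleton following the same approach as the course code. The reviews, labels, and split ratio are placeholders; replace them with your own dataset.

    import random
    from textblob.classifiers import NaiveBayesClassifier

    # step 1: build a labelled dataset of (sentence, "pos"/"neg") tuples
    dataset = [
        ("This movie was fantastic", "pos"),
        ("Worst purchase I have ever made", "neg"),
        # ... many more labelled examples
    ]

    # step 2: shuffle, then split into a training and a testing portion
    random.shuffle(dataset)
    split = int(len(dataset) * 0.8)
    train, test = dataset[:split], dataset[split:]

    # steps 3 and 4: train the model, then measure accuracy on the held-out portion
    model = NaiveBayesClassifier(train)
    print("accuracy:", int(model.accuracy(test) * 100), "%")

    # step 5: try the model on real-world text such as a movie review or a comment
    print(model.classify("The plot was dull and the acting was worse"))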