Python Testing with Green | Nathan Stocks | Skillshare
46 Lessons (3h 26m)
    • 1. Green00d Promo90s (1:56)
    • 2. Green02 Community & Help (2:05)
    • 3. Green03 Your First Test (5:25)
    • 4. Green04 File Organization (7:52)
    • 5. Green05 Docstring (3:02)
    • 6. Green06 assertTrue assertFalse (1:52)
    • 7. Green07 Fail (4:04)
    • 8. Green08 Errors (1:53)
    • 9. Green09 Skipped (6:47)
    • 10. Green10 expectedFailure unexpectedSuccess (2:39)
    • 11. Green11 Fundamentals Review (11:24)
    • 12. GreenA01 Intermediate Section Overview (1:17)
    • 13. GreenA02 Green Target Specification (5:21)
    • 14. GreenA03 Coverage (11:11)
    • 15. GreenA04 Meaningful Tests Introduction (1:06)
    • 16. GreenA04a Meaningful Tests Design Tool (3:33)
    • 17. GreenA04b Readable (4:50)
    • 18. GreenA04c Meaningful Tests Test One Outcome (4:42)
    • 19. GreenA04d Meaningful Tests Test Boundaries (4:19)
    • 20. GreenA04e Meaningful Tests Test Results (4:09)
    • 21. GreenA04f Meaningful Tests Isolate the Unit (1:37)
    • 22. GreenA04g Meaningful Tests Identify the Reason (5:51)
    • 23. GreenA05 Unit Test Organization (4:13)
    • 24. GreenA06 Fixtures (13:27)
    • 25. GreenA07 assertEqual assertNotEqual (6:26)
    • 26. GreenA08 assertGreater assertGreaterEqual assertLess assertLessEqual (1:25)
    • 27. GreenA09 assertIs assertIsNot assertIsNone assertIsNotNone (4:38)
    • 28. GreenA10 assertIn assertNotIn (1:19)
    • 29. GreenA11 assertIsInstance assertNotIsInstance (4:01)
    • 30. GreenA12 assertRaises assertRaisesRegexp (5:20)
    • 31. GreenA13 assertAlmostEqual assertAlmostNotEqual (3:54)
    • 32. GreenA14 assertRegexpMatches assertNotRegexpMatches (2:16)
    • 33. GreenA15 assertItemsEqual assertCountEqual (2:21)
    • 34. GreenA16 Skipping Tests (3:43)
    • 35. GreenB01 Advanced Section Overview (0:27)
    • 36. GreenB02 Mock Overview (5:21)
    • 37. GreenB03 MagicMock Part 1 (8:19)
    • 38. GreenB04 MagicMock Part 2 (6:53)
    • 39. GreenB05 Mock Patchers (9:35)
    • 40. GreenB07 Green Output Options (6:10)
    • 41. GreenB08 Green Initializer and Finalizer (4:51)
    • 42. GreenB09 Green Number of Processes (4:10)
    • 43. GreenB10 Green Config Files (3:03)
    • 44. GreenB11 Green bash and zsh tab completion (2:26)
    • 45. GreenB12 Green Fail Fast (1:44)
    • 46. GreenB13 Temporary Files and Directories (2:36)

About This Class

Ever wonder why some projects seem able to make huge changes and still ship promptly, while others collapse under the weight of their own code? Testing is the difference! Effective tests enable you to add features quickly, refactor your code without fear, and ship new releases without big new bugs.

Make Your Project Successful by Writing Meaningful Tests

  • Lay out your test packages and modules correctly
  • Organize your tests effectively
  • Learn the tools in the unittest module
  • Run your tests with Green, a powerful new test runner
  • Write meaningful tests that enable quick refactoring
  • Learn the difference between unit and integration tests
  • Use advanced tips and tricks to get the most out of your tests

Python Testing with Green

Tests are just a way of writing code that uses your project and makes sure that it really does what you think it does!  We'll learn the best way to write tests and some common problems and pitfalls that you should avoid.  

This course is designed as a practical reference for Python programmers who have real code that they would like to start testing, or test more effectively. I provide real runnable examples that you can type in yourself or download from the resources section.

The beginner section requires zero prior testing knowledge.  I teach the fundamentals of testing Python code, including how to run your tests the "traditional" way as well as with the high-performance Green test runner.  After completing this section, you will be able to write tests and run them.

The intermediate section teaches you how to write meaningful tests, and covers every aspect of the Python "unittest" module that you will use in daily life.  Upon completing this section, you will know more than most Python programmers about how to test your code.

In the advanced section I go over mocking, integration testing, advanced usage of the Green test runner, and some tips and tricks for getting the most out of your testing.  Only an elite few gain this level of knowledge on their own.  After completing this section, you will be a master of Python testing.

Transcripts

1. Green00d Promo90s: Welcome to the Python Testing with Green course, where your experience with Python will be enhanced by the ability to write and run meaningful tests for your code. My name is Nathan Stocks, and I will be leading you through the course. I've been a professional software developer for nearly two decades. In 2014 I founded Agile Perception, an independent software development and training company where I have fun building products with technologies like Rust, Python, C++, and Unreal Engine. I designed this course to share what I have learned about Python testing. I created a successful open source project called Green, a better test runner for Python. I believe that using great tools to run meaningful tests improves code quality and enables you to refactor quickly without fear, and I believe this enhances your quality of life. By the end of this course, you will know how to write meaningful unit and integration tests and be able to run those tests with a fast, powerful test runner that provides colorful, readable output, which will lead to better code and a higher quality of life for everyone. In the first section of this course, I will show you how to start testing. In the second section, I will teach you to write meaningful unit tests and go over the unittest and mock modules in detail. In the third section, we will cover Green and go over integration testing. The ideal student for this course is an existing Python programmer who wants to expand their skills, lead a successful Python project of their own, or clean up someone else's messy code that they inherited. There are no prerequisites for enrolling; I only ask that you come open-minded and ready to learn. Thank you for listening. Feel free to take a look at the course description, and I look forward to seeing you inside.

2. Green02 Community & Help: As you go through this course, you may find that you have questions, so I want to quickly highlight two resources you can use to get answers. First, as a member of this course, I invite you to join the free Python Learning server on Discord. Discord is a free text and voice chat application that runs on all the major operating systems and mobile devices. There is a link in the resources for this lecture that you can click on, or you can pause the video and type the link into your browser. I encourage you to join the Python Learning server now and introduce yourself; feel free to tell us a little bit about yourself and what you are hoping to learn from this course. The Green channel in Python Learning is a great place to discuss testing strategies and get help with questions about Green or Python testing. It's also a great place to mentor others with the testing skills you already have: you have skills that others don't, and sharing those skills is a great way to keep them sharp and build a positive community. Second, if you don't already visit the Python standard library reference, it's time to start: go to the python.org documentation, choose your version, and open the library reference. Every skilled Python programmer I have ever worked with regularly refers back to the reference to remind themselves how things are named, which arguments a method takes, or what functionality is available in a module they haven't used for a while. 99% of your daily Python programming is documented in the reference. There are many, many other great Python community resources out there; feel free to drop by the Discord server and share what your favorites are. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
3. Green03 Your First Test: Welcome to the beginner section of the course. This section is targeted at Python programmers who know little or nothing about testing Python code; if this is not you, feel free to skip ahead to one of the later sections. I strongly suggest that you type along with me as we go. If you feel the urge to try something out, go ahead and pause the video and experiment a bit. The important thing is not that you type exactly what I type, but that you actually go through the process of typing out your tests; this solidifies your learning much more quickly than passively watching a fire hydrant's worth of information go by. So let's get started. You should have Python 2.7, or 3.4 or later, installed already. I will use version 2.7 for this course, but I will try to give a heads-up about the small differences in Python 3 as we go along. The first thing we need to do is install a few modules that we will be using for this course. I've already typed the command in here; on Mac or Linux you'll need sudo -H, and on Windows you leave off the sudo and might need to start the command prompt as administrator (I'm not sure, because I don't use Windows much; if you have problems, let me know and I'll update this video). So I type in my password and get those installed. Now let's make a file called test_stuff.py. By the way, I've made a directory, which I named "python testing with green", just to hold the files we're going to play with; I suggest you make your own subdirectory, and its name won't make a difference. As far as editors go, I am going to use MacVim. Being a veteran of the editor wars, I really don't care which editor you use; go ahead and use your favorite, and you can even use Notepad or TextEdit if you want. But I do suggest that if you don't already have a good editor, you go check out some different options and find one that has nice language support for Python, like syntax highlighting and code completion. So we import unittest and create a class whose name starts with Test; I'll call it TestStuff, and it needs to subclass unittest.TestCase. Our test method also needs to start with test, like our file name did, and then we can name it whatever we want, so I'll be consistent and call it test_stuff. It's a method, so its first parameter has to be self. Now we add a docstring that says "stuff works" (we're being super, super specific, can you tell?), and then we just pass. We write the file and come over to the terminal. The traditional, sort of vanilla way to run these tests is `python -m unittest discover`: we run the Python interpreter, call the unittest module, and pass it the argument discover to tell it to just look for tests. Since we named our file beginning with test, since our class subclasses TestCase, and since our method starts with test, it finds our test and runs it. So it found one test and ran it. The output isn't super verbose by default; notice that a dot means pass, and this line is the summary for the whole test run: OK means all the tests passed. So that's great. Our first test works. Now we're going to use Green to run our tests from here on out, so let's try that. If we just run green, by default it does discovery. The output is actually designed to mirror the default unittest output in a lot of ways, only with a whole bunch of improvements, which you'll see over the course of this course. First off, we've got colors, and bold and not bold. If we add some v's to increase the verbosity, we can see cool things like the versions of what's installed, plus a nice hierarchy of the module, class, and methods being tested, all aligned nicely. That right there is our method, but the output shown is the docstring, which is why I highly encourage writing descriptive docstrings. If you use just two v's, two levels of verbosity, it shows the test method name instead; sometimes that's useful too. That's it for this lecture. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
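Here is a minimal sketch of the first test from this lecture. The file, class, and method names follow the transcript, and the run commands are the two shown above.

```python
# test_stuff.py -- discovery relies on the naming: the file name starts with
# "test", the class subclasses unittest.TestCase, and the method starts with "test".
import unittest


class TestStuff(unittest.TestCase):

    def test_stuff(self):
        """Stuff works."""
        pass

# Run the "traditional" way:   python -m unittest discover
# Or with Green:               green -vvv
```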
4. Green04 File Organization: Before we dig more into writing tests, we need to pause and make sure that you know where to put the test files in your project, so we're going to talk about file organization. There are three options that make sense. First, you can put your test file in the same directory as the module you're testing. Let's take a look at my project: I have a my_package package with a my_module module in it, so this first option is putting the test file right next to my_module. Let's make a file called test_my_module.py. First we import unittest, then we do `from my_package.my_module import my_function`, because we're going to test my_function first. Notice what I did here: the test file is right next to my_module, so technically we could do a relative import, but that's really bad style. If we move the file around, we have to go alter all the imports, and a lot of the time test runners run from outside that directory, so the relative import breaks anyway. It's just good style to do an absolute import, and then it's going to work. Next we create a class, TestMyFunction, which subclasses unittest.TestCase, and give it a method called test_does_not_crash: if we run my_function, we don't want it to crash, so "my_function does not crash" just calls my_function with a number. If you look at the actual function, you'll see it simply returns whatever you pass it times two, so an integer makes sense. That shouldn't crash, and this test should pass, so let's write the file. If we look at our updated project layout, there's our test module right next to the module with the code we're going to test. If we run green with three v's for the nice verbosity, "my_function does not crash" shows up as a period and it's also green; both of those indicate a pass, so we have one pass. Great. This style is fine for tiny projects, but as your project gets bigger, with more subdirectories, more modules, and more packages, putting your test file in the same directory as your module really gets messy. Your directory listings are often twice as long, and you have to dig past test files to get to your code when you're opening it. It does mean that you can find your test file really fast, but having it all mixed in there is just really messy.
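A sketch of the test module described above. It assumes, as in the transcript, that my_package/my_module.py defines a my_function that returns its argument times two; the absolute import is the point being illustrated.

```python
# test_my_module.py -- lives next to my_module.py (or in a test subpackage later).
import unittest

# Absolute import: it keeps working no matter where this test file lives
# or which directory the test runner starts from.
from my_package.my_module import my_function


class TestMyFunction(unittest.TestCase):

    def test_does_not_crash(self):
        """my_function does not crash."""
        my_function(6)
```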
Another approach, and personally my favorite, is to make a test subdirectory in every package and put all your tests there. Let's see what that looks like: we make a directory my_package/test and move test_my_module.py into it. So my_package is a package (it has an __init__.py), my_module is in it, and now the test subdirectory actually needs to be a package too. What makes a package out of a directory for Python is the presence of a double-underscore __init__.py file, so I just `touch my_package/test/__init__.py` and run tree again, and now you can see that test is a package. That makes this file discoverable; otherwise Green, and the Python discovery that's built into unittest, won't be able to find it. There are some other test runners that don't require that in order to find it, but the built-in unittest does, and Green tries to stick to the unittest model as much as possible, so that's what we're going to do. It's also just portable: your tests will then run under anything. If we run green again with -vvv, you can see it's exactly the same; it still found the code and it still worked, because our absolute import didn't break when we moved the file. Good. The advantage of this over the last option is that it's not quite as messy. If you want to put this code out in production but exclude tests, all you have to do is exclude any directory named test. The tests are still really close to the module you're testing, just in that module's test subdirectory, but it's a lot more organized, and you won't have to wade through all your test files when you're looking in your main code to open another file. I really like the balance of that organization. There will still be a test subdirectory for every package, though, and some people really don't like that. So the third option is to put the test subdirectory completely outside of the top-level package and mirror the organization of the package. Let's see what that looks like: we move our test subdirectory out of my_package into the parent directory. Now my_package is a package with a module in it, and our test package sits separately, not underneath it. This has the benefit that there's only one place for tests; it also has the drawback that there's only one place for tests. Usually, under this scheme, you mirror the structure of your package, and when you start working on big projects with a whole bunch of subdirectories, that may mean you're working in my_package/subpackage/subpackage/subpackage/module, and then over in your test subdirectory you have to navigate into the mirrored subpackage/subpackage/subpackage path to open up your test file. That's a lot of navigating. I personally find it not worth the separation, but some people do, and if you really want to keep your tests completely separate, this is the approach that will work for you. I'm going to use the second approach, the one where the test subdirectory is under the main package, so I'll move test back into my_package, clean this up, and look at it.
By the way, for those of you wondering what my clean alias is, it's just a really simple alias for clearing the terminal screen and then deleting all the .pyc files; when you're looking at directory tree structures like this, you don't want to see all the compiled bytecode files. And tree is a utility you can install on OS X using Homebrew; on every Linux I've ever tried, tree is in the native package manager. So if you want to use those, that's how you get them. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

5. Green05 Docstring: Let's look at docstrings in the context of testing. The only docstring that I am aware of that test runners pay attention to is the docstring of the individual test methods. Yes, it's nice when you're reading the code, but it's really nice when you're looking at the output, so let's see how that works. Python's built-in test runner has only very rudimentary support for docstring handling: if you pass -v to the built-in test runner, it examines the docstring up until the first newline it encounters. If it finds something before the first newline, it replaces the method name with the docstring in the test run output; if it doesn't find anything, it just uses the method name. So: `python -m unittest discover -v`. This docstring has its first newline immediately at the start of the string, so let's shorten that up and run it again, and this time, instead of the method name, we should see the docstring. There we go. Now, if you continue your sentence on the next line, that's just going to be ignored, because it only looks at the first line. Green takes a more nuanced approach: it takes the entire first paragraph of the docstring after stripping off any initial whitespace. The end of the first paragraph is the first empty line after text has started. So let's make this nice again; in fact, we can even add some extra whitespace and see that it gets stripped off. Run green, and there it is: a nice docstring. And we can show down here that if we put some supplemental information that's not part of the first paragraph, it doesn't show up either. You'll notice Green also reformats the first paragraph to fit better. That's basically it. Docstrings are really nice for output, because then you can see exactly what's passing and what's not by a description that you wrote, which hopefully has more meaning than the method name. Docstring conventions, or recommended usage, are described in PEP 257; PEP stands for Python Enhancement Proposal, which is something like an RFC for Python. If you're interested in learning more about docstrings beyond what we discussed here, go find the link in the resources for this lecture, or just Google "PEP 257" and it'll come right up. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
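A small sketch of the docstring behavior just described. The built-in runner's -v output uses only the first line of the docstring, while Green shows the whole first paragraph (up to the first blank line) and ignores the rest; the names here are illustrative.

```python
import unittest


class TestStuff(unittest.TestCase):

    def test_stuff(self):
        """Stuff works, even when the description
        wraps onto a second line of the same paragraph.

        This supplemental paragraph is not shown by Green, and everything
        after the first newline is ignored by `python -m unittest discover -v`.
        """
        pass
```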
6. Green06 assertTrue assertFalse: Welcome back. We're going to talk about assertTrue and assertFalse, two methods on the TestCase object that are among our most basic tools. Let's make a new test method, test_true_and_false, because that's what we're doing: using assertTrue and assertFalse to test things. These methods live on the TestCase object, which we're subclassing, so we call self.assertTrue and give it a value. If the value is true, the test keeps going; if it's false, assertTrue raises an exception and stops the test there. So let's give it something that's true: we call my_function with 5. my_function just returns the argument times two, so my_function(5) is going to return 10; let's check that it's equal to 10. Run the tests, and it works. The other one is assertFalse, and it does the exact same thing in reverse: as long as the value is false, the test passes, and if the value is not false, it raises an exception. So let's check whether my_function(5) is equal to 9; it shouldn't be, so the comparison is false and the test still passes. Great. There you have it. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

7. Green07 Fail: So far, all our tests have passed. Let's fix that; it's time to fail. Let's make a new method to test that my_function returns 100 if we pass in the special value zero. So we make a new method, test_special_zero, and give it a nice descriptive docstring: my_function(0) is special and returns 100. We get the value of my_function(0), and if x is not equal to 100 like we want, then we use our new method, self.fail. fail takes a message, and you should always give it a nice descriptive one, so let's say "Zero is special; my_function(0) didn't return 100". The fail method of a TestCase object causes a test to immediately fail, and it takes a message to display as an optional argument; as you can see, you should always write meaningful messages. So let's go run this with green. There we go, our first failure, and there's a whole bunch of things to notice. First, the status code: capital F is the code for failure, and the color for failures is red, which is nice and intuitive. When you have a failure, you get a traceback after all of your tests. The header of the traceback is the status once again, followed by the fully qualified path to the method that failed. If you look here: my_package is the package, test is the subpackage, test_my_module is the module, TestMyFunction is the class, and test_special_zero is the method. This is really useful when you want to specify a single test, or some subset of tests, to run instead of all of them; when you're working on failures, you'll often want to run just the single test over and over until it passes, and I'll show you how to do that in a little bit. Most of the rest of the lines are the actual traceback, or in other words the path of execution through the Python scripts up to where the failure was raised, and the last line has the actual message that we wrote. Then, at the bottom, you'll notice that the overall status for the test run has changed from passed to failed, and we have a count of how many failures and passes we've had. Let me show you what it looks like when you run just a single test: we still run green -vvv, but we pass in the fully qualified name as the target. You'll see the other two tests didn't run, just the one failure, so we can focus on it. Let's fix it. If we go over to my_module, we know that we want my_function to return 100 when x is zero. First, let's do it wrong, but in a way that passes this test: we'll always return 100. If we run the single test again, we see it passes. Then we run all the tests again and see that, oh, we broke one of our other tests. This is one of the great things about having unit tests: they give you the ability to refactor things safely and know exactly what you break so you can go fix it, which means you can refactor quickly and safely. So let's go fix it: if x is equal to zero, return 100; otherwise return x doubled. Run all the tests again, and there you have it. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
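A sketch combining the tools from the last two lectures (assertTrue, assertFalse, and fail). It assumes, as above, that my_function(x) returns x * 2, with 0 as a special case that returns 100.

```python
import unittest

from my_package.my_module import my_function


class TestMyFunction(unittest.TestCase):

    def test_true_and_false(self):
        """Use assertTrue and assertFalse to test things."""
        self.assertTrue(my_function(5) == 10)
        self.assertFalse(my_function(5) == 9)

    def test_special_zero(self):
        """my_function(0) is special and returns 100."""
        x = my_function(0)
        if x != 100:
            self.fail("Zero is special; my_function(0) didn't return 100")
```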
8. Green08 Errors: Tests can fail, and tests can also have errors. Failures and errors are very similar; the difference between them is where the problem occurs. Failures occur inside the test case's fail method or assert methods. If there is an unhandled traceback anywhere else in your test method, or in the code it's testing, then that is considered an error. In other words, if your test is checking a final value and finds a problem, that's a failure; if there's a crash anywhere else, that's an error. I don't know why they chose to differentiate between the two, but they did, and the output looks really similar. Let's cause an error so we can look at the output: in the special case that the input is zero, let's raise an exception. Now when we run green, you can see that this time, instead of an F for failure, we got an E for error. Other than that, they're very similar: same color, same kind of traceback listing. You'll notice that in the case of an error, the traceback goes to the place the error was actually raised; here my_module.py line 3 raised an exception, and Green tells us about it, lists it as an error, and counts it separately from a failure. If you have any errors or any failures, the overall test run fails, so that part is the same. Other than that, that's the difference between an error and a failure. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
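To make the failure-versus-error distinction concrete, here is a contrived sketch: the first test fails inside an assert method (status F), while the second hits an unhandled exception outside of any assert (status E).

```python
import unittest


class TestFailureVersusError(unittest.TestCase):

    def test_reports_a_failure(self):
        """An assert method that finds a problem produces a Failure (F)."""
        self.assertTrue(1 + 1 == 3)

    def test_reports_an_error(self):
        """An unhandled exception anywhere else produces an Error (E)."""
        raise ValueError("something crashed outside of an assert method")
```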
9. Green09 Skipped: The skipped status shows up for tests that have been skipped, and I'll show you a few different ways that you can skip a test. First, let's make a new TestCase subclass to start testing MyClass. What are we going to test? Four things: that the class variable counter starts at zero, that the id is assigned from the counter, that the counter is incremented, and that the get_id method actually returns the id we expect. So over in test_my_module we add a new class, TestMyClass, which needs to subclass TestCase. The first method tests that the counter starts at zero, and for now we just pass; we're not going to implement these yet. This is actually a good time to skip tests: these tests haven't been implemented, and if we don't skip them, they'll show up as passes, and the next person who comes along (even if it's us in a little while) may look at them and get a false sense of security, thinking the code is being properly tested when it isn't. You can skip an entire test case by using the unittest module's skip object as a decorator. It looks like this: decorator syntax starts with an @, then unittest.skip, and it takes an optional message. You should definitely give it a good message, so we'll say "Not implemented yet", so we know why these are being skipped. Save that, come over here, and see what it looks like. Skipped tests have a status code of lowercase s and are colored blue by default. Skipped tests also get special lines after all of the other tests have run, right next to the listing of failed and errored tests. The reason for this is that test runs of real projects can easily be 10,000 lines long or more, long enough that scrolling back up to review all of the output can be really difficult, but the number of items that need action on any given test run tends to be really small. Usually your tests are passing (the whole reason you're writing them is to get them into a passing state), so there are usually only a couple of errors or maybe a few skips. So the skipped tests are listed again at the end, near the errors and failures, so they can be seen and fixed at some point; you don't usually want to just leave them skipped forever. Now, if you don't want to see the list of skipped tests at the end for whatever reason, you can pass green -k or --no-skip-report, and then it doesn't give you that list of skips at the end; that's for when the skips are just going to stay there and you want to ignore them. By the way, the fact that real projects tend to have a very large number of tests is the reason the default test run verbosity is so low: instead of outputting the module and class hierarchy along with the method names and docstrings, the default output lists just the single-letter code for each test. It looks like this (we'll leave off the skip report just to keep it short): four skips, three passes. Let me show you some other ways that you can skip tests. You may not want to skip all of the tests in a test case; maybe there are just some of them you want to skip. In our case we do want to skip all of them, but we'll skip them individually: the same decorator works on a method, and you just put it right before the method you want to skip. Save, run, and we get the exact same output. Notice, by the way, that the skip message gets shown next to the method name (or the docstring) and also down in the list at the end. These two methods are both really nice and convenient as long as you want the test skipped all the time and you just want to hard-code it. But what if you only want the test skipped under certain conditions, for example only on Windows, or only on Linux, or only if your hard drive is full, or whatever the case may be? You can't use those decorators; they're hard-coded. The third way is to use the test case's own method, called skipTest, and once again it takes an optional message, so let's say "Not implemented yet, but skipping from inside". Here is where you would put your if statement and your condition; that's the benefit of this approach. Clean and run it, and you see once again exactly the same behavior. So there you have it: three different ways to skip tests, and we've discussed why you might want to skip tests. A final warning: don't leave tests skipped. There's no good reason to have a test be skipped forever, unconditionally. Use it as a temporary measure and then come back and fix it; make it something that actually runs and passes or fails depending on what the code does. Make it valuable. Otherwise, you're just wasting space, because a skipped test doesn't do anything except remind you to change it. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
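The three skipping techniques described above, sketched in one place. unittest.skip on a class or method and TestCase.skipTest are exactly as discussed; the platform check is just an illustrative condition.

```python
import sys
import unittest


@unittest.skip("Not implemented yet")          # 1. Skip every test in the class.
class TestMyClass(unittest.TestCase):

    def test_counter_starts_at_zero(self):
        pass


class TestSkipStyles(unittest.TestCase):

    @unittest.skip("Not implemented yet")      # 2. Skip a single method.
    def test_id_assigned_from_counter(self):
        pass

    def test_windows_only_behavior(self):
        """3. Decide at run time with skipTest()."""
        if not sys.platform.startswith("win"):
            self.skipTest("Only meaningful on Windows")
        # ... the real assertions would go here ...
```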
10. Green10 expectedFailure unexpectedSuccess: If you have the unusual condition that a test sometimes fails and sometimes passes, and you're okay with that, then you can mark that test as an expected failure. I have not seen a good reason to do this in the real world, but the functionality is there, and I want you to understand what is going on if you see it somewhere. I added a method to MyClass called unstable_method; at the moment it raises an exception. In the test module I'm importing MyClass so we can use it, so let's go down to the bottom and add a test method to test unstable_method. If we run this, we'll see that it crashes, because unstable_method raises an exception. So what we're going to do is decorate it with unittest.expectedFailure and run it again. Now we get the code x for expected failure. This is actually yellow (it looks orange on my screen because I have yellow set to orange in my terminal; it will probably look different on yours), and it doesn't count as a failure as far as the test run goes: the overall test run is OK, because there are no failures or errors. If that's all expectedFailure did, it would probably be a little more useful, but watch what happens when we go back to unstable_method and change it to not fail. Now it's just going to pass, which means this test is going to succeed. We run again and get u for unexpected success, because we expected it to fail, so the success is unexpected, but it's still counted as OK for the test run's sake. So, in effect, it always runs the test and reports what happened, but ignores the result as far as passing and failing go. That could be useful if you have some legitimate reason for trying to run something while ignoring the result, but I can't think of a real-world example where that would be better than actually testing for specific failures and successes. So there you have it: that's how it works, and now you know what's going on if you see it. I would encourage you to avoid it. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

11. Green11 Fundamentals Review: Let's review everything we have learned in the first section by running through a more realistic testing scenario; I'm going to try to use only what we have learned so far in this course. Let's write tests for a game module: this will be the weapons module for the next blockbuster ninja game written in Python. First we make the project directories: make the ninja directory, make the ninja/test directory, touch ninja's dunder __init__.py, and touch the same thing in the test subdirectory. Then we're going to start writing our tests. Yes, we're going to write our tests first. It's a weapons module, so the test module is called test_weapons. We import unittest and create a subclass; we'll be making a weapon dispenser, so that's what we'll test, and it needs to be a subclass of unittest.TestCase. Let's make a test method, test_creation: a weapon dispenser can be created. We've written our first test, so let's go run our first test. It fails, not surprisingly: global name WeaponDispenser is not defined. It's absolutely right; we haven't tried to import it, so let's add `from ninja.weapons import WeaponDispenser`. This isn't going to work either, because we haven't made that module; it's not even going to get that far, since it will crash while loading this file, so we can't get the file loaded to even run the tests. So let's make the file ninja/weapons.py and create class WeaponDispenser; for now we just leave it empty. Come back here, run our tests, and it works: it can be created. So what do we want the weapon dispenser to do?
This is one of the nice things about writing your tests first: they help you reason about how you want your object to behave. You have to think a little bit more about it, rather than creating things stream-of-consciousness, without thought; writing the test first forces you to think about the design, because you're testing it already, not just writing it. So let's say we want it to start with some number of weapons, and let's test that it does: test_num_weapons, "a weapon dispenser should start with the correct number of weapons". We create a WeaponDispenser. Now, where does it store the weapons? It has to store them somewhere. Let's say we're going to use a list, but treat it as a stack: it will dispense weapons off of this list by popping them off the top. So it's got a stack we'll call weapons, and we want to use assertTrue, since we haven't used that yet: self.assertTrue that the length of the weapons stack is three, since it needs to start with three weapons, and if it's not equal to three, we give it a message: "There are not three weapons in the dispenser". That's a problem. Save that and run it. We don't have any weapons at all yet; we don't even have the weapons stack. So we make a dunder __init__ so that when this object is created, we get a weapons stack. Stop right there and run the test again, and see that the error changed: now it found the weapons, there just aren't three weapons in the dispenser. So let's give it three weapons: a katana, a shuriken (that's how you spell it), and poison. Three weapons; run the tests again, and there we go. What else do we want the weapon dispenser to do? Well, if it's a dispenser, it probably ought to dispense weapons, so let's test dispensing a weapon. We've already decided that it's going to treat the list as a stack, so that's what we'll test. We get a WeaponDispenser and then save the value at the top of the stack; when a list is used as a stack, the back of the list, the last element, is the top of the stack. Then we'll check it, and this is a great place to use self.fail: if whatever gets dispensed is not equal to the top of the stack, then self.fail("You got the wrong weapon"). We give it a try, and there is no dispense method, so let's go make dispense. We already know that we want to return self.weapons.pop(), which pops the top off the stack and returns it. Now it's working. So we've used assertTrue and self.fail; let's use an assertFalse. Let's test what happens with an empty dispenser: after all of the weapons have been dispensed, what happens? Let's say that an empty dispenser dispenses None; we get to decide how it works, and we decide it still returns something, it's just going to be the None object. So we get a dispenser and then dispense all the weapons: for i in range of the length of the weapons, because we don't care how many there are. If we change the number of weapons, we still want this test to work, independent of how many there are, because that's not what we're testing. We just dispense them all and throw them on the floor without catching them in anything. Then we assertFalse on one more dispense, because None is falsey, and if that doesn't work, we give it an error message saying "An empty dispenser ought to dispense None".
Let's try it. First we get an error, because it tried to pop from the empty list. We can't have that, so we say: if self.weapons (an empty weapons list, or weapons stack, is falsey), then pop; otherwise it implicitly returns None. Let's make that an explicit return None, which is the same thing that happens if you just fall off the end of a method. Try it, and there it goes. So we've covered just about everything. We still haven't skipped anything, so let's say that we have some future functionality that we're waiting on a designer for: we add unittest.skip("Need designer input about future stuff") on a test_future_stuff method. Maybe we know a little bit about it, but not enough to write a meaningful test, so we just leave it like that. A skip is basically a to-do: whenever you see a skip, it should bug you, and you should say, okay, I should come back and fix that. The exception is something like a platform-specific case where the functionality genuinely doesn't make sense, and even then it may be good to add a to-do to make it explicit. What haven't we done? We haven't tested something that is expected to fail. So what would we expect to fail? How about an Easter egg? It's a secret, nobody knows it's there, it's probably not very well maintained; maybe it'll work, maybe it won't, and we don't really care. So we make a WeaponDispenser and call easter_egg. Now, because it's an expected failure, we could be writing our test completely wrong here and we'd never know, because it expects it to fail anyway. Just for fun, we can go over to the weapons module, add easter_egg, and raise NotImplementedError, and we can't even tell the difference: this time it actually found the method, called it, and raised a completely different traceback. And even if it did work, the only change is that now we can see that it's working. So there we go: we've got our unexpected success, we've got our expected failure, and we've gone through all the things that we learned in this section. So let's move on to a challenge. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

12. GreenA01 Intermediate Section Overview: Welcome to the intermediate section. At this point, I am going to assume that you are an experienced Python programmer and know at least the basics of testing. If you skipped to this section and are not sure whether you are ready for it, then I encourage you to go watch the Fundamentals Review lecture; it covers the entire last section in summary. In this section I focus on how to be effective in your testing. There are lectures on basic usage of Green, effective testing practices, and just about every method, class, or module that you would ever use in your testing. The lectures in this section are all designed to be completely independent from each other; I arranged them in an order that makes sense to me, but feel free to watch them in any order that you like. For the sake of short, to-the-point lectures in this section, I will often be putting the code I want to test right at the top of the test file itself. Don't do that in real life; if you need guidance, see the lecture on test file layout in the last section. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
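A condensed sketch of the fundamentals-review example above, shown as the two files described in that lecture but pasted into one block (the test's import line is left as a comment so the block also runs as a single file). Weapon names, messages, and decorators follow the transcript; the rest is a plausible reconstruction.

```python
# ninja/weapons.py
class WeaponDispenser(object):

    def __init__(self):
        # A list treated as a stack: the last element is the top.
        self.weapons = ["katana", "shuriken", "poison"]

    def dispense(self):
        if self.weapons:
            return self.weapons.pop()
        return None  # an empty dispenser dispenses None


# ninja/test/test_weapons.py
import unittest
# In the real layout:  from ninja.weapons import WeaponDispenser


class TestWeaponDispenser(unittest.TestCase):

    def test_creation(self):
        """A weapon dispenser can be created."""
        WeaponDispenser()

    def test_num_weapons(self):
        """A weapon dispenser starts with the correct number of weapons."""
        dispenser = WeaponDispenser()
        self.assertTrue(len(dispenser.weapons) == 3,
                        "There are not three weapons in the dispenser")

    def test_dispense(self):
        """Dispensing returns the weapon on top of the stack."""
        dispenser = WeaponDispenser()
        top_of_stack = dispenser.weapons[-1]
        if dispenser.dispense() != top_of_stack:
            self.fail("You got the wrong weapon")

    def test_empty_dispenser(self):
        """An empty dispenser dispenses None."""
        dispenser = WeaponDispenser()
        for _ in range(len(dispenser.weapons)):
            dispenser.dispense()
        self.assertFalse(dispenser.dispense(),
                         "An empty dispenser ought to dispense None")

    @unittest.skip("Need designer input about future stuff")
    def test_future_stuff(self):
        pass

    @unittest.expectedFailure
    def test_easter_egg(self):
        """The Easter egg may or may not work; we don't really care."""
        WeaponDispenser().easter_egg()
```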
13. GreenA02 Green Target Specification: By default, Green runs every test that it can discover from the directory you run it in, but sometimes you don't want to run all the tests. That's why Green accepts targets to test as optional arguments: if any targets are specified, then Green composes a test run of only the specified targets. There are two ways that you can specify a target. The first way is a path to a directory or file to test; Green will include all tests from inside the directory or file specified. I have made an army package with equipment and soldier modules and corresponding test modules. If I only want to run the tests on the soldier, I can run clean to clean up the screen, then green with my verbosity, and give it just the test_soldier.py file: army/test/test_soldier.py. You'll see that only the tests from test_soldier.py are run. If we specify the army directory, then all the tests in the whole package are found and run, just as if we ran green by itself; let's see that by cleaning the screen and specifying the army directory. The second way to specify a target is to use what I call the fully qualified dotted name. If you were to create a Python module in the current directory and import an object from the army package into it, the dotted path of the object that you are importing is exactly the same as this second way of specifying your target to Green. It looks like this: we clean again and run green with the verbosity. First, you start with the package or module relative to your current directory, so from here I need army; then use a dot and specify the object inside of that. We want to run tests, which are all inside of the test subpackage, so we add test. Next, let's specify just the test_equipment module, because that is the module with the failing test that we want to look at. If I run this now, you will see that it runs the entire test_equipment module but doesn't run the test_soldier module. The nice thing about the dotted method is that you can go further than this: if we continue on, we can specify just a single class to run, for example TestRifle, and that runs just that one class from the test_equipment module. We can go even further and specify just the method that we want to run. You'll notice that I ran with just two v's so I could see the method names instead of the docstrings. So let's look at just our failing method, test_clean. Notice that in the failure traceback, Green provides the fully qualified dotted name of the test method that fails, so one easy way to isolate the test you want to work on is to copy it from the failure and paste it as your target. So let's go fix our test. I'll open test_equipment in a new tab and see what's failing: in test_clean, which is right here, an army rifle should start clean, and asserting that our rifle's clean member is True is failing; we're getting False instead of True. So let's open the equipment module in another tab and take a look. Yep, we definitely have a bug: the rifle is not starting clean. Let's fix it, save it, and run again. Now our test is passing, and we can back off our target if we want: run the entire test_equipment module, and we're great; run everything inside the test subpackage, and all our tests are passing. Note that army.test contains all of the tests in army, so the output for the targets army.test and army is going to be the same. One last note: you can use a period as the target, which means the current directory. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
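A comment-only illustration of the two target styles just described, assuming the army package layout from the lecture; the flags mirror the ones used above.

```python
# The dotted target you hand to Green is just the import path of the object,
# relative to the directory you run from.  If, from this directory, you could write
#
#     from army.test.test_equipment import TestRifle
#
# then "army.test.test_equipment.TestRifle" is a valid Green target.
#
# Narrowing targets, from broadest to narrowest:
#   green -vvv army                                   # a directory or package
#   green -vvv army/test/test_soldier.py              # one file, by path
#   green -vvv army.test.test_equipment               # one module, by dotted name
#   green -vvv army.test.test_equipment.TestRifle     # one class
#   green -vv  army.test.test_equipment.TestRifle.test_clean   # one method
#   green -vvv .                                      # "." means the current directory
```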
14. GreenA03 Coverage: If Ned Batchelder's excellent coverage module is installed, then Green will automatically detect it and support several coverage options. Code coverage measures the degree to which the source code of a program is executed when a test suite runs. Coverage only measures that your tests run your code; it can't tell whether your tests are meaningful or not. How to write meaningful tests is discussed in a separate lecture; however, even meaningless tests are better than no tests running your code. In this lecture, we will focus on the mechanics of measuring and increasing code coverage. An easy way to check whether Green has access to coverage is `green --version`: Green will output the version of coverage right here if it has access to the module. If your coverage module is not installed, please pause the video now and use pip to install it. The Green help output has a section for coverage options; I'm going to use the long names in my explanations, but you're welcome to use the short versions if you'd like. The run-coverage option enables coverage during your test run, so let's take a look. On the left of the coverage output is a list of files whose code was run. Statements is the total number of code statements in each file. Miss is the number of statements in the file that were not run. Cover is the coverage percentage, or in other words, the percentage of statements in the file that were run. Missing is a list of line numbers whose statements were not run. If there's more than one file in your coverage output, then the report will conclude with a total line that shows total statements, total statements missed, and the overall coverage percentage. There is also a .coverage file that has been left on disk for you to use with coverage to generate different kinds of reports or to submit to other tools. If you only want the .coverage file and you don't want to see the coverage report, then you can run green with the quiet-coverage option: the report doesn't show up, but the .coverage file is still generated in case you need to use it with other tools. Green attempts to show only coverage of your actual source files by default; you'll notice that the test file itself isn't listed. Green does a pretty good job of determining which code you want coverage measured for. Let's say helper.py is a third-party module that we don't intend to test ourselves; in that case, you probably don't want to include it in the output, so let's use omit patterns to exclude it. I'm going to use apostrophes to prevent the asterisks from being expanded by the shell. The omit-patterns option takes a comma-separated list of patterns to omit, and the patterns match the file name. The name we want to omit is helper.py, so we could say 'help*' to omit it. If we added '*.py', for example, there would be nothing left to show. Another way to omit everything except files matching a certain pattern is the include-patterns option: it takes a comma-separated list of patterns, and only file names matching those patterns will be included in the report. If we only want to see example.py, we could say 'ex*', and only example.py is shown in the list.
In the rare instance that Green omits files that you actually do want to see, use the clear-omit flag to empty the omit patterns that Green starts with by default; then coverage of every single Python file that is actually imported into your code is listed. As you can probably see, that isn't usually what you want. Some people view 100% coverage as the end-all test measurement; I personally view 100% coverage as a minimum bar. It is really not hard to get 100% test coverage, so let's get there with our example code. You will find that increasing code coverage is, in general, mostly a matter of repeating tests with slightly different conditions in order to get different branches of execution to run. Let's run our coverage report. The first statement we're missing starts on line 8. Line 8 is the body of the else branch; for that to run, the variable something needs to be falsey. We already have a test that checks what happens when something is truthy, so let's copy and paste it and make it falsey: an empty list is falsey. Now the line is covered, but the test isn't passing. Notice that coverage does not care whether or not tests pass; in fact, it can't tell whether the tests are passing or not. All it cares about is whether or not something was executed: line 8 was executed, and so it's no longer missing. We do care whether the tests pass, so let's go fix our test. Great: passing tests and better coverage. Let's take a look at line 12. One of the benefits of reaching 100% coverage is that someone is forced to look at all of the code at least once along the way. Line 12 is the body of a function that never ended up being implemented or used; if you have to add tests to an existing code base, you will run into situations like this all the time. We have four choices: one, delete the code, if it is really unused and unneeded. Two, comment out the code, if we aren't sure whether it is unused or unneeded; that makes it so we can get it back really quickly. Three, assume the code is used, and test the code as it is. Four, assume the code needs to be finished, write tests for how the code should behave, and finish the code. In my case, I am certain that this is unused and unneeded, so I am just going to delete it. Now when we run our coverage report, we see that the number of total and missing statements has gone down, the coverage percentage has gone up, and line 12 is no longer missing. Now let's look at lines 13 and 14. This try/except block is outside of any function or class, which means that it is executed exactly once, when the module is first imported, and never run again. That means that no amount of additional tests can cover the except block. This is one reason why you should avoid having any code at the module level; in general, if you don't have a good reason for putting code at the module level, refactor it into classes or standalone functions. Imports that deal with different versions of available libraries are one of the few things that really do make sense to do just once, when the module is imported, and this is one of the few cases where I recommend using "pragma: no cover". It looks like this: it's a comment, and you put pragma, colon, space, no cover. This causes the entire except block to be ignored for the purposes of coverage. If we run our coverage report now, we will notice that the total number of statements has decreased, since the except clause is not being counted at all, and our coverage has also increased.
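The lecture's example code isn't reproduced here, so this is a representative sketch of the same pattern: a module-level import fallback with the except branch excluded from coverage (the StringIO import is my assumption, not the lecture's actual code).

```python
# Module-level code runs exactly once, at import time, so no amount of extra
# tests can cover the branch that doesn't apply to the current Python version.
try:
    from StringIO import StringIO           # Python 2
except ImportError:  # pragma: no cover
    from io import StringIO                 # Python 3
```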
Our last bit of code is the `if __name__ == "__main__"` block. This is the standard convention in Python for kicking off code when a script is called directly rather than imported and used as a library. Since our test runner imports and uses our code as a library, there's no sane way to test this if statement at global scope. I have three suggestions for `if __name__ == "__main__"` blocks. One: don't put any real code in the block; call another function that does the real work. Two: use "pragma: no cover" on the block, since it doesn't have any real code anymore. Three: test the real code. So, to follow my own advice, let's come in here and first move the real code to a function we'll call main, and then call the function. Second, we add "pragma: no cover", since the block doesn't have any real code anymore and there's no way for us to cover it anyway. Third, we need to actually test our real code, so over in the test file I import main so that I can call it, and make a new class. This isn't a particularly meaningful test, but if we go run it, we'll see that we're now at 100% test coverage. If you follow the recommendations in this lecture, reaching 100% code coverage should be a straightforward task of moving code into testable places and writing tests. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
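A minimal sketch of the `if __name__ == "__main__"` recommendations above; the module name example.py comes from the lecture, while the body of main is assumed.

```python
# example.py
def main():
    # The real work lives in a function that a test can import and call.
    print("doing the real work")


if __name__ == "__main__":  # pragma: no cover
    main()
```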
Back to our redesign: you could even have move up call our other move function. And then, instead of get_x and get_y, let's just do coordinates — we'll return a tuple of (x, y). All right. So we learned through our test that we had a lot of extra junk just to test our object that we really didn't need, and so we decided to come in and redesign it. Testing as a design tool: how do I want to use this? Why do I have to do all these extra steps? It happens all too often in real life — someone thinks "I'm going to need this ability" or "I'm going to need that ability," so they over-engineer the problem. They build all this extra junk that's not really needed, and when you go to actually use the code, you realize it's way too complicated. Testing is a great way to force yourself to use your code. It's a great time to make that realization and go redesign your code before it's too late. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

17. GreenA04b Readable: Test code can also serve as implicit documentation of your code's design, or at least a good example of how to use it. But that means the test code has to be readable. If it isn't, then when your code changes you aren't going to have any fun at all trying to figure out how to change your tests accordingly. Let's take a look at some test code that isn't very readable. First off, we have a class name that's simply a single letter, T — probably for "test" — which doesn't really tell us much about what it's testing. It should at least have a short, descriptive name that gives us an indication of the unit we're testing. Next, we have the method name, also short and undescriptive. It should give us some idea of what the test is about, and the docstring should really spell it out: a full sentence explaining what this test is testing. Next, we have some unintelligible one-letter variable names — a, b, and c — not very helpful, and a 1, 0, 0 — not sure what those are for. Then, apparently, the author suddenly ran out of lines and his editor decided to put everything else on one big line. First off, he's using assertTrue even though he's testing for equality, and he didn't supply a custom message, so assertTrue is just going to obscure the values that are actually being tested. If he'd used assertEqual instead, at least it would have said the two sides were this and this and they're not equal; here it will just say "it's not true." We've got a really long, fully qualified, dotted name to this item function, which just makes things harder to read. And then we have some magic values — 1, 15, "feather", True. If we're not intimately acquainted with this item function, those can be pretty mysterious. And then we're taking the .color of whatever's returned and comparing it to (a, b, c). Here we can make some educated guesses: okay, it's a tuple of three values, maybe it's a color, maybe it's a coordinate, something like that — and that's all we have to go off of. Not very readable. If you came in and glanced at this, you would have really no clue; you'd have to go through it line by line and mentally parse it to figure out what in the world it's for. So let's take a look at the same theoretical code being tested in a more readable way. First off, the import now brings the item function into scope, so we don't have to include that really long dotted path down in our test.
We also said, okay, these are colors, and we're actually going to reuse them between tests — imagine there's a whole bunch of other tests here — and there's our (1, 0, 0): it's RED. Now our class has a descriptive name: test item creator function. Okay, so the unit we're testing is an item-creator function; that must be what item is. Then over here, our method name is now test red feather — great, so this test is about a red feather: we can create a red feather. That's what the test is. Then created item equals item(...), so we can tell that what's going to be returned is the item that it creates. You can see that 15 is some arbitrary variant value and that "feather" says it should be a red feather item, which makes a lot more sense in the context of what we're doing. Apparently we want to spawn it, whatever that means, and then we're checking to make sure that the returned item's .color is RED. Okay — this is very readable. You can glance at this and get an idea of what it's doing, and you can see really easily how it's doing it. You don't always have to list out all of your individual arguments by name, but when you're not in context, or if it's the very first one, that might be worth doing. Or you might be really familiar with your code and maybe that's not needed. Or you could pull these values out into variables named after what they mean. In fact, a common pattern — once you've seen the lecture about fixtures — is to use a fixture to set up the common items you're going to reuse over and over again, and then change just the ones you want to be different for a particular test. We'll talk about that more in the test fixture lecture. And so there you have it: readable versus not readable. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

18. GreenA04c Meaningful Tests Test One Outcome: Meaningful tests test one outcome. What do we mean by one outcome? We choose what we're testing for, and we test for that one outcome. It doesn't necessarily mean just one assert — although often it will be just one assert — but only one outcome. Let's go through a bad example real quick. Right here we have our test class and one method that "tests all the things." We start by making a widget and we check its starting state. We do have some nice documentation here, and the tests are pretty decent, but they're all in one method. Then we sprocket a widget and it gains mass — notice we reuse the same widget as we did up there — then we check its mass and see that it's been sprocketed. Then we check that if you sprocket too much, it springs; we reuse the same widget for a third check, checking that it's sprung. Okay. And then we say, well, what if you drop a new widget? We make a new widget now, because we want to check something on a fresh widget, and we drop it and see that it splatted. Then we make another new widget, but it's marked as used — that's a little confusing — and we drop it and see if it splits instead of splats. Ridiculous terminology, I know, but it gets the point across. We're testing all sorts of outcomes in one method. But if this first one fails, we have no idea about the rest of them — they won't run at all. We don't know whether they would still be failing or passing. We also have a lot of dependency, ordering, and reuse here.
If I took out one of these sections, it might break checks below that assumed something was in a certain state; or if I moved this section up to the top, that could cause all sorts of chaos. So your code becomes harder to read, longer, more complicated, and brittle — you can't change it without reading through the entire thing. Not to mention that if you want to run just, say, this one test, you can't: you have to run the whole method. So that's the bad example. Now let's look at a good example of testing one outcome. You can see it's a little bit longer, but all of those comments in the code now become the docstrings. So instead of a pretty meaningless docstring — "tests all sorts of stuff" — and a pretty meaningless name, we have meaningful method names and meaningful docstrings for each of them. Notice I actually changed this first test a little bit: what I'm really trying to do is test all the starting values, so I added in two asserts. I'm also cheating just a little to make this shorter, and I'm using fixtures — if you haven't learned about fixtures yet, you will soon. Basically, it runs this setUp before each test, setting up an entire new widget object: it runs setUp and then runs the one method that it's testing this time. So each test gets a brand-new self.w widget. Whether or not that's a win in this particular case I'm not sure, because now I have to add self. in front of w everywhere, which makes it a little harder to read. But we're checking that the state is unsprocketed and the mass is five, because the one thing we're testing is that the starting values are correct. So we can have multiple asserts, as long as we're focusing on testing one thing — that's the principle we're after. In the rest of them all the code is the same, but each is testing one thing. Just for fun, I actually implemented the widget in my library, so we can go run this and see both versions of the test: there's the bad example, where everything hides behind one vague name, and then the good one, where everything is spelled out and tells you exactly why, right there in the test output as well. And if I only want to run one of these, I can: I just say I only want to run test_one_outcome.GoodExample.test_starting_values, and I can do that. So it's good design, it makes your tests more meaningful — do it. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

19. GreenA04d Meaningful Tests Test Boundaries: Meaningful tests test boundaries. If you're human, you have a problem with boundaries. I don't know why, but we all run into boundary issues. If you have code that deals with boundaries, remember these five things to check: before, beginning, middle, end, and beyond. Let's go through our example. I have a little Stuff class here that just stores a list of characters and has a method to index the characters and return them as upper case. If it's a valid positive index, it returns the upper-case character; otherwise it raises NotImplementedError. You can probably already spot one error — that index-not-equal-to-zero check; I just had to add one more error. Can you see the other two boundary errors? So we're going to go through and test an index that's before the range, at the beginning of the range, in the middle, at the end, and beyond. Let's actually go run this. Okay: two errors, one failure. Let's look at the one closest to us.
First: test beyond range — "list index is out of range." Well, yeah, that's what we expected, but why didn't it raise NotImplementedError like we designed? We passed in 5 — 0, 1, 2, 3, 4, 5 — yes, five is beyond the end, but our check should have returned False and then we should have gotten our NotImplementedError. So let's see what's wrong: "index 5 smaller than or equal to the length of stuff" — well, the length is 5, and there's our problem: five smaller than or equal to five is true. So there's an off-by-one error. Okay, fix that: now this condition is False, we actually raise the NotImplementedError like we want, and that should be one less error. Okay, down to one error and one failure. The closest one to us: test the beginning of the range. We pass in zero and we get NotImplementedError. Well, that was the gratuitous one I threw in — sometimes people do silly things where the ends of the range are just plain wrong and check for completely the wrong thing. Not as realistic, but it still happens, so let's fix that one. Okay, so now the beginning works, the middle works, the end works, the beyond works — but not the before. So what's going on here? We said "index before range raises NotImplementedError," so we pass in negative one. That's clearly before, right? Well, look at our check: if the index is smaller than the length, we return. We don't get a crash, so what do we get? Python has a nice feature where, if you pass in a negative number, it starts counting backwards from the end of the list, so negative one will actually say, oh, you want the last item in the list — that's this 'e' — and it returns 'E', upper-cased. So I'm getting an upper-case E instead of a NotImplementedError. If I really don't want to allow negative indexes, I have to check for them explicitly: if the index is smaller than the length AND the index is greater than or equal to zero. And there you have it. So those are our problem spots. The problems tend to occur mostly at the edges of the boundaries — that's what the before, beginning, end, and beyond tests are for. It's always good to test the middle too; that can be thought of as a large boundary, sort of the "everything else." So remember your five boundary checks: before, beginning, middle, end, and beyond. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

20. GreenA04e Meaningful Tests Test Results: Meaningful tests test results, not implementation. Well-designed code presents a usable interface to help minimize how much you have to understand about its internals. When you are writing tests, you will often know your code extremely well — inside and out — and be tempted to test how it's implemented instead of testing the results, or outcome, of your code. Let me illustrate. We have this Incrementer class here. This example illustrates that you can have a well-designed interface — you create an object, you call increment, you call amount to find out how much has been incremented — while simultaneously having a horrible implementation. The way we're going to accumulate integers is to keep an empty list and append None objects to it, and then, when we want our amount, we count the items in the list. Besides being really weird, that's really low-performance. But the interface is okay — so let's look at our tests. Here, if I had just written this code, I'd be really tempted to say: okay, I'm going to check that list and make sure it starts out empty first.
That's sort of like my zero check. Then I'm going to call increment and check that the list has a None in it, because I know that's how it's implemented. But then we're just testing that it was implemented the way we implemented it, when really, for testing purposes, we don't care how it was implemented. Sometimes we have to deal with the implementation — especially when we're mocking — just to get our tests done, but that is not the focus of our tests. The focus of our tests is: is the code doing what we want it to? Are we getting the results that we want? So let's test the results. Down here we have a better example, where we actually test the results. You'll notice I don't even assume we start at zero anymore; instead, I just get the amount once and say that's my starting amount. I do want the incrementer to go up by one, so I call increment and then make sure that we've now got the starting amount plus one. Great — that's what we want to do. Let's run those tests, and you can see they both pass; they're both essentially testing the same thing. So what's wrong with just testing the implementation? Let me show you. You've got a well-designed piece of code; someone's going to come along and change the implementation out from under you. Let's say that instead of a list we're going to store amount, which is just an integer — a lot more sane — and let's not even start at zero this time, because maybe our requirements changed a little. Then, to implement increment, all we have to do is say amount plus-equals one, and instead of returning the length of a list, we return self.amount. Now what happens when we run our tests? Hmm. The implementation test is now just trash. We have to come back to it, read it, and figure out what in the world we were trying to test — it's not even immediately obvious that we're testing the increment function. Okay, we were testing that it has this list in it — well, it doesn't have a list anymore. So you're going to come in and basically have to throw that test away and try to remember what in the world you were testing. Whereas since the other test checked results, we don't care: if you test for results, you can go and refactor your code — optimization-wise, or at least implementation-wise — and as long as you don't touch the interface, your tests are still great. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

21. GreenA04f Meaningful Tests Isolate the Unit: Meaningful unit tests isolate the unit they are testing. This is mostly done through the use of the mock library, which will not be covered in detail in this lecture, but I will show you what I'm talking about. If you look at this code, I have several functions — stringify, simplify, whitespace-be-gone, and indexify. It's not super important what they do. The point is that we want to test just this indexify function in this test method. What we're going to test specifically is that indexify leaves whitespace alone if you pass it a low enough index. The principle is that we want to test just the indexify function without calling whitespace-be-gone or simplify or stringify. So that's the principle: test just the unit you're testing — in this case, the unit is the indexify function, not all these other functions. Now, indexify calls stringify and simplify, and sometimes — in this condition — it will also call whitespace-be-gone. So how do you do that? If you've already used mocks, then you know how.
If you haven't, wait until the mock lecture and I'll show you how. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

22. GreenA04g Meaningful Tests Identify the Reason: Meaningful tests identify the reason for the test. This principle really ties all of the others together. What is the reason for this test? If you can clearly state that, it will be reflected in the code you design; your readable method name and docstring will help us understand it; it will lead directly to the one outcome we're testing; and it will help keep us from straying into testing other unrelated units. It really ties all of the other principles together. Let's look at an example. I made this Sprocket class that takes a beginning rotation and a rotation amount. It makes sure that the rotation is within bounds — it needs to be from 0 to 359 — and it has a turn method that will turn the sprocket. The sprocket is like a gear. And then we have a getter, the rotation method, to get the rotation out, because these variables that I've marked with underscores — that's a convention saying these are internal variables, so don't touch them directly; use my interface instead. So let's look at this test. The reason for this test is that I want to make sure that no matter where you start your rotation value or your rotation amount, and no matter how many times you turn it, the rotation is always a valid value — it's always in bounds. That's the reason for this test, and that's our principle number seven: identify the reason. Okay, so how did I use this as a design tool? Well, as I was writing this, I kept asking myself: how do I want to interact with the sprocket? I know it's going to have these rotation boundaries, and I know that it's got to turn. I thought it would be nice if I could just call turn and have it store its own rotation amount and rotation. I could have required you to pass in the rotation amount every time you call turn, but in my idea of this gear that's constantly turning, the caller shouldn't need to keep track of every gear's rotation — so I said no, that's going to be an intrinsic part of the sprocket being turned. Then I said, what about the rotation? I want a nice interface for getting the rotation out, so I'll use a getter rather than grabbing directly into the implementation — just a nice design thing. We're going to have to interact with the rotation, so I'm going to give you something to interact with so you're not tied to the implementation directly. That's a design choice, but that's what we're talking about: using your testing process as a design tool. Okay, what about readable? Because I chose the reason for this test up front, it was really easy to name the method, and it was really easy to write the docstring. Okay, test one outcome: the one outcome is also directly tied to the reason. The one outcome I'm testing is that the bounds are in the right place. You'll notice that right after I create the sprocket, before I ever turn it, I check that we're within bounds; then, after I turn it, we loop through and check it again and again. That's the one outcome. That doesn't mean I only do one assert, and it doesn't mean I only do the asserts once — in this case I'm running these asserts over and over — but it's always the same one outcome that I'm testing. Test boundaries: well, the rotation is a bounded value.
So here I literally did my rotation bounds checking: here's the before value, the beginning, the middle — I did a few middle values — the end, and the beyond. So we're testing the boundaries. Test results? Well, we're not testing the implementation: I only look at sprocket.rotation() and I only call sprocket.turn(). I never meddle with the internal variables or the implementation, so we're testing the results. Isolate the unit: that one's a bit of a gimme, because this is just a little example and it doesn't touch any other code, so yes, it's isolated. You need to take more care with real code that's tied into other code — see the mock lecture for that. So this really is a meaningful test for this class. I used the code as a design tool; it's readable — I even annotated all the loops explaining what's going on; I test just one outcome; I test boundaries; I test the results, not the implementation; the testing is isolated — it doesn't stray into any outside code; and the reason for the test is clear. Now, one last caveat: the reason for your test cannot be "to get 100% coverage." I'm sorry, that is just a cop-out. You can make up just about anything that is a better reason than that. Even "this test makes sure I can create the object without a crash" is better than "for coverage," because "for coverage" doesn't tell you anything. Find a real reason for your test and you will have a meaningful test. Oh, and I almost forgot: the test actually runs. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

23. GreenA05 Unit Test Organization: How should you organize your unit tests? There are multiple ways you could do it; here is my recommendation. First, choose the file organization you want to use — see the lecture on file organization from the last section if you would like some alternatives to choose from. My recommended file organization is: for every module, there should be a corresponding test module in a test sub-package at the same level. It looks like this. Here I have an armor package with a material module, and there's a corresponding test sub-package with a test material submodule. Same thing up here: I have a blue module, and there's a test sub-package with a test blue submodule. Inside the test modules, there should be at least one test class for every unit you want to test in that module. What constitutes a unit is somewhat of an art form; often a standalone function or a small class will each be considered a unit. The test class should have as many meaningful test methods as you need to test all the functionality of your unit. Let's look at some examples. I have the material module open right here, and you can see I didn't implement any of the code — this is just for structure — but I have a small create-material standalone function and a small class called Material. If we look in the test module, you'll see that I made a TestCase subclass — a class of tests — that tests the standalone function, and then a class that tests the small Material object, because those are the units I chose. If certain functions or classes need a larger-than-normal number of tests, then you may want to create more test classes to keep things organized; you'll have to use your best judgment. Let's see an example of that. Here I have a large class: class Blue has a lot of attributes, a lot of methods, a lot going on — and that is without implementing any of it.
If we look at the corresponding test module, I could choose to treat the entire large class as one unit and put all of the tests in one test class. If I did, then I would put the method or attribute name as a sort of prefix right after "test", and then describe what I'm doing after that. So here I'm doing the lighten tests, and I put a comment here to indicate a section: I'm testing lighten, which is a method — lighten a little, lighten a lot — and then darken a bit, darken a ton, and so on and so forth. That's one way to do it. But if it's a really large class, you may want to consider breaking it up and treating each method as a unit. If you scroll down, you can see an example of this — the other approach, where I chose to treat each method of the class as a unit, so I made a class per method. The class name gets a little longer, because I put in the class that I am testing and then the method that I'm testing; the method names get a little shorter, because I don't need to prefix them anymore — that information is already intrinsic to the class. So I just say test a little, and it's obvious that I'm lightening a little, since all I'm testing here is lighten; and then I've got one for darken, and so on and so forth. So use your best judgment. You don't have to do it this way — this is my recommendation, and it keeps things organized. If you want to establish another convention, that's fine; use your best judgment, establish a convention for how you organize your tests, and then stick to that convention, so that whenever you come back to your tests you're familiar with the structure and know what's going on. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

24. GreenA06 Fixtures: Let's talk about fixtures. There are three kinds: test fixtures, class fixtures, and module fixtures. "Fixture" is a fancy word for setup and teardown, so in all three cases we're talking about how to set up things you need before something is run and tear things down after it has run. Let's start with test fixtures, since in practice these are probably the only ones you will ever actually use. I have some example code here: this little Goblin class. He starts with some health, you can damage him, and when he gets low enough his "ouch" changes to a death message. And here I have my test class. Now, I have everything compressed — I didn't write docstrings, and I put my code in the same file as my tests — because this is an example; don't do this in real life. Test fixtures set up things before, and tear down things after, an individual test. An individual test is a method, so test dying is a test. The way you define them is you add methods called setUp and tearDown — you can define just one or the other if you don't need both. How this works is that the test runner will take the TestGoblin class and instantiate it — so it actually creates a TestGoblin instance — then on that instance it runs setUp, then it runs the one method that it's testing, test dying, then it runs tearDown, and then it throws away the entire instance. Then it repeats the process for each test method: it creates another instance of TestGoblin, runs setUp again, runs test dying again down here, then runs tearDown, then throws the instance away. And since we only have the two methods, it then moves on to the next class. That's what a test fixture is.
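As a rough sketch of where this goblin example is heading — the class and test bodies here are reconstructed from the narration, not the course's exact code — the shape looks something like this:

```python
import unittest

class Goblin:
    def __init__(self):
        self.health = 20

    def damage(self, amount=1):
        self.health -= amount
        return "You killed me!" if self.health <= 0 else "Ouch!"

class TestGoblin(unittest.TestCase):
    def setUp(self):
        # Runs before EACH test method, on a fresh TestGoblin instance.
        self.goblin = Goblin()
        self.goblin.health = 1   # so a single damage() call kills him

    def tearDown(self):
        # Runs after each test method.
        del self.goblin

    def test_dying(self):
        self.assertEqual(self.goblin.damage(), "You killed me!")
```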
Your setUp and tearDown methods live in your class, and they'll be run before and after each test. So what would you do with that? Well, in this case each test sets up a goblin, and in each test we want to quickly kill the goblin. If you look at the code up here, we'd actually have to call damage twenty times before his health would reach zero. Maybe we decide we don't want to do that, so instead of calling it twenty times, we're just going to reach inside and change health to one, to guarantee that, whatever damage is set to, he'll be dead after the first call. And we can do all of that just once, here in setUp. So let's say self.goblin — because the only way you can access what's being set up is if it's stored somewhere you can get to, so we put it on the instance — self.goblin is a Goblin, and then we reach inside it and say self.goblin.health equals one. Now, what do we do in tearDown? The logical way to reverse this is just to delete self.goblin off the instance. Down here, instead of creating a goblin and using it on the next line, we call self.goblin, and since we've set its health to one, this should actually work. Let's quickly save this and run it. You'll see that this test passes and this test fails, because we're not doing that alteration in it yet — pass, fail. Oh, and look at that: I forgot to subclass unittest.TestCase. That's actually a really good example of why we subclass — if you don't, the test runner doesn't know it's a test-case class of tests to be run, and it will just ignore it. If I run this again — there it is. Okay, let's go modify this second one as well, real quick: self.goblin.damage(). Here we're testing what happens if you die twice, essentially — it kills you and kills you again, because of how the logic comes out. A silly behavior to test, but it's just an example. Save that, run it, and we're all happy. Okay — now, if you just want to focus on what's useful, you can stop here; that's really where you'll use test fixtures. If you really want to know about the other fixture stuff, we'll get into it now. So: class fixtures. The way you define class fixtures is you write @classmethod and then def setUpClass, and likewise a @classmethod-decorated tearDownClass. I have never needed to use these in real life — in fact, I've never seen anybody else need them in real life — but they're here. If you're going to use a lot of Green's multiprocessing features, or something else really advanced, I would caution you to avoid class fixtures and module fixtures, because they make things a little tricky. But here they are. So when do these get called? When the test runner is looking for a class to test and finds TestGoblin, it says: okay, TestGoblin is a subclass of TestCase, so I'm going to run tests on it. Before it does that, it calls setUpClass directly on the class, which can change things on the class itself. Then, when it's all done instantiating instances, running individual methods, and throwing them away — when it finishes going through all of the methods — it runs tearDownClass on the class, which can fix things back up how they were. So what could we do with it? Well, the only thing I could really think of was that we could put a counter on TestGoblin and set it to zero. And what do we do in tearDownClass? I'll just delete TestGoblin's counter member.
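For reference, here is roughly where this class-fixture example ends up — a sketch reconstructed from the narration, with placeholder test bodies:

```python
import unittest

class TestGoblin(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once, on the class itself, before any tests in this class.
        cls.counter = 0

    @classmethod
    def tearDownClass(cls):
        # Runs once, after every test in this class has finished.
        del cls.counter

    def setUp(self):
        # Runs before each individual test; counter already exists because
        # setUpClass put it on the class definition.
        type(self).counter += 1
        print("counter is %d" % self.counter)

    def test_dying(self):
        self.assertTrue(True)        # placeholder body for the sketch

    def test_dying_again(self):
        self.assertTrue(True)        # placeholder body for the sketch
```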
Okay, for that to actually do anything, why don't we say that each individual test that runs increments it? Now, if this counter didn't exist on the class definition, then when the instance tries to run setUp, this would crash — so that's how you know it's actually working; otherwise this would crash. And then let's actually see it somewhere as well. We could just print it immediately: print "counter is" — it's a format string — so we'll actually see, for the first time in these lectures, some standard-out output and how Green handles it. Okay, I think that's enough to demonstrate it; let's give it a try. Perfect. So let's run through what it does. The test runner sees the TestGoblin class, says okay, it's a subclass of TestCase, so we're going to run tests. Before it does anything, it runs TestGoblin.setUpClass, which adds a counter integer to the class definition itself, set to zero. Next, it instantiates an instance of TestGoblin and runs setUp, which sets up our little goblin and then increments the counter — which doesn't crash, because the counter is there; it's been created. Then we print. Now, Green captures standard out and lets you know which test printed what, so we see that test dying ran first and the counter was one, because we incremented it and then printed it. Then it ran tearDown, deleted the goblin, and threw away the instance; it made a new instance and ran setUp again, which got our counter to two for test dying again; it ran the test, then the tearDown, and then it was done with all of the methods, so it threw away the instance and ran tearDownClass, which deleted the counter off of TestGoblin. If we wanted to, we could figure out some way, between these tests, to go look at the members of TestGoblin, and you would see that during the tests it has a counter member and afterwards it has been deleted. Then it went on to the next class and repeated the process — only, when these fixture methods are absent, it just skips them; it doesn't do anything for that step. Okay, so that's a less common, less useful, trickier thing you could do. I'm not sure why you would, but this counter was about the only thing I could think of that was remotely realistic. And then the other kind is module fixtures. We follow the same pattern, only here what the runner looks for is global, module-level functions, and at least they follow a sensible naming pattern: setUpModule and tearDownModule. You can probably guess the pattern by now. What happens is, when this entire module or file is loaded, the test runner checks to see if it has a setUpModule function; if it does, it runs that before doing anything else. Then it goes and looks for the TestCase subclasses and does all its usual work, and when it's done with all of its testing, it runs tearDownModule. Now, what could we possibly do here? The only thought I had was: for these tests we could really just define this globally anyway, but we'll define a global in setUpModule, get rid of it in tearDownModule, and use it down here. In this little example test, I'm going through all of my monsters — of which I've only got one right now — and making sure that they start with a non-zero health.
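Roughly where this module-fixture example is heading — a sketch with names reconstructed from the narration, assuming the Goblin class from earlier is importable:

```python
import unittest
from my_lib import Goblin   # hypothetical import of the Goblin class from earlier

def setUpModule():
    # Runs once, before any class in this file is tested.
    global monster_list
    monster_list = []
    monster_list.append(Goblin())   # populated with one goblin for now

def tearDownModule():
    # Runs once, after every test in this file has finished.
    global monster_list
    del monster_list

class TestMonsters(unittest.TestCase):
    def test_monsters(self):
        """Every monster starts with non-zero health."""
        for monster in monster_list:
            self.assertTrue(monster.health > 0)
```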
So let's say we're going to use a global monster list; we'll set it to an empty list in setUpModule, and for the teardown we'll declare global monster list again in tearDownModule and delete it. Okay. Then down here in the monsters test we change this to — well, first we need to say global monster list — and then change the loop to use it. And if we run it, we should see no behavior change: exactly the same. But now this global monster list has been populated with one goblin, so this ran, the assert didn't fail, and we're all happy. And there you go. So: fixtures — module fixtures, class fixtures, and test fixtures. If you do a significant amount of testing, you're going to find yourself using the test fixtures. I would actually encourage you to avoid the class and module fixtures; from experience, they can make things a lot more complicated than they're worth. Most people won't realize they're there or what they're doing, and you can really shoot yourself in the foot. And again, if you're using Green's multiprocessing — that's actually why Green's multiprocessing defaults to handling each entire module with a single process, to avoid these complications — if you specify a target that is smaller than a module, and especially if you specify two separate targets in the same module on the command line, then you can actually get the same module loaded by different processes, and funny things can happen if you touch external stuff with your class and module fixtures. It's always good to avoid touching external stuff with fixtures anyway; if you do want to touch external stuff, look at Green's initializers and finalizers instead, because those are multiprocess-safe. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

25. GreenA07 assertEqual assertNotEqual: Let's examine assertEqual and assertNotEqual. I have some examples here; this test method is going to demonstrate our assert methods. First, assertEqual: it fails if the two arguments are not equal, and it gives a helpful auto-generated assert message, which you can override with an optional third argument — "you see some message." assertEqual has the corresponding assertNotEqual, which is just the inverse. So here, given the same input, this will pass, because we're asserting they're not equal, and one and two are not equal. Things get a lot more interesting with some of the other types — let's take a look. In fact, with some of your collection types you may want to just use the auto-generated message, and I'll show you why. Text one and text two are not the same, so this is going to fail — but look at this message. First it says the two strings are not equal, and then it gives you a little diff: hey, in this message, on the second line, this part — s-e-c-o — is different from this part — the number 2 — and notice there are no little pointers underneath the other characters, because those actually are the same. Really helpful output. Then I did lists. This is, in my opinion, even fancier. It shows you several different views: first it shows the lists in a sort of linear format, then it tells you the first differing element is at index one — banana versus xanana — and then it shows another format, a diff format, where it tries to point out the actual characters inside the differing elements that are different, saying hey, "ba" is different from "xa". Really nice. Now let's look at a set.
With sets, it will tell you which items are unique to each set. So here we have 2, which is only in the first set, {1, 2, 3} — it's not in the second set — and in the second set we have {1, 7, 3}, so 7 is in the second set but not in the first. Really nice. Similar stuff with dictionaries: it compares keys, it compares values, and then it goes inside the keys and values for things like strings and tells you exactly which parts are different. So "Pete" is different from "Spid", but they both end in "er", and "Parker" and "Man" both have an "a" in them, but the rest of the words differ. Really interesting. For the tuple, I demonstrated two entries of different sizes — I think some of the other collection comparisons will do this as well — and look at all the different things you get: there's the default "the whole thing is different"; then it tells you what the first differing element is — it's at index one, and here's what it is in the one and in the other; then it says your second tuple contains an additional element, it's at index two, and this is what it is; and then it gives you your little diff format. Here they're entirely different, so I don't think it even tried to match up any characters. Very helpful. And then we have objects. This is where it gets a little tricky. I just have this little Cow class, and all it does is store a number of hooves — these cows have lots and lots of legs, eight of them. But look at this: one Cow instance is not equal to another Cow instance. That's because, by default, if it's not a built-in primitive — if it's a user-defined class — it compares the two instance objects to see if they're the same object. They're not, and it's showing here that they have different addresses in memory; they're not the same object. It's not comparing whether they're the same type of object; it's comparing whether they're the actual same object. Usually that's not what we mean by "equal," so we can fix this. assertEqual — the equality operator — looks for the double-underscore __eq__ ("equals") method, and this method takes not only self but also other. What actually happens is that one of the instances gets passed to the __eq__ method of the other object, and the other object checks whether they're equivalent. So here we're going to say cows are equal if their numbers of hooves are equal — we just return self's hooves equal to other's hooves. That's our equality. Now if we run it again, we can see that they're equal. In this case we don't get any fancy helpful comparison output, so once again your own message — "cows have a different number of legs" — becomes very important. We'd have to do some trickiness to actually get them to have different numbers of legs, but I don't really see a reason to. So there's assertEqual and assertNotEqual. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

26. GreenA08 assertGreater assertGreaterEqual assertLess assertLessEqual: Let's talk about greater and lesser. There are four methods for testing inequality: assertGreater, assertGreaterEqual, assertLess, and assertLessEqual. They correspond to the four mathematical inequality operators: greater than, greater than or equal to, less than, and less than or equal to. They're really straightforward — you can visualize the operator sitting between the two arguments. So here it would be 2 greater than 1, 100 greater than 0, and so on and so forth.
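A minimal sketch of what those four look like in a test method (the values are illustrative):

```python
import unittest

class TestInequalities(unittest.TestCase):
    def test_inequalities(self):
        """Read each assert with the operator sitting between the two arguments."""
        self.assertGreater(2, 1)            # 2 > 1
        self.assertGreaterEqual(100, 0)     # 100 >= 0
        self.assertGreaterEqual(5, 5)       # 5 >= 5
        self.assertLess(1, 2)               # 1 < 2
        self.assertLessEqual(0, 0)          # 0 <= 0
```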
You can see in this simple example that these tests all pass. If a test fails, you get a simple message telling you which condition was not met — "100 is not greater than 200" — and, like all of these assert methods, you can provide your own message if you have something more meaningful to say. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

27. GreenA09 assertIs assertIsNot assertIsNone assertIsNotNone: Let's discuss the "is" and "is not" assert methods. You should note that the is and is-not methods do not check for equality — that's what the assertEqual and assertNotEqual methods are for. These methods check whether the arguments are the same object, not equivalent objects: the same object in memory. To illustrate this, let me show you something that may be a little deceptive. If I assert that 1 is 1, this is going to evaluate as true — not because one is equal to one, but because in CPython the immutable objects that represent small integers are reused. Every time you specify a literal 1, it reuses the same object, so we're actually passing the same object twice. To demonstrate that, let's print the id of the 1 object twice — so these are actually four usages of the same 1 object. An implementation detail of CPython — the default Python, the one written in C — is that the id function also gives you the memory address of the object. So let's save that and run it, and if you look here, you'll see that this value is exactly the same as that value; they're both the exact same object in memory. Now, it's even easier to see this with mutable things. A list is mutable, so it's not going to be reused. Here we're assigning a to be an empty list and b to be an empty list, but b is a different empty list; and then we say c is the same empty list as a — it points to the same object in memory. We can demonstrate that: first let's put b over here, and we'll see that a and b are different lists.
So you're actually doing conversions on both sides, and then a test for equality where here you could just test the object addresses directly and see if they're both of the nun object and then he is not. None is just that inverted, and that's all there is to it. I'm Nathan stocks from agile perception, and this is python testing with Green. 28. GreenA10 assertIn assertNotIn: assert in and assert not in our very straightforward methods assert in checks to see if the first argument is in the second argument, a certain not in checks to see if the first argument is not in the second argument. I have a little example set up. Let's just run it real quick. See, that passes because Python is in this list. Java is not in this list. You notice that it doesn't do partial matching. It looks for an exact match. When it fails, it looks like this. The default message reads the item. First argument is not found in second argument. And like all a certain methods, it takes an optional message that will override the default one and give you what you give it. I'm Nathan stocks from agile perception, and this is python testing with Green. 29. GreenA11 assertIsInstance assertNotIsInstance: assert is instance and assert not is instance, deal with whether or not the first argument is an instance of the second argument. I have some classes here animal, puppy and rock. You will notice that the base classes subclass object in python to there's two types of classes. There's the old style and the new style to get the new style class, you subclass object. These methods will work with both styles, but the message will be a little bit nicer for the new style. And that's why I chose the new style here. So let's move on. Um, do notice that puppy is a subclass of animal. Let's look at our test method. So here we're going to test assert is instance. So we make an instance of puppy. Now there are three things or three ways that you can pass in something for the second argument. The first option is that the second argument can be a user defined class, so let's make an instance of puppy, and then we'll check to see if puppy is an instance of puppy it ISS that will pass and then we'll check to see if Puppy is an instance of animal or an instance of a subclass of animal . It is so that will also pass. Now let's just run it real quick, Kay. Now a common question that comes up is what if I don't want this to pass if it's a subclass , But if I wanted to fail, if it's subclass, I only wanted to pass if it's the actual class. So I don't want puppy animal to pass because I want to check if it's the actual class. Well, you actually need to use the assert is method and then pass in the type puppy because type will get you the class of an instance and then you pass in the class and you see if they're the same object. That's not the assert is instance. That's the assert is. But that's a common question. So here, this one will pass if I type it correctly. But if we put an animal, then it won't. So that's how you deal with exact classes if you're okay with sub classes than leave it like this. And then, of course, if you pass in a completely different class, it will fail, and like all certain methods, you can pass in a better message if you've got one. Okay, so the second option for the second argument is it could be a built in type that's pretty straightforward. So stir for a string list for a list dicked and so on and so forth. The third option is you can pass in a to people of any number of user defined classes or built in types. 
We've already seen that it passes, so let's just run it again now that we know better — and see that it passes. Then the awkwardly named assertNotIsInstance is just the opposite: let's make sure that puppy is not a Rock. We run it, and that's all there is to it. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

30. GreenA12 assertRaises assertRaisesRegexp: Let's look at some methods that check for exceptions we expect to be raised. I have some functions here that we can use for testing: no error doesn't raise an exception; no args doesn't take any arguments but raises a KeyError with a custom message; lots of args takes four arguments and raises a ValueError. First, let's look at assertRaises. assertRaises takes the exception that you expect as the first argument. For the second argument, it takes any callable — that could be a function, a method, or a class on which you've defined the __call__ attribute so that you can call it like a function. Let's run that real quick. As you can see, no args raises KeyError like it expects. Let's see what it looks like when the error we expect is not raised: no error will not raise the KeyError, and "KeyError not raised" is what it gives us. The syntax for when your function has arguments is: following the exception and the callable, you put each positional argument as an additional argument to assertRaises; when you're done with positional arguments, you can then optionally specify the keyword arguments. The keyword arguments don't have to be ordered, but the positional arguments must be passed in order. Another feature of assertRaises is that you can pass in a tuple of exceptions as the first argument, and if any of those exceptions is raised, the test passes. Note that, because of the handling of the positional and keyword arguments, assertRaises is one of the very few assert methods that does not take an optional message to override the default message. An additional feature of assertRaises is that you can use it as a context manager, with the with statement. It looks like this: you say with self.assertRaises(...) — the exception or tuple of exceptions you're looking for — and then, optionally, you can give it a variable name with "as". This is technically called a context manager, so I just named it context manager. The benefit is that you can then put any amount of code inside: instead of passing in a single function, you can do inline stuff like I did here, or call several different functions in sequence, or have some custom looping logic. It's a little more flexible. And the reason you would name it as a variable is to examine the exception that was raised afterwards. For example, the context manager has an exception attribute that captures the exception that was raised — assuming one was raised — and the message attribute is populated by KeyError if you pass it a string; many of the exception classes do that. If we take a look at that, you can see here KeyError: I passed in "applesauce" as the message, so when I got my context manager, I pulled my exception out of it, looked at the message, and printed out "applesauce." assertRaisesRegexp is very similar to assertRaises, with one additional check.
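Here is a minimal sketch of both forms of assertRaises — the helper names no_args and lots_of_args come from the narration; their bodies are illustrative:

```python
import unittest

def no_args():
    raise KeyError("applesauce")

def lots_of_args(a, b, c, d=None):
    raise ValueError("bad value")

class TestRaises(unittest.TestCase):
    def test_callable_form(self):
        # exception class, then the callable, then its positional/keyword args
        self.assertRaises(KeyError, no_args)
        self.assertRaises(ValueError, lots_of_args, 1, 2, 3, d=4)

    def test_context_manager_form(self):
        # any amount of code can go inside the with-block
        with self.assertRaises(KeyError) as context_manager:
            no_args()
        # examine the captured exception afterwards
        self.assertIn("applesauce", str(context_manager.exception))
```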
assertRaisesRegexp checks whether the exception you expected was raised, then takes that exception, converts it to a string, and checks whether its string representation matches a regular-expression pattern that you provide as the second argument. The third argument is the callable, and then the positional and keyword arguments follow the same syntax as before. So here we're calling the no args function, which raises a KeyError with the custom message "for the beauty of the earth." When you run a KeyError through str() to convert it to a string, it simply returns the message, so this is going to check that no args raises a KeyError which, when converted to a string, matches the pattern "any character, zero or more times, beauty, any character, zero or more times" — in other words, it has "beauty" somewhere in it. It does, so it passes. And of course, if you give it a pattern that doesn't match, it will fail like this and tell you why it doesn't match the string representation. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

31. GreenA13 assertAlmostEqual assertNotAlmostEqual: When you need to test for equality with floating-point values, things can get tricky. Due to rounding errors and the nature of floating-point computation, you tend to get values that are not quite equal but really, really close — and that's normal. If you deal with floating-point values a lot, you already know what I'm talking about. So we have assertAlmostEqual and assertNotAlmostEqual. By default, assertAlmostEqual considers two values the same if they are the same up to the seventh rounded decimal place. Note that we're not talking about significant digits; we're talking about decimal places — literally, after the decimal point: 1, 2, 3, 4, 5, 6, 7. If to that point your two values are equal, the test passes; if not, it fails. So this one right here should pass, because 1.0 and a value that only differs at the eighth decimal place are the same up to the seventh decimal place — there, you see it. Now, it does round the value, so if I change that digit to a nine, so that it rounds up, you can see that the test fails. Places defaults to seven, and places is a keyword argument — it's actually the third argument in the method signature, which I'll mention again in a second — and if you change the value of places, it changes the measurement. Here I'm saying that I only want to compare the first six places, so I've shortened this one: places one through six are the same, the seventh is different, but — I've got to save — as you saw, that passes. There is a different argument you can give instead of places, called delta, and that's simply an allowed difference. If you allow a delta of 0.01, then as long as the two arguments are within 0.01 of each other, the test passes. So I can shorten this up quite a bit — maybe too much — and you can see it still passes. Now, you cannot pass both places and delta to assertAlmostEqual; if you do, it raises an exception — it's one or the other. And because places is the third positional argument, you can't just put your message in the third position; it's technically the fourth positional argument, but you probably won't remember that, so I recommend, for this method, naming your message explicitly with msg= and using it like that. And assertNotAlmostEqual is just the opposite; everything else is the same.
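A minimal sketch of assertAlmostEqual with places and delta (the values are illustrative, not the course's):

```python
import unittest

class TestAlmostEqual(unittest.TestCase):
    def test_almost_equal(self):
        # Same out to the default seven rounded decimal places: passes.
        self.assertAlmostEqual(1.0, 1.00000001)

        # Only compare the first three decimal places.
        self.assertAlmostEqual(1.0001, 1.0002, places=3)

        # Or allow an absolute difference instead of counting places.
        self.assertAlmostEqual(1.0, 1.005, delta=0.01)

        # Passing both places and delta is an error.
        with self.assertRaises(TypeError):
            self.assertAlmostEqual(1.0, 1.0, places=3, delta=0.1)
```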
I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

32. GreenA14 assertRegexpMatches assertNotRegexpMatches: For checking the content of a string, there is the assertRegexpMatches method. I made a little text variable here that we can reuse. Looking at the method: you put the text first, and next you put either a regular-expression string or a pre-compiled regular-expression pattern. For this lecture I'm going to assume you understand what regular expressions are, but I'll give a quick explanation of each of these just in case you don't. This regular expression checks whether the string starts with "pumpkins" — it does, so that passes; let's take a look at that real quick. This regular expression checks that "from the" is somewhere in the middle of the string. And here's an example of compiling your regular expression and then passing in the compiled pattern — that can be really useful if you've already got pre-compiled patterns in a library and you want to use them in your tests like this. An interesting note on this method is that, unlike most of them, the optional message argument does not replace the default message; it prefixes it. So right here there's our message string that we passed in, and the remainder of that line is the default error message. And then assertNotRegexpMatches simply checks that a regular expression fails to match your text — coming back down, you can see that monsters are not in that string. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

33. GreenA15 assertItemsEqual assertCountEqual: assertItemsEqual is a method that counts the items in two sequences and makes sure they have the same number of equivalent items in each sequence. Usually, when we talk about sequences, we're talking about lists or tuples. The same task can be accomplished by using assertEqual on two sorted sequences, except that this method also works with items that are unhashable and thus can't be sorted. Let's take a look. You should note that assertItemsEqual from Python 2 has been renamed to assertCountEqual in Python 3. So here we have assertItemsEqual on list one and list two; if you look at list one and list two, they really are equivalent — the same items in the same order — and that passes. Again, in Python 3 this is called assertCountEqual. If we compare list one and list three, that also passes, because list three, even though it's in the reverse order, has the same number of the same items: one a, one b, and one c. In other words, this method is often equal to assertEqual of the sorted first argument and the sorted second argument. We've already seen that that passes. Now, this one fails: list four has two c's in it, so it actually lets us know — you read it as "the first argument has one c, the second argument has two c's." It's actually a pretty decent message. And then list one and list five are nothing alike, but they let us demonstrate the optional overriding message. That's all there is to it. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

34. GreenA16 Skipping Tests: Before we get started, I suggest you avoid skipping tests unless you are forced to. Skipping tests tends to lead to problems: no one pays attention to skipped tests. You may be forced to skip a test that cannot run in the current environment.
I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

34. GreenA16 Skipping Tests: Before we get started, I suggest you avoid skipping tests unless you are forced to. Skipping tests tends to lead to problems; no one pays attention to skipped tests. You may be forced to skip a test that cannot run in the current environment, for example if the test needs a specific operating system, or if the test requires optional software that isn't installed in the current environment. So let's learn five ways that you can skip tests. Yes, five ways; they totally went overboard on skip functionality. First, you can use the skip decorator to unconditionally skip a class or a method. Please don't use this: if you're going to unconditionally, always skip a test, consider deleting the test instead. Here's what it looks like if you unconditionally skip a method. Notice that all of the ways we skip tests require a reason. The reason is not optional; you always have to give a reason, and it's always the last argument, sometimes the only argument, but always the last one. Okay, the second method: conditional skipping. This is actually useful if you are forced to skip a test. You can use skipIf: if the first argument evaluates to true, then the class or method that you're decorating will be skipped. If you skip a class, by the way, it will skip everything in it; it won't run the class fixtures, it won't run the method fixtures, and it won't run any of the test methods. The reason is always the last argument. You should always make sure that the reason reads well, because when you run the tests, the skip reason will show up. The next way to do this is the skipUnless decorator, unittest.skipUnless. It is exactly the same as skipIf, only with inverted logic: if the first argument evaluates to false, then it will skip. The fourth way to skip a test is to use the test case's skipTest method; you just give it the reason message for why you're skipping. You can do all sorts of complicated looping and logic before you reach your skipTest call, so if your logic doesn't fit nicely into a skipIf or a skipUnless, skipTest is probably the best way to do it. There is a fifth way: you can raise unittest.SkipTest, which is an exception that has a required message. And those are the five ways that you can skip a test. You should note that when you run the tests, all skips look the same. The skip reasons always show up as the last item in the verbose view, and by default there is a skip section at the end of the test run for each test that was skipped, which lists not only the skipped status and the exact method but also the reason. If you don't want to see the skip reports after the test run, then you can pass the --no-skip-report or -k option to Green.
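Here is a minimal sketch showing all five ways side by side; the skip conditions are stand-ins, not the ones used in the lecture.

    import sys
    import unittest

    class TestSkipping(unittest.TestCase):

        @unittest.skip("demonstrating an unconditional skip; the reason is required")
        def test_skip_always(self):
            self.fail("this never runs")

        @unittest.skipIf(sys.platform == "win32", "does not run on Windows")
        def test_skip_if(self):
            pass

        @unittest.skipUnless(sys.platform.startswith("linux"), "requires Linux")
        def test_skip_unless(self):
            pass

        def test_skip_test_method(self):
            # Any amount of logic can run before you decide to skip.
            if sys.maxsize < 2 ** 32:
                self.skipTest("requires a 64-bit Python build")

        def test_raise_skip_test(self):
            raise unittest.SkipTest("raising the exception directly also skips")

    if __name__ == "__main__":
        unittest.main()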
I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

35. GreenB01 Advanced Section Overview: Welcome to the advanced section. This section is about the mock module, everything there is to know about Green, integration tests and continuous integration, and any tips or tricks that I can think of that I haven't already shared. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

36. GreenB02 Mock Overview: I am going to give you an overview of the mock library. The mock library allows you to replace parts of your code with mock objects in unit tests; this is how you isolate the unit you're testing. The mock objects are flexible objects that automatically adapt to how they're used and record what happens to them, so that you can make assertions about them in your tests. The mock library is large and complex, and I encourage you to refer often to the Python standard library documentation for the mock library. The documentation comes up as the first link if you Google "python mock". In Python 2, mock is available as a standalone package you can install with pip; in Python 3.3 and later, mock is included in the standard library as a submodule of unittest. If you would like your tests to run on both Python 2 and Python 3, then you can do something like this: try importing mock from one location, and in the except block import it from the other location. I am going to use the Python interpreter to demonstrate some features of the mock library, because I think it will be easier to explore things line by line rather than trying to mentally parse a bunch of code in a file. To help me keep my presentation cleaner, I have created a clear function that I will use to clear my terminal window periodically; the clear function just calls the standard POSIX clear command under the hood. The basic mock object that you will deal with most often is called MagicMock. Let's see how it works. MagicMock objects create attributes and methods as you try to access them. Both calls to the object itself and calls to any methods on the object are recorded so that you can inspect them later. For example, if I create a mock object and then call it with an argument, I can then check to make sure that the argument I expected was in the call. The same thing works if I call an arbitrary method on the mock object: the method will be automatically created and the call will be recorded. There are several different methods you can use to check the calls; I will go over them all in the MagicMock lecture. Calls to a mock object can have return values; use the return_value attribute of the mock object to set that up. Mock objects can also have side effects when they're called; for example, you can use the side_effect attribute to set an exception to be raised. Side effects take precedence over return values. You can clear the side effect by setting the side_effect attribute back to None. Well-designed tests for well-designed code will usually use a patcher to create the mocks for the tests, rather than manually setting up MagicMocks inside each test. The patch decorator is one way to do this. When you decorate a test method with the patch decorator, the patched object is temporarily replaced with a freshly created mock object for the duration of the method call and then restored afterwards. For example, if we patch the sys.version string for this test function, then inside the test function sys.version is a MagicMock, but once we're outside the test, sys.version is back to normal. Notice that when you patch something, the MagicMock used to patch it is also passed to the decorated method or function. Sometimes the default behavior of a mock object is just too accommodating, for example when you want the MagicMock to give you an error if you try to access an attribute or method that wouldn't exist on the real object. Autospeccing can clone the look and feel of an object's interface while retaining all the flexibility of the mocked attributes and methods. When you autospec, if something attempts to access an attribute or method on the MagicMock that wouldn't have been on the real object, you get an error, but the attributes and methods that would be on the real object are MagicMocks on your mock object. The mock library is fairly large.
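Before moving on, here is a minimal sketch that pulls this overview together: the two import locations plus basic call recording, return values, and side effects. It is not the exact interpreter session from the video.

    try:
        from unittest import mock      # Python 3.3 and later
    except ImportError:
        import mock                    # Python 2: pip install mock

    m = mock.MagicMock()
    m(42)                              # calls to the mock itself are recorded
    m.some_method("hello")             # so are calls to auto-created methods
    m.assert_called_with(42)
    m.some_method.assert_called_with("hello")

    m.return_value = "a result"        # calls can have return values
    print(m(1, 2, 3))                  # prints: a result

    m.side_effect = KeyError("boom")   # side effects take precedence over return values
    try:
        m()
    except KeyError:
        print("the side effect was raised")
    m.side_effect = None               # clearing the side effect restores return_value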
Rather than try to cram everything into one huge lecture, I have broken it out into several lectures that cover the most important portions of the library in detail. For more information, be sure to check out the standard library documentation. If you truly want to be a mock expert, I suggest reading the mock library source code; it's an interesting read, and like most standard library modules, the mock library itself is implemented in Python. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

37. GreenB03 MagicMock Part 1: The MagicMock object is so complicated that I decided to devote two lectures to it. In this lecture I will go over the most commonly used parts of the MagicMock object; in Part 2 I will go over everything else. So, have you been typing along? I told you at the beginning of the course that you would learn more if you actually typed along. I understand how hard that can be in the lectures where I have a test file already created before the lecture, but this isn't one of those lectures. I will be using the interactive Python interpreter to make things easier to see in action, so I really, really encourage you to type along. First, I'll create the same simple clear command that I can reuse to clear my terminal when it starts getting messy and hard to read. Remember, the mock library is in a slightly different location depending on which version of Python you are using; you can use a try/except block like this one to make your code work with both. I'll be using Python 2, but hopefully you are on Python 3 by the time you're taking this course; if you are, then just substitute unittest.mock anywhere I import mock. Let's make a MagicMock object. From here on out, whenever I say mock, I mean MagicMock. Take note of the id value of the mock: the Python object id is a unique number that identifies a particular object, and it is your friend when it comes to telling MagicMock objects apart. When you try to access a non-existent attribute of a mock, the mock creates a child mock, attaches the child as the attribute you are looking for, and then returns it. Note that this is a different mock than the one we started with. If you would like, you can instead set the attribute to be some real object or value. If you call a mock, a different mock, the return value mock, will be created and returned. No matter what arguments you pass to the call, the same return value mock will be returned every time. That means you can modify the returned mock after you have a reference to it and the changes will persist; I'm not certain that's the wisest thing to do, but you certainly can do it. When you combine the fact that accessing a non-existent attribute causes a child mock to be created and attached as that attribute with the fact that calling a mock causes the return value mock to be created and returned, you can probably guess what happens if you try to call a non-existent method on a mock: the method gets created as a child mock and then called, which creates a return value mock and returns it. Okay, let's get a new, clean mock. Most of the built-in methods and attributes of a mock deal with affecting call behavior, tracking calls, or making assertions about calls. Let's go over the two attributes that affect how calls behave. The first is return_value. When you set the return_value attribute, any call to the mock will return that value; if you set return_value to a string, then that string is returned. The return_value only overrides what is returned, it does not add any extra functionality; if you give it a sequence like a list, it just returns the entire list.
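Here is a minimal sketch of that behavior; the attribute names are stand-ins, not the ones typed in the interpreter session.

    from unittest import mock     # in Python 2: import mock

    m = mock.MagicMock()
    print(id(m))                  # the object id helps you tell mocks apart

    # Accessing an attribute that does not exist creates a child mock,
    # attaches it as that attribute, and returns it.
    child = m.shields
    print(id(child))              # a different object than m
    assert child is m.shields     # but the same child comes back every time

    # You can also just assign a real value to an attribute instead.
    m.color = "red"
    assert m.color == "red"

    # Calling a mock creates and returns its return value mock, and the same
    # one comes back no matter what arguments you pass.
    assert m() is m(1, 2, "anything")

    # Calling a non-existent method combines the two behaviors: the method is
    # created as a child mock and called, and its return value mock is returned.
    result = m.fire("lasers")
    assert result is m.fire.return_value

    # Setting return_value only overrides what a call returns.
    m.return_value = ["a", "b", "c"]
    assert m() == ["a", "b", "c"]   # the whole list comes back; it is not iterated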
The side_effect attribute is quite a bit more complicated. It overrides return_value and has three distinct behaviors. First, if you set side_effect to an exception class or instance, that exception is raised. Second, if you set side_effect to a literal object, like a list, tuple, set, dictionary, or even a string, then each time you call the mock, the next item in the literal will be returned; if you run out of items in the literal, a StopIteration exception will be raised. The special constant named DEFAULT from the mock library can be used when you want the original return_value attribute to be returned instead of a specific side effect value. The third behavior of side_effect happens if you set side_effect to a function: the function will be called with the arguments the mock received, and the return value of the function will be returned as the mock's return value. If the function returns the special DEFAULT constant, then the mock call will use the original return_value attribute instead. Note that if you don't match the function signature, you will get the normal TypeError raised. Finally, you can set side_effect back to None to disable it, in which case return_value takes precedence again. There are two simple mock attributes that you can inspect for information about your calls. First, the called attribute is a boolean indicating whether or not the mock has ever been called. Second, the call_count attribute is an integer indicating exactly how many times your mock has been called. Last on our list of most commonly used parts are the assert methods. assert_any_call takes a set of arguments and raises an exception if the mock has never been called with them. assert_not_called raises an exception if the mock has ever been called. assert_called_once raises an exception if the mock hasn't been called exactly once. Now it gets a little more complicated: assert_called_with takes a set of arguments and checks whether the most recent call was made with exactly those arguments, and assert_called_once_with is the same, except that the most recent call must be the only call ever made to the mock. Finally, assert_has_calls is the assert method that searches the call history for a specific sequence of calls. It is also the only assert method that requires you to wrap your arguments in call objects. This method takes a list of call objects; by default it will look for that exact sequence of calls, in order, somewhere in the call history. If you don't care what order the calls occur in, then you can pass any_order=True and the assert will be more forgiving.
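Here is a minimal sketch of side_effect's three behaviors and the call-tracking assertions described above; the values are stand-ins rather than the interpreter session from the video.

    from unittest import mock        # in Python 2: import mock

    m = mock.MagicMock()

    # 1. An exception class or instance is raised on every call.
    m.side_effect = ValueError("boom")
    try:
        m()
    except ValueError:
        pass

    # 2. A literal yields one item per call; mock.DEFAULT falls back to
    #    return_value, and running out of items raises StopIteration.
    m.return_value = "fallback"
    m.side_effect = ["first", mock.DEFAULT]
    assert m() == "first"
    assert m() == "fallback"

    # 3. A function is called with the mock's arguments and its result is
    #    returned (returning mock.DEFAULT falls back to return_value again).
    m.side_effect = lambda x: x * 2
    assert m(21) == 42

    m.side_effect = None             # disable side effects; return_value wins again
    assert m() == "fallback"

    # Call tracking and the assert methods.
    m.reset_mock()                   # clears called, call_count, and the call history
    m(1)
    m(2, power="max")
    assert m.called
    assert m.call_count == 2
    m.assert_called_with(2, power="max")     # checks the most recent call
    m.assert_any_call(1)                     # checks anywhere in the history
    m.assert_has_calls([mock.call(1), mock.call(2, power="max")])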
Those are all of the most commonly used parts of MagicMock. At this point, you know more about the MagicMock object than most professional Python programmers; you certainly know enough to start using it very effectively. And yet there is more to learn in MagicMock Part 2, coming up next. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

38. GreenB04 MagicMock Part 2: This is MagicMock Part 2. In this lecture I will go over the parts of MagicMock that you are less likely to use; they're still valuable to know, though. I will also go over a few of the lesser-known siblings of MagicMock. Sometimes MagicMock is just a little too flexible: by default, anything you try to access on a mock gets automatically created. When you don't want that to happen, you can spec a mock. Spec is an abbreviation for specify or specification, and it is used a lot like the word configure. When you spec a mock, you are telling it which attributes and methods it should pretend to have and which it should not. Let's begin by showing the number one classic problem with the super-flexible mock. I'll make a mock and assert that it has been called with 100. Hmm, that's interesting. Why did the assert succeed? I didn't call the mock with 100; it should have raised an exception. Can you spot the problem? I misspelled the assert method. On anything but a mock object, an exception would have been raised right there and I would have fixed my spelling error. But this is a mock: it happily created a new child mock, attached it under the misspelled name, and then called it with the argument 100, which created a return value mock and returned it. Nothing fail-worthy there. This makes for a horrible, useless test that will always pass, because it doesn't assert anything. Let's spec it. Let's say that this is the function we are testing; you spec it like this. Now when I misspell the method, the interpreter raises an exception complaining about it. That's great, so why don't we spec all the time? Well, there are drawbacks. One, the speccing code crawls all over the original object; if you have properties or descriptors that trigger side effects, that can really mess things up. Two, if your object creates attributes inside its __init__ method, which is super common to do, then the spec won't know anything about those attributes, because it won't call __init__, and the mock will then prevent you from creating those attributes later. So you should decide when you want to spec your objects. There are also ways to customize the speccing manually, but I have never observed anyone using them in the real world; if you are interested, go read the Python 3 unittest.mock documentation for all the rest of the speccing details. Let's switch over to examining calls again. There are two attributes on a mock that you can manually inspect for call information. call_args contains the arguments for the most recent call to the mock; remember that to compare the values, you just wrap the arguments in a call object. Speaking of manually comparing the arguments, there is a special constant called ANY that you can use in place of an argument that you don't care about. call_args_list contains the arguments for all the calls made to the mock since it was last reset. Speaking of reset, we haven't talked about that yet. All of the call information can be reset, including called and call_count, which we discussed in Part 1: just call the reset_mock method. Note that this does not reset any customizations to the mock other than the call statistics. Trickier than the call_args stuff is the mock_calls attribute, which stores all calls to the mock, to the mock's return value, and to all the mock's children. Why would we ever want such a thing? Well, to check whether a set of chained calls happened correctly. To do the checking, we will also use the call object's call_list method. Here's an example. First I'll make a new mock and then make a really complicated chained call to it. You can see that the mock_calls representation is a mess. Lucky for us, we have an easy way to replicate that mess to compare against it; note that the only difference is that we start with call and add .call_list() on the end.
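Here is a minimal sketch of speccing and of inspecting calls manually; the fire function and the chained call are stand-ins, not the objects used on screen.

    from unittest import mock            # in Python 2: import mock

    def fire(weapon, power):
        # A stand-in for the function under test from the lecture.
        return "{0} fired at {1}%".format(weapon, power)

    m = mock.MagicMock(spec=fire)
    m("laser", 100)
    m.assert_called_once_with("laser", 100)
    # With spec set, a typo such as m.asert_called_once_with(...) raises
    # AttributeError instead of silently passing.

    # Manually inspecting calls.
    assert m.call_args == mock.call("laser", 100)
    assert m.call_args_list == [mock.call("laser", 100)]

    m("ion cannon", 50)
    m.assert_called_with("ion cannon", mock.ANY)   # ANY matches an argument we don't care about

    m.reset_mock()                       # clears called, call_count, and the call history
    assert m.call_count == 0

    # mock_calls records calls to a mock, to its return value, and to its
    # children, which is how you verify a chained call.
    chained = mock.MagicMock()
    chained.connect("powers.db").cursor().execute("SELECT 1")
    expected = mock.call.connect("powers.db").cursor().execute("SELECT 1").call_list()
    assert chained.mock_calls == expected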
I promised to mention some of MagicMock's siblings. NonCallableMagicMock is just like MagicMock, except that it itself cannot be called. That restriction doesn't apply to methods on it, though, so it's nearly identical. There are also Mock and NonCallableMock objects. They differ from the Magic variants in that you have to manually configure any of their magic methods that you want to use, like __len__, for example. The authors of the mock library realized the limited utility of these mocks and use MagicMocks themselves just about everywhere, as you'll see in the mock patchers lecture. If you really want to customize magic methods, there is a special section in the Python documentation that goes over it. The last sibling is PropertyMock. It is used when you want to mock a property of an object. The only sane way to use a PropertyMock is through a mock patcher, so I'll save the details of that one for the mock patchers lecture. Other than patchers, I really don't see anyone using the rest of the features in the mock library, but I'll list them out real quick and you can look them up in the Python documentation if you are interested: you can delete attributes of a mock, you can wrap a real object with a mock just so that you can observe the calls to the real object, you can customize which attributes of a mock appear in a dir() call, and mocks can be given names, though not so you can keep them as pets. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

39. GreenB05 Mock Patchers: Patchers are essential when you want to mock something that you aren't passing into a function or method. In other words, when the code you are testing interacts with other code, patchers help you mock the other code. You are almost always going to use the patch function as your patcher. The patch function can be used as a function decorator, a class decorator, a context manager, or manually. We will start with the function decorator syntax. Here is some code that I want to test. Specifically, I want to test that a hypothetical spaceship called an X-Wing actually fires four lasers at a time, as documented. If we load the code and run it manually, we can see how the code currently behaves. As you can see, the ship and its parts proudly print to standard out during creation. When we fire the lasers, the sound for each of the lasers is combined into a single string that is also printed to standard out. This is a unit test, so I don't care whether the lasers actually work or not for this test, just that the X-Wing called fire on them as I expected. Let's take a look at using patch as a function decorator. First we import patch from the mock module; in Python 3 you would import it from unittest.mock. Next we use the decorator syntax to decorate the test method. Python tends to use the words function and method interchangeably; I'll use function when I mean a standalone function and method when I mean a function that is part of a class, like this one. So yes, you can use a function decorator on a method. The required argument to the patch function is the importable object which you would like to replace. When you patch a function or method, the mocked object is passed in as an extra argument. Since decorators are applied from the inside out, meaning the decorator closest to the function is applied first and then the one above that and so on, the extra arguments appear in the reverse order that the patches appear above the method.
So we have the Laser mock innermost and then the mock standard out after that; if we had another patch above that, it would be passed in after the mock standard out. Sometimes there is a better object to use than a MagicMock. In this case sys.stdout is much more useful if it is replaced by a StringIO object, which mimics a file handle that can be written to. Use the new_callable parameter to change what type of object is used for your mock; by default patch uses a MagicMock. The X-Wing class creates lasers by calling the Laser class four times, or in this case the MagicMock we replaced it with. As we learned in the MagicMock Part 1 lecture, calling a MagicMock returns the same return value mock every time it is called, so our four lasers in the weapons list are all the same return value MagicMock. This means that we only need to configure its fire method's return value once; then the string join call inside the X-Wing's fire method will find a list of real strings to join and not crash on us. Now we create our own test X-Wing object. Note that since the Laser is a MagicMock, the init print message from the real Laser doesn't occur; you can check this by inspecting the mock standard out if you want. Then we fire. In this test, we simply check that the return value MagicMock of the Laser mock, which represents all four weapons, has been called four times, once for each entry in the weapons list. Let's run the test to make sure things are working. You will notice that no standard out escaped the test, because we patched sys.stdout. The second test has the same patches but takes a different approach to verifying behavior: instead of counting the calls to the mocked Laser's fire method, we count the occurrences of our fire value that reach our mock standard out object. Using the patch function as a class decorator is almost the same, but it applies the same set of patches to every test method in the class. Once again, this relies on you following the naming conventions; if you decided to ignore that advice and use custom test method names, then you will have to set patch.TEST_PREFIX accordingly before you use it, so that the patch function can find your special test names. Nothing else is different in this test file, so let's move on. Using the patch function as a context manager in a with statement has the benefit of a smaller scope, but it requires a lot more indentation and is harder to read than the decorator syntax. The as clause is optional; in this case, if you don't need to access the mock, you can leave out "as mock_laser" and "as mock_stdout". The functionality of the tests once again is exactly the same.
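Here is a minimal sketch of the decorator and context-manager styles, written for Python 3. It patches os.getcwd and sys.stdout as stand-in targets; it is not the X-Wing code from the lecture, which isn't reproduced here.

    import io
    import os
    import unittest
    from unittest.mock import patch, MagicMock   # in Python 2: from mock import ...

    def current_dir_name():
        # A tiny stand-in for the code under test.
        name = os.path.basename(os.getcwd())
        print(name)
        return name

    class TestPatchStyles(unittest.TestCase):

        # Function decorator style. Decorators apply from the inside out, so
        # the mock from the bottom patch arrives as the first extra argument.
        @patch("sys.stdout", new_callable=io.StringIO)
        @patch("os.getcwd")
        def test_decorator(self, mock_getcwd, mock_stdout):
            mock_getcwd.return_value = "/tmp/hangar"
            self.assertEqual(current_dir_name(), "hangar")
            mock_getcwd.assert_called_once_with()
            self.assertEqual(mock_stdout.getvalue(), "hangar\n")   # nothing escaped to the real stdout

        # Context manager style: smaller scope, more indentation. The as
        # clause is left out because we don't need the mock itself.
        def test_context_manager(self):
            with patch("os.getcwd", return_value="/tmp/hangar"):
                self.assertEqual(os.path.basename(os.getcwd()), "hangar")
            self.assertNotIsInstance(os.getcwd, MagicMock)   # restored outside the with block

    if __name__ == "__main__":
        unittest.main()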
Finally, you can also use the patch function to manually obtain a patcher and manage it yourself. Manual patching is most often done in the setUp fixture. Manual management has the advantage of giving you full control over when the patching starts and stops for each patcher; it has the disadvantages of being way more complicated and taking a lot of typing. There are three steps when you do it this way. One: create the patcher and save a reference to it so that you can call start and stop later. Two: start the patcher and save the mock it returns, if you want to use it; nothing gets patched until you call start. Three: try to make sure stop is guaranteed to be called. This is by far the most difficult part, and the part most people mess up. The test case's addCleanup method is your safest choice; if you try anything else, you're bound to mess it up. Definitely don't try putting it in a tearDown method, as the tearDown method is not even run in a lot of cases. Once again, the test functionality is the same as in the last three examples. Now that you know the four ways to use the patch function, let's examine the function itself in more detail. First, how to reference what you are patching. The rule is that you reference what you're patching as if you were importing it from the directory that you are running the tests from, that is, the current working directory of the test runner. In my case, the test runner is running from the same directory as my test file, so to mock a lightsaber that is in the same file, called test_location, I need to reference it as test_location.Lightsaber. To mock a turbolaser in the spaceship module that a frigate in the spaceship module uses, I still reference it from the same perspective. You'll notice that we didn't mock out standard out in this case, so we get some fun output from everything except the mocked turbolaser. Also, a quick trick: if you want to mock something in local scope in the interpreter, you prefix it with __main__. It looks like this. What if you want to spec, as with MagicMocks? Though there are a bunch of ways that you can spec while patching, you will only really use autospec. If your object has all the attributes you need to mock without calling its __init__ method, then you can just pass autospec=True and the patch function will autospec the importable target. That works fine for our Laser if we just want to mock the fire method and not allow accessing any methods the Laser wouldn't have. However, the sound attribute isn't created unless the real __init__ method is called. In that case you can create some other test object that has the signature you want and pass it as the argument to autospec. I did that here, and now I have access to a mocked sound attribute.
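Here is a minimal sketch of the manual-patching steps with addCleanup, plus autospec. The fetch_greeting function is a stand-in defined in the same file, so the target is referenced through __main__ when the file is run directly; it is not the lecture's spaceship code.

    import unittest
    from unittest.mock import patch   # in Python 2: from mock import patch

    def fetch_greeting(name):
        # A stand-in for an expensive call we don't want the test to make.
        raise RuntimeError("talked to the real external system")

    class TestManualPatching(unittest.TestCase):

        def setUp(self):
            # 1. Create the patcher and keep a reference to it.
            patcher = patch("__main__.fetch_greeting", autospec=True)
            # 2. Start it and keep the mock it returns; nothing is patched
            #    until you call start().
            self.mock_fetch = patcher.start()
            # 3. Guarantee that stop() runs. addCleanup is the safest choice,
            #    since tearDown is not always run.
            self.addCleanup(patcher.stop)

        def test_manual_patch(self):
            self.mock_fetch.return_value = "hello"
            self.assertEqual(fetch_greeting("R2-D2"), "hello")
            self.mock_fetch.assert_called_once_with("R2-D2")
            # Because of autospec, calling fetch_greeting() with no arguments
            # here would raise a TypeError, matching the real signature.

    if __name__ == "__main__":
        unittest.main()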
And with that, my friend, you have learned the fundamentals of patching. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

40. GreenB07 Green Output Options: Green has several output options that you can use to influence its output. First and foremost, -h or --help will print the entire help instructions; we will be reviewing the output options and format options sections. -V (capital V) or --version will print out the version of Green, the version of coverage if present, and the version of Python, and then exit without running any tests. The version information is extremely helpful if you are submitting a bug to Green's issue tracker on GitHub. Now let's talk about the three verbosity levels. The default verbosity of one outputs one character per test, plus the report lists and summary information. The status characters are the same as the ones used by Python's default test runner. Note that the breakdown of status types includes all statuses; some test runners, such as the default Python implementation, inexplicably omit some of the statuses, such as passes. Green includes them all. You should note that passing a single -v is still verbosity level one, which is the same as the default. With a verbosity of two, which can be set with two v's or two full --verbose flags if you'd rather, the test hierarchy is output with method names. The root nodes are bold and indicate Python modules or files. One indentation level in are the TestCase subclass names, in plain text. Two indentation levels in are the method names. Left-aligned in front of the method names are the same single-character test statuses that you would see in verbosity level one. The statuses are left-aligned instead of right-aligned on purpose: one, it is very easy to scan down a straight line visually and see what statuses you have, and two, if Green is being run by some other script, it makes it really easy to parse the statuses. With a verbosity of three, version information is added to the top of the output, and instead of test methods we display the docstrings. One thing to note about the docstring handling is that the first paragraph of the docstring is reflowed for output; the first paragraph includes all docstring text up to the first blank line. If Green detects that it is not attached directly to a terminal, it will not color the output; this is to make it easier to use Green's output in command-line pipes and subprocesses of other tools. You can override Green's color auto-detection with -t or --termcolor to force color on, and -T (capital T) or --notermcolor to force color off. Here's what it looks like to force color on, and here's what it looks like to force color off. As a side note, Green tries to auto-detect Microsoft Windows and switch to using Windows libraries for coloring the output. This is great if you are on an actual Windows box, but if you are on an exotic setup that emulates UNIX terminals, you can force Green to use UNIX-style color codes with -W (capital W) or --disable-windows; this option will be ignored on non-Windows platforms. If you use the AppVeyor continuous integration service, you will want to use this flag, because AppVeyor is one of those exotic setups. Our next three flags affect report list behavior. First, let's look at the skip section of the report list. By default, every test that is skipped is listed in the report section. If you don't want to see this skip information in the report section, then use -k or --no-skip-report; the skipped status will still show up in the test statuses, but it won't have a separate listing in the report section. By default, both standard out and standard error are captured during tests and then reported in the report section. There are two options to change this. First, you can simply discard the captured standard out and standard error by using -q or --quiet-stdout. The other option is to not capture standard out and standard error at all and let them interrupt the test result output, like most other test runners do. I can't imagine why you would want to do this, but if you do, it is the -a or --allow-stdout flag, and it looks like this. Note that since the tests are run by multiple worker processes that run in parallel, the location of your standard out and standard error output in this case will be non-deterministic and can vary from run to run. If you use the root logger object from the Python logging module, you may notice that Green disables it; that is because Green uses it itself. With -l or --logging you can re-enable the root logger. Finally, if you wish to debug Green itself, you can turn on Green's internal debug output with --debug or -d.
Like verbosity, the amount of debug output can be increased by specifying the flag multiple times. Debugging also enables the root logger, which we use for the debug output. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

41. GreenB08 Green Initializer and Finalizer: Using multiple worker processes to run tests in parallel is a simple concept with profound consequences for functional tests. The purpose of a functional test is to see whether systems work together. Sometimes the systems your code interacts with are completely external to Python: you may want to interact with files, an API on the network, a database, attached hardware, or any number of things. In this example I'll use an SQLite database, which stores data in a file. Typically, a functional test will assume it has exclusive access to the database it is talking to; the test could easily break if other tests are simultaneously modifying the database. Green solves this problem by introducing the concept of a process initializer and finalizer. You simply tell Green which function to run to initialize external resources and which function to run to finalize, or clean up, the external resources. Each worker process runs the initializer before it begins and the finalizer when it completes all of its tasks. Let's take a look at this magical powers library that I have here. The code I want to test is the get_powers function here at the bottom. This function connects to the SQLite database, queries it for the names of all the magical powers, and then returns them in a set. I have two tests for this function. The first test is a functional test that expects get_powers to actually retrieve some information from the database. The second test is a unit test, which mocks out the external database and only tests the logic of the function itself. If I run the tests now, you will see that the functional test hits an error, because the module doesn't automatically set up a global database connection like the function is expecting. The unit test passes just fine: it mocks the database connection, so it just interacted with a mock object. Just a warning: do not have your code connect by default to a live database or some other type of important external resource. If you do, your tests will likely destroy it when you accidentally run them sometime without specifying your initializer and finalizer. Let's add the initializer to get the functional test working. You can use -i or --initializer to specify the function, as if you were importing it from the directory that you are running Green from. In my case, the powers module is in the same directory I am running Green from, so I specify it as -i powers.create_test_db. The create_test_db function chooses a unique file name for the database file, creates a connection to it, and then puts a little bit of data in it. Note that you can use the process ID, or pid, of the worker process when you need to do something like create a unique name for a file; the operating system guarantees that every process has a unique process ID.
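The powers module itself isn't shown in the transcript, so here is a rough, assumed sketch of what an initializer like powers.create_test_db could look like; the table layout and file naming are guesses.

    import os
    import sqlite3

    connection = None
    cursor = None

    def create_test_db():
        # Run once in each worker process: green -i powers.create_test_db
        global connection, cursor
        # os.getpid() gives each worker process its own database file.
        db_name = "test-powers-{0}.db".format(os.getpid())
        connection = sqlite3.connect(db_name)
        cursor = connection.cursor()
        cursor.execute("CREATE TABLE powers (name TEXT)")
        cursor.executemany("INSERT INTO powers (name) VALUES (?)",
                           [("flight",), ("invisibility",)])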
When I run the tests, the functional test passes. But look at this: not only did each worker leave behind a database file, but since the database transactions weren't committed, each worker also left behind a journal file. Since I have eight processors on this machine, there are eight of each. That's pretty messy, which is why we have the finalizer option. You specify the finalizer with the same dotted syntax, but using -z or --finalizer. clean_up_test_db closes the database connection, which removes the journal file, and then removes the database file. Now when I run the tests, we can see that they cleaned up after themselves. Now that we have this basic structure set up, I could write any number of modules and corresponding tests, and they could all interact with the database safely during testing; the other functions and modules would just need to import the database cursor that the initializer set up in the powers module. If you have any additional external systems that you need to initialize or finalize, just add more logic to your functions to handle them as well.
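And here is a matching finalizer sketch, continuing the assumed powers module from the previous sketch; again, the exact code from the lecture isn't reproduced here.

    import os

    def clean_up_test_db():
        # Run once in each worker process when it finishes:
        # green -z powers.clean_up_test_db
        global connection, cursor
        if connection is not None:
            connection.close()    # rolls back and gets rid of the journal file
            connection = None
            cursor = None
        db_name = "test-powers-{0}.db".format(os.getpid())
        if os.path.exists(db_name):
            os.remove(db_name)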
I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

42. GreenB09 Green Number of Processes: I make the claim that Green is fast; it's time to back up that claim. I have a test file with eight test methods in it. Each method takes at least a second to run, so naturally it takes at least eight seconds to run them all. Let's take a look. One thing to note is that the currently running test is in a bold font, which then turns the appropriate color once the test is completed. Green spawns a worker process for each CPU in the system, so why didn't my tests run in parallel? The answer is that by default we divide the work up by Python module, so each file gets sent separately to the queue for the workers to process; that way we only have the overhead of loading each test file once. However, each target that you specify on the command line will also be sent to the worker queue separately, so if you specify targets with smaller than module granularity, then you can run them in parallel. For example, let's specify the two test cases separately. I'll leave it at the default verbosity this time. The tests finished in half the time, because two workers each processed half of the tests in parallel. Now, why did we get the output for the first four tests one second at a time, but the output for the second four tests all at once? Did you notice that? Let me run it again. Green reports the results sequentially, even though they're running in parallel; otherwise the tests would be in some random order every time they ran, which would be a mess. If the next test that Green wants to output isn't done, then Green waits for it; if the next test that Green wants has already completed, then it just outputs it and moves to the next one. So while Green was waiting for the first four tests to complete, the second four tests were completing at the same time, and once Green got to the second four tests, all it had to do was display the results as fast as it could. I have eight logical processors on this machine, so let's see what happens if we specify each individual test method on the command line. I typed this command out ahead of time, so I'll just copy and paste it. I specified each of the eight test cases individually: just over a second. Much better. You can override the number of worker processes that Green uses with the --processes flag, or -s for short. For example, if I reduce the number of processes to two, then running these eight tests will take much longer. The number of processes that you can run is limited by the settings of your particular operating system. In our case, running more than eight won't do any good, because there are only eight tests to run; but if you do have a large number of tests, feel free to experiment with higher numbers. You should see your CPU and memory usage go up as you use more processes, while your test time goes down. If you set the number high enough, you will eventually max out your allowed number of processes, or your CPU, or your memory, or hit some other bottleneck. Now, in a real project, the default granularity of one test module per worker usually works out pretty well. Let's take a look at that. I have created eight identical copies of this test file, named A through H; that's eight files with eight tests each, for a total of 64 tests. Python's test runner would take over 64 seconds to run these tests. Let's run it with Green. Now let's look at a crazy example just for fun. I copied the eight test files into eight separate packages. If you do the math, that ends up being 512 tests in 64 files. So let's run Green with 64 worker processes and see how it goes. Green is fast. Run your tests faster. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

43. GreenB10 Green Config Files: In addition to using command-line arguments, Green settings can be stored in a number of config files. Four config file locations are searched in order; if multiple config files are found, the settings in later config files overwrite the settings of earlier config files. The first config file that Green looks for is a file named .green in your user's home directory. Second, Green will examine the value of the GREEN_CONFIG environment variable; if it points to a readable config file, then that is used next. Third, Green looks in the current working directory for a .green file; this method will commonly be used to set per-project configuration. Fourth, you can specify a config file with the -c or --config command-line argument; this config file takes precedence over all the earlier ones. Finally, it is worth mentioning that any command-line arguments you provide will override the same setting from all the config files. Now let's take a look at config file syntax. I have a project here that I will configure using a .green file in the current working directory. The config file setting always uses the long name of the command-line option. So if we want to increase the verbosity, we write it out as verbose, then a space, an equals sign, and the value. We set accumulated flags like verbose and debug as integers corresponding to how many times you would have specified the flag on the command line; so for -vvv we put verbose = 3. Let's save the file and run Green again. For flags that toggle a feature, you set the value to true to enable the feature or false to force-disable it. Let's disable the skip report and turn on coverage. Now when I run Green again, there is no unsightly skip report, and in its place we have a useful coverage report. Options that take string arguments are pretty straightforward as well. Let's say we don't want to see the __init__.py file in our coverage report: I just put omit-patterns = and the pattern I want to omit. Now we have the config file that we like, and it will apply to just this project, since we put the .green config file in this directory. If I really liked these settings and wanted them to apply to all my projects, I could just move this .green file to my home folder, and then Green would find it no matter where I run it from. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
44. GreenB11 Green bash and zsh tab completion: If you use bash or Z shell, then I have a treat for you: tab completion. This is super simple to set up and provides some awesome tab completion benefits. First, just run green --help; under the second-to-last section, called Enabling Shell Completion, copy this line and put it at the bottom of your .bashrc or .zshrc file. Either restart your terminal or manually source your config file again so that the setting takes effect. Now we can have some fun with tab completion. I'll run through this twice, first with bash, then with Z shell. If I type in a garbage command, we can see that I am in bash. You start by typing green, and then if you press tab, you can see that Green tab-completes the target. If you press tab multiple times, you get a helpful list of all possible completions. You can continue typing to choose one of the completions and then press tab again; rinse and repeat until you have the target that you want. Now I'm going to exit bash and go back to Z shell. I'll type another garbage command to verify that I am in Z shell now. I follow the same process as before: tab, type a little, tab, type a little more. You can see it works just the same, only Z shell presents the completion list a little differently. Warning: each time you press tab, Green runs the test auto-discovery on the entire file tree from your working directory down into all of its subdirectories. So if your working directory has a million files in it, you're going to get a huge pause; don't tab-complete in massive projects. If anyone wants to contribute tab completion for other shells like csh, PowerShell, and so on, you will need to use Green's --completions option to generate the current completion list in your implementation. For example, if I pass this partial target to the --completions option, then Green returns the list of all possible completions. Happy tab-completing. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.

45. GreenB12 Green Fail Fast: Sometimes you want your test run to stop as soon as something goes wrong. That's what the -f or --failfast option is for: failfast makes the run stop when it hits a test with a fail or error status. I have a test_failfast module here that I will use to demonstrate. First, let's create a passing test. Since tests are run in alphabetical order, I will put a z near the start of the method name to make sure it comes last. Let's check and make sure this runs properly. Good. Now let's add a failing test; the fail method will cause this test to immediately fail with a short message. Let's run the tests again. Two things to note: first, there is no output for the passing test, because the test run stopped at the first failing test; second, Green emits a warning to let you know that some tests may not have been run. Now let's add a test that hits an error. The error status is caused by unhandled exceptions, so I will just raise an exception. When I run the tests again, we hit the error and stop. You should note that, due to the details of Green's implementation, you may actually have a few extra tests run before everything gets shut down, but you won't see any output from them. So if your test run pauses for a moment before exiting after the first failure, it is just coordinating the completion and cleanup of all the remaining tests across the worker processes. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
46. GreenB13 Temporary Files and Directories: I always recommend using the tempfile module from the Python standard library whenever you need to deal with temporary files or directories. This is doubly important when running tests in Green. Since Green launches multiple worker processes to run your tests, there will be multiple tests running at the same time, and if two tests try to use the same temporary file, the tests could interfere with each other. This is a classic race condition, which is hard to debug. Besides that, the tempfile module is designed to avoid name collisions and security vulnerabilities. It is always safe to use objects and functions from the tempfile module like TemporaryFile, NamedTemporaryFile, mkstemp, or mkdtemp. As an extra bonus, TemporaryFile and NamedTemporaryFile clean up their own files when they're destroyed. Here's the pattern for cleaning up after mkstemp and mkdtemp. As an extra precaution, Green ensures that the tempfile module uses a new, unique temporary directory per worker process; if you need to retrieve this value, you can do so with tempfile.gettempdir(). Green actually cleans up the entire worker's temporary directory during worker shutdown, which helps protect you from building up an extreme number of temporary files on your disk. You should still follow best practices and make sure each test cleans up after itself. You can use the process ID provided by os.getpid() as a way to uniquely identify the worker your tests run in. This can also be helpful to ensure uniqueness when provisioning other external resources; for example, you could use the process ID as the suffix of an external database name. See the lecture on initializers and finalizers for a more in-depth discussion of that. The worker processes themselves are long-lived and synchronous, so you can reuse external resources among the tests that run on the same worker without any chance that those tests will use that external resource at the same time. It is still up to you to ensure that every test cleans up after itself; otherwise, your external resource may be in an unexpected state for the next test that the worker runs. I'm Nathan Stocks from Agile Perception, and this is Python Testing with Green.
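As a closing reference, here is a minimal sketch of the tempfile patterns mentioned in this lecture.

    import os
    import shutil
    import tempfile

    # NamedTemporaryFile (and TemporaryFile) clean up their own file.
    with tempfile.NamedTemporaryFile() as handle:
        handle.write(b"scratch data")

    # mkstemp and mkdtemp leave the cleanup to you.
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"scratch data")
    finally:
        os.close(fd)
        os.remove(path)

    temp_dir = tempfile.mkdtemp()
    try:
        with open(os.path.join(temp_dir, "scratch.txt"), "w") as f:
            f.write("scratch data")
    finally:
        shutil.rmtree(temp_dir)

    # Green points tempfile at a unique directory per worker process, and
    # os.getpid() identifies the worker your test is running in.
    print(tempfile.gettempdir(), os.getpid())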