Mastering Playwright & Python: Build a Complete, Job-Ready Test Automation Framework from Scratch | Avi Cherny | Skillshare


Mastering Playwright & Python: Build a Complete, Job-Ready Test Automation Framework from Scratch

Avi Cherny



Lessons in This Class

  • 1. Main intro (1:16)
  • 2. Designing a Scalable Project Architecture (5:08)
  • 3. Creating the Project Structure in PyCharm (2:06)
  • 4. Setting Up a Reliable Project Configuration (4:31)
  • 5. Understanding pytest (2:49)
  • 6. Managing Dependencies with requirements (2:41)
  • 7. Building the Base Structure and Testing Principles (2:02)
  • 8. Planning Your Base with Pseudocode (1:56)
  • 9. Coding the Base Structure (5:02)
  • 10. Adding Logging and Screenshots for Better Reporting (7:30)
  • 11. Adding Logging and Screenshots for Better Reporting (13:13)
  • 12. Integrating Utilities into the Framework (5:58)
  • 13. Introduction to the Page Object Model (POM) (3:21)
  • 14. Planning Effective Page Objects (3:20)
  • 15. Building a Login Page Object with POM (16:06)
  • 16. Test Design Foundations From TDD to Automation (3:20)
  • 17. Designing Real-World Test Scenarios (5:16)
  • 18. Using Fixtures for Smarter Test Setup (14:13)
  • 19. Writing Your First Real Test Case (7:18)
  • 20. Test Validations That Matter – Part 1 (11:14)
  • 21. Test Validations That Matter – Part 2 (10:16)
  • 22. Data-Driven Testing with Parameters – Part 1 (15:31)
  • 23. Data-Driven Testing – Advanced Techniques – Part 2 (6:54)
  • 24. Running Tests From IDE to Full Execution (3:12)
  • 25. Debugging with Breakpoints and Analysis Tools (19:05)
  • 26. Generating Reports with Allure (12:29)
  • 27. Course Wrap Up and Next Steps (4:44)


Students: 2 · Projects: --

About This Class

Ready to fast-track your career in QA and test automation? In this hands-on, project-based class, you’ll build a complete, professional-grade test automation framework from scratch using Playwright and Python—two of the most in-demand tools in modern software testing.

You’ll learn how to:

  • Set up your environment with Python and Playwright

  • Design a maintainable test framework using Page Objects

  • Implement data-driven testing to handle multiple scenarios efficiently

  • Capture screenshots and logs for effective debugging

  • Generate beautiful Allure reports to share results with your team

We’ll also cover the “why” behind each best practice, so you’re not just copying code—you’re learning how to build, structure, and scale a framework like automation engineers at leading tech companies.

This class is ideal for:

  • QA professionals looking to expand into automation

  • Manual testers ready to start coding

  • Developers interested in strengthening their testing skills

Basic knowledge of Python and web selectors (like CSS or XPath) is helpful but not required. Whether you’re just getting started or looking to refine your skills, this class will provide a clear, step-by-step roadmap to mastering real-world automation with Playwright and Python.

As part of the class, you’ll also complete a class project: building your own automation framework from scratch. This hands-on exercise will reinforce each concept and give you a professional project to showcase in your portfolio.

Meet Your Teacher


Avi Cherny

Teacher
Level: All Levels



Transcripts

1. Main intro: You ever tried building an automation framework only to quickly feel overwhelmed? Well, you're definitely not alone. Whether you're a manual tester or an automation developer, you don't need special talent. You just need clarity. My name is Avi Cherny and I've been building successful automation frameworks for over a decade. In this course, we will simplify automation step by step. We'll build a clear, reliable, and maintainable automation framework from scratch. You'll gain skills that make you stand out at work, in interviews, or anywhere else it matters. By the end, you will have practical automation skills and a production-ready framework. You'll gain confidence in your tests and accelerate your career growth. Because at the end of the day, nothing is truly automatic until someone writes the automation for it. Ready to be that someone? Let's dive in. 2. Designing a Scalable Project Architecture: Hi, I'm Avi Cherny, and welcome to this tutorial on building an automation project. Before we dive into the code, it is important to understand how our project is organized and why the structure is so crucial. I will walk you through the project structure, explain each folder and its purpose, and show you how this setup helps us write clean, modular, and maintainable automation code. Why is this so important? Because the structure is based on best practices and automation design principles. It ensures that each part of the project is organized according to clear responsibilities. Let's go through the project folder by folder and understand the role of each one. I've named our project automation_project. The first folder is page_objects. This is where all our page object classes are stored. For those unfamiliar with the Page Object Model, this is a design pattern that represents each webpage as an independent object with its own functionality. Why is this beneficial? Because it simplifies our code, reduces duplication, keeps everything organized, and ensures that all the logic related to a specific page is contained within its corresponding page object. What does this folder contain? BasePage, an abstract class containing general functions used across multiple pages, such as clicking buttons and entering text. LoginPage represents the login page and includes all login-related functionality. MainPage represents the main page and includes all the main-page-specific functionality. The next folder is tests. The tests folder is where we organize all our test cases by functionality. Let me remove those arrows to avoid confusion. All login-related tests, like successful login and unsuccessful login, will live inside the login test file. Each test file will include setup and teardown logic so we don't have to write it repeatedly in every test, so the test cases remain short, clean, and easy to understand. Now, right under the tests folder, we have conftest.py. This file plays a crucial role in managing our test execution. Why is conftest.py important? It defines fixtures, which help us reuse setup and teardown logic across multiple test cases. It includes hooks that control how tests run. Next, we have the helpers folder. This folder contains utility functions and general configurations that aren't specific to a single page or test. Inside this folder we'll have config, which stores base URLs and environment configurations. Utils contains helper functions, such as generating test data, taking screenshots, and handling logs. Validation includes functions responsible for test validations.
With this structure, our code stays flexible and follows the DRY principle, making it easier to maintain and scale. Let's remove those arrows as well. In addition, we need the following files: pytest.ini, which stores all pytest-related configuration; requirements.txt, which lists all project dependencies that need to be installed; and README.md, which usually serves as just documentation, but we are going to make it something much more. A well-organized automation project ensures scalability and efficiency: separate logic and tests, scale easily by adding pages or tests with minimal changes, and keep a simple and clear hierarchy. 3. Creating the Project Structure in PyCharm: As we saw in the diagram, our project is called automation_project, and as you probably remember, it includes page_objects, where we will store all our page objects, and tests, the folder that will contain all our test cases, and the helpers utility folder that will hold additional support files. Inside page_objects we'll have the base page, a base class that other pages will inherit from. In addition, it also includes the login page, which will handle interactions with the login page. It will also include the main page, which will represent the main page of the application. The tests folder will contain all test cases, like the login test, the test script for the login functionality, and conftest.py. The helpers folder includes the config file, the configuration settings for the project, and also utils, utility functions that can be used across tests. And let's not forget validation: methods for validating expected results. And of course, at the root level, we will also have pytest.ini, the configuration settings for pytest, and of course requirements.txt, which contains the project's dependencies. And finally, README.md. Perfect. This structure matches exactly with the diagram we created.
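For reference, the structure described above would look roughly like this on disk. This is a sketch based on the walkthrough; the exact file names (base_page.py, login_test.py, and so on) are the obvious Python equivalents of the names mentioned and may differ in your own project:

    automation_project/
        page_objects/
            base_page.py
            login_page.py
            main_page.py
        tests/
            conftest.py
            login_test.py
        helpers/
            config.py
            utils.py
            validation.py
        pytest.ini
        requirements.txt
        README.md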
pytest.ini takes control over how your tests run. Ever notice how sometimes pytest prints tons of output, and other times it's almost silent? Or how sometimes it opens a browser window, and other times it doesn't? Do you know why? Because there is no pytest.ini file setting clear rules. With a pytest.ini file, you decide how your tests should run. Want more detailed output? One setting, and you're done. Do you want consistency across all tests? Another setting, and it's fixed. Less confusion, more control. It's exactly how it should be. The file README.md is there so you don't lose your mind, or make others lose theirs. There's nothing more frustrating than getting someone else's code and having no idea how to run it. You search for documentation, ask around, dig through internal company wikis. Well, now imagine someone else getting your code. What do they do? Ask you the same questions. A README.md file saves everyone from this headache. It's not just for documentation. You can turn it into a script. With a single click it installs everything and gets you up and running in seconds. Imagine how great it is to start a new project, press only one button, and everything just works. 5. Understanding pytest: Okay, so now we are ready to use pytest and run our automation scripts, right? Well, sort of. What happens when our project grows larger, the test execution begins to take too long, or we are swamped with logs? Rather than waiting for issues to arise, let's proactively address the challenges by configuring our pytest settings. If you're looking for better control, the first thing we should do is set up the pytest.ini file. Think of this as the control center of your automation tests. Your pytest.ini must always start with the [pytest] section header. Without this section header, pytest wouldn't recognize your configuration settings at all. Next, let's use addopts. Essentially, it allows us to automatically apply options like -v and -s. These two flags give us more detailed output during our test execution, helping us quickly identify issues when they arise. You will definitely want to consider adding --headed as well. This option opens the browser window visually, meaning you can actually watch the browser as it runs the test cases, which is great for debugging. Additionally, it's a good practice to include --alluredir, allowing you to generate beautiful, detailed Allure reports. These reports are incredibly valuable for monitoring and analyzing your tests. Let's also set log_cli to true. This will enable logs to be displayed directly in your command line interface. You should also define the log level option, setting it to something like INFO. This ensures that you only receive logs that are informative but not too overwhelming. If needed, you can adjust this level to DEBUG, WARNING, or ERROR depending on your needs. And here you go. You have proactively addressed scalability, logging, and reporting, common challenges that can overwhelm you as your automation project grows. 6. Managing Dependencies with requirements: We have reached a point where we need to ensure that everything in our environment is fully synchronized. No unexpected issues. No more situations like "it works on my machine, but not on yours." Well, to achieve this, we will utilize a requirements file. The idea behind the requirements file is to list all our dependencies clearly in one place, allowing us to install them all at once. This approach eliminates the need to repeatedly run pip install multiple times.
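To make that concrete, a minimal pytest.ini along the lines described might look like this. The values are illustrative: --headed comes from the pytest-playwright plugin, --alluredir from the Allure pytest plugin, and the log level key in pytest is log_cli_level.

    [pytest]
    addopts = -v -s --headed --alluredir=allure-results
    log_cli = true
    log_cli_level = INFO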
Instead, you'll have a single command to install every dependency we need. Think of it as your project's grocery list. You write down everything you need before going shopping. So what's on your project's list? First, our project depends heavily on pytest. pytest is essential because it manages test discovery and execution, allowing us to easily run our tests. It also provides helpful features like fixtures and built-in logging. Next, we have Playwright. Playwright enables us to automate browser actions: launch browsers, interact with web elements, fill forms, and perform clicks. Essentially, it handles all the interactions we need for browser automation. Additionally, we will need the pytest-playwright package. This package integrates pytest with Playwright, saving us from manual configuration and setup. Without it, we would have to manually handle browser initialization and test management. With pytest-playwright, everything is managed automatically, streamlining our testing process. Finally, we will include Allure. Allure provides clear and visual test reporting, ensuring our results are easy to interpret and share. This combination forms the minimum essential setup needed for our automation project. 7. Building the Base Structure and Testing Principles: So what do you think should go in a README file? Of course, you can put everything here, because it's not just for new people joining the project. Even you might forget what you did. You probably won't remember what you did six months ago, even if it was something really interesting. But what really interests me more is the next step. For example, I have a dependency. I want to make sure that anyone using my project installs that dependency. Sure, they could go and do it manually. But that's not what I want. I want a script to handle it for us. To do that, we need three backticks. Then we write bash, which puts us into script mode. And this is where the magic begins. Of course, you can expand the script to do all sorts of things, but that's not the goal here. The point is to give you an idea. So let's write pip install for our requirements.txt, and let's install the browsers too. Of course, we could expand it further so it also runs tests automatically. But the idea here is just to get a solid setup in place. Now, let's see how it works. Click the Run option to execute the bash script and watch it in action. We are all set to get started. This was just the setup, but now we are ready to dive into the actual coding. In the next section, we will jump straight into the core of the project. So get ready, because things are about to get interesting. 8. Planning Your Base with Pseudocode: Think about this for a moment. What happens when your tests fail? How long does it actually take you to find out what went wrong? Can you pinpoint the exact issue? Is the bug in the code? A problem with the website? An issue with the internet connection? I can guarantee that if you don't know where the problem is, finding it could take you even longer than rewriting the entire test from scratch. On top of that, you have a team to consider. Each team member has their own coding style, and before you know it, you will spend most of your time debugging and searching for issues instead of progressing with your tests and tasks. How do we avoid wasting so much time just trying to figure out what went wrong? This is exactly where the base page comes into play. It ensures that every action is logged, screenshots are captured, and clear error messages are provided.
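As a sketch of what those two files might contain (allure-pytest is assumed here as the Allure integration package, and version pins are omitted), requirements.txt could list:

    pytest
    playwright
    pytest-playwright
    allure-pytest

and the runnable block in README.md (three backticks, then bash, as described above) could be as simple as:

    ```bash
    pip install -r requirements.txt
    playwright install
    ```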
Detailed logs are available. Well, everything you need to identify the issue as quickly as possible, often without even looking at the code. And I hope now you understand why, in my opinion, this is one of the most crucial parts of any automation project. If you implement it correctly, you will save yourself dozens of hours of unnecessary work. If you don't, your entire automation framework will be fragile, cumbersome, and difficult to maintain. Now that we understand the importance of the base page, let's dive into the implementation. 9. Coding the Base Structure: Before we start clicking buttons like there is no tomorrow, let's take a moment to talk about structuring our Playwright-based automation framework. At the heart of everything, we have the base page. Think of it as the control center for all page-related actions in our tests. Instead of writing duplicate code for every page in our app, we create a BasePage class that centralizes all common actions: clicking, typing, navigating, you name it. Now, why do we build on top of Page? Simple. Playwright gives us a Page object, which is like our remote control for the browser. Instead of reinventing the wheel or, worse, copying and pasting code across tests, we extend its capabilities by wrapping actions in a safe execution layer. This lets us handle errors, log events, and take screenshots without breaking a sweat. If this sounds like overkill, think again. Imagine a team of developers, each with their own creative approach to automation. Some prefer try/except, others believe and hope for the best. Our base page ensures consistency, error handling, and most importantly, less debugging at 2:00 A.M. Before we dive into the implementation, let's install Playwright. Open your terminal and run pip install playwright. If you skip the previous lessons, surprises like this are bound to happen. Hopefully, this serves as a good reminder of why watching the lessons in order is a smart move. Now let's make sure our code actually recognizes Page. Without an import, Python wouldn't know what we are trying to extend. Now, let's create a protection layer for our actions. Instead of letting actions fail silently, which is a nightmare to debug, we will do the following: one, log every action before it runs; two, capture errors and describe what went wrong; three, take screenshots for better debugging; four, fail gracefully instead of leaving us guessing. How does this work? Every method, whether it's clicking, typing, or navigating, will go through safe_execute. This method is all about protecting your test steps. Think of it like a safety net: you wrap crucial actions like clicks, text inputs, or form submissions in a try/except. If everything goes smoothly, great; no one even knows the net was there. But if something fails, you can catch the exception, log the details, take a screenshot, and then re-raise the same exception to ensure the failure is still visible in your test reports. Logging and screenshots on failure are crucial. Without them, it's like you're troubleshooting in the dark. After that, an error-catching mechanism will capture any errors. Then a log will describe the issue and its severity, a screenshot will be taken, and finally, the test will fail. Of course, this can be done in different ways. You could take a screenshot before the action to see the before state and compare it to the after state. It doesn't really matter. What matters is that you have at least this minimal structure. In my opinion, this is the essential setup.
It serves as protection for any basic operation you perform, especially in a large team where each developer has their own coding style. This pattern is a hallmark of robust test automation frameworks. It brings clarity, maintainability, and better debugging. Now back to implementation. 10. Adding Logging and Screenshots for Better Reporting: self.page will be assigned to page. Storing page in self.page allows the class to access and use it across methods without needing to pass it around repeatedly. The safe_execute method will receive three parameters. Action: the function or action to execute. Action name serves as a clear identifier for logging, debugging, and error handling. When an exception occurs, having a meaningful action name helps pinpoint exactly which action failed, making troubleshooting much faster. Think of it as adding a name tag to each action. *args allows the method to handle dynamic inputs, making safe_execute flexible enough to execute various functions without knowing their exact parameters beforehand. For example, if safe_execute is used to enter text in a field, args might contain the string to type. If it's clicking an element, args might be empty. This enables calling different actions with different parameters seamlessly. Using *args also ensures cleaner, reusable code, as you don't need to define separate methods for different argument variations. Just pass whatever the action requires, and it will be executed properly. We'll make sure to take care of logging effectively later on. The action parameter is simply a function reference. You can pass in any callable, like a button click or a validation, along with the necessary arguments. The method then invokes action(*args) within a try/except block, with logging and screenshots on failure. This way, you can reuse one safe execution pattern for a variety of actions without cluttering your code. In the case of failure, we log the error details and capture a screenshot to provide clear debugging insights. This ensures that every issue is documented and easily traceable. Which methods should use safe_execute? Any action that interacts with the page and can fail, like clicks. Let's take a look at the click_element method. This method is designed to safely handle element clicks. It receives a locator, which represents the element we want to interact with. Once provided, this locator is then passed through safe_execute, ensuring that the click action is executed with proper error handling, logging, and debugging support, and it executes locator.click. By doing this, we ensure that any failures, like missing elements or timing issues, are logged and handled properly. The action name "click element" helps with tracking and debugging, making our automation more reliable and transparent. Looks like we are missing an import for Locator. Unless you enjoy error messages more than actual test runs. Well, quick fix: just add this at the top, and we are back in business. Similarly, we are going to apply the same flexible approach here: a method that will handle typing text, giving us a straightforward way to input data. Just like our previous example, it will receive a locator, which tells us exactly where we are sending the text, and also a parameter text: str, making it explicit that we are dealing with a string. Just like before, safe_execute will run in the background, passing the locator as well as the action we want to perform, in this case typing.
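Putting the pieces described so far together (plus the navigate_to helper that comes up just below), a rough BasePage sketch might look like this. It is not the course's exact code; the logging and screenshot utilities get wired in during the next lessons, so for now a failure is simply printed and re-raised:

    from playwright.sync_api import Page, Locator

    class BasePage:
        def __init__(self, page: Page):
            # Keep the Playwright page so every method can use it
            self.page = page

        def safe_execute(self, action, action_name: str, *args):
            # One protection layer for every action: run it, and on failure
            # report which action broke before re-raising the exception.
            try:
                return action(*args)
            except Exception as error:
                print(f"Action failed: {action_name} ({error})")  # replaced later by log_message/take_screenshot
                raise

        def click_element(self, locator: Locator):
            # Pass locator.click without parentheses; safe_execute calls it
            self.safe_execute(locator.click, "click element")

        def type_text(self, locator: Locator, text: str):
            self.safe_execute(locator.fill, "type text", text)

        def navigate_to(self, url: str):
            self.safe_execute(self.page.goto, "navigate to", url)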
Note that fill does not include parentheses here, because safe_execute will call the action later. The action name parameter is a descriptive label for the action being executed, in this case "type text". The args parameter, in this case text, is the actual input passed to the fill action. It tells safe_execute what data to provide when executing the action. By passing text, we ensure that fill receives the correct value to type into the input field. Similarly, we can expand this further. That's the beauty of the base page: it's extendable, allowing us to perform any action. The next method I want to add is navigate_to, which will receive a URL. This brings us to an important question. Why do we need this method? How can we ensure smooth navigation between pages? The goto method helps us achieve this by allowing navigation to a specific URL within the browser instance. As you can probably guess, we will use safe_execute for the navigate action. The goto method is used to navigate to a specified URL within the browser instance. As I mentioned, we need to remove the parentheses; the action handler will take care of that. This method will be called navigate_to. Similarly, we can expand this further. That's the beauty of the base page and safe_execute: it's extendable, allowing us to perform any action. Every action that might fail is wrapped in a mechanism, so we have full control over its failure. Additionally, we are separating concerns here. One part is responsible for executing actions while another handles error management, like a good DevOps engineer keeping concerns separate. This allows us to expand the code without modifying its core functionality. 11. Adding Logging and Screenshots for Better Reporting: All right, time to step things up. Let's add logs and, of course, screenshots. These are the bread and butter of test automation. So where do we actually put them? One thing is painfully clear: if every class handles logging and screenshots by itself, you're basically writing the same code over and over again. That's how you end up with bloated files, inconsistent formats, and a developer slowly losing faith in humanity. So where do we centralize this logic? My go-to answer is utils, because let's face it, in automation we repeat ourselves a lot. Click here, wait there, check this, screenshot that. Logs and screenshots are two of the biggest culprits. But here's the catch. Logs are only useful when they're done right. You ever open a log file and find lines and lines of random messages like "step started", "still running", "step finished"? Cool story, bro, but where is the part that tells me what broke? I've lost count of how many times I had to scroll through a hundred lines of logs just to find one useful line. So here's my question to you. How do you find the sweet spot between too much and not enough logging? Hit me in the comments, because, yeah, I really think it's an art. Now, when I log, I aim for smart, scalable, and clean, not just dumping prints everywhere. First step, I define clear log levels using an enum. Let's name it LogLevel. It will inherit from Enum, nothing fancy, but trust me, it will pay off. Just make sure to import it first from enum. Now, which log levels do we want? Well, INFO for general updates, stuff like a test starting. DEBUG for those "I'm about to lose my mind" moments. WARNING when something fishy happens but doesn't break the test. ERROR for, well, actual errors. And CRITICAL when the server is on fire and we all need to go home. So far so good.
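As a quick sketch, the enum described here could look like this. The string values are illustrative, chosen to match the standard logging method names, which comes in handy a bit later:

    from enum import Enum

    class LogLevel(Enum):
        INFO = "info"
        DEBUG = "debug"
        WARNING = "warning"
        ERROR = "error"
        CRITICAL = "critical"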
Now, let's think: what should a single log message include? Well, it obviously needs the actual message, which should be of type str. Also include the log level, which should be of type LogLevel. Then we define the Allure flag. For full control over logging, my experience shows that it's best to include a flag. This flag will determine whether the log should be attached to Allure. Since it's a flag, it should be of type boolean, true or false, so we can decide what gets into the report and what stays in the shadows. By default, yeah, logs are attached, but sometimes we want to keep it clean. Logging is crucial for debugging and understanding test execution, but not every log should be attached to Allure. Based on my experience, having full control over which logs appear in Allure reports can significantly improve readability and debugging efficiency. Here's a real-life case that happened to me. We once had a test suite where every single log was attached to Allure, including low-level debug logs. At first, it seemed like a good idea. But when we ran the tests at scale, the reports became huge and practically unreadable. Finding a crucial failure was like looking for a needle in a haystack. On the other hand, there was another case where we didn't attach enough logs. A test was failing randomly, and we couldn't figure out why, because Allure only showed the high-level steps. After adding detailed logs, only for failures, we immediately spotted the issue. All right. Let's talk about logging levels. You ever work on an automation framework and feel like your logs are either too chatty or too silent? Either you're drowning in debug messages or you get nothing until the whole system crashes. That's exactly why we use log levels. They help control how much information gets logged and when. So what's happening here? This function is basically a message router. You give it a log level and it makes sure the message gets sent to the right logging function. If we attach something to Allure, that basically means we want it to show up in the report. So yeah, we call allure.attach. It's going to get the message we are logging, the actual log content that gets attached to the report. This is why we don't just see the log level, but also the exact details of what happened, making it easier to debug. For that, we need a name, something like a log prefix plus level.value.upper(), which gives us a nice uppercase, readable title. Perfect for visual clarity in the report. Now, to avoid running into errors with this, we obviously need to install Allure. Run the pip install for Allure, or just follow the steps in the README like we discussed earlier. First, we need to import the allure module itself. That's our gateway to everything Allure-related. Then we define the type of attachment. In our case, plain text. To achieve that, let's use allure.attachment_type.TEXT. The text type is exactly what we need for simple log messages. Now, here is the cool part. This setup means only selected logs go into Allure. No more dumping the entire console. So when something breaks, you are not left guessing. You'll have exactly what you need, nicely structured by severity level. Nice, logs are set and ready to go. Let's move on to screenshots. We are going to create a function that captures what's on screen when things go wrong, because, hey, logs are great, but sometimes they're just not enough. When a test fails, a screenshot can make the issue crystal clear. You know the saying: a picture is worth 1,000 stack traces. We'll call it take_screenshot.
So what will this function receive? It will take in the page object. That's our browser page. And also a name for the screenshot, as a string. We can even default that to "screenshot" if nothing is passed. We'll wrap it all in a try block just to be safe, in case something goes wrong while taking the screenshot. Inside that, we call the screenshot method. Here's what's happening. page is the current browser page. screenshot is the method that captures an image of what the user would see. By default, it gives us a snapshot in a binary format held in memory. And to make sure we get a clean image with proper encoding, we specify the format, PNG. We store the result in screenshot_data. And just like that, we've got a screenshot ready to go. No file saved to disk. That's a win. If you want to save it locally, just pass a path as a parameter and handle it yourself. Just like with logs, we attach the screenshot using allure.attach. The first argument, screenshot_data, is the actual image in binary format: the screenshot we just captured. The name part gives the screenshot a title in the report. That could be something like the test name or the log level. Whatever helps, give it context. And finally, the attachment type tells Allure how to treat this data, so it knows to render it as an actual image in the report, not just raw bytes. We are explicitly saying, hey, this is a PNG image. And we return screenshot_data for good measure, in case you want to do something else with it. Now, if something goes wrong, let's say the page is already closed or inaccessible, we will attach the exception and return None. That way, we avoid crashing the whole suite just because the screenshot didn't load. Screenshots are also ready. So, what did we actually build here? We centralized common actions, logging, and screenshots, which means no more messy duplicates all over the code base. We gain full control over what goes into Allure and keep the report clean and readable. Soon, you will see how this all comes together, how log levels and screenshots appear in the final report to help you debug smarter, not harder. And when you combine structured logs with a screenshot, you get a unified, modular, and easily maintainable solution. Now that both logs and screenshots are ready to go, we are heading back to the base page class to see how this actually works in practice. In the next part, we will integrate everything into safe_execute so that every failure gets automatically logged and captured visually. This is where things start to feel really polished. 12. Integrating Utilities into the Framework: We are going to take the log_message function we implemented earlier inside utils, and this time we are going to use it inside our safe_execute function, which, as the name suggests, is all about wrapping actions with proper logging and error handling. So let's start by writing self.logger. Now, what is logger exactly? self.logger is our instance's dedicated logger. It's created using logging.getLogger(self.__class__.__name__). Let's break that down. self refers to the current object. self.__class__ gets the class that the current object is an instance of. __name__ gives us the name of that class as a string. So what we're doing here is saying, hey, give me a logger that's tied to the name of this class. That's why, when we look at the logs later, we know exactly which page object or component generated them. This is especially useful in large test suites where you've got dozens of page classes and want to keep logs organized.
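Sketching the two utils helpers described above: the allure.attach and allure.attachment_type calls below are the real Allure pytest API, while the function names and signatures simply follow the walkthrough (LogLevel is the enum sketched earlier, and routing by level.value is one possible way to dispatch to the matching logging call):

    import logging
    import allure
    from playwright.sync_api import Page


    def log_message(logger: logging.Logger, message: str, level, attach_to_allure: bool = True):
        # Route the message to the logging call that matches its severity
        getattr(logger, level.value, logger.info)(message)
        # Optionally attach the same message to the Allure report
        if attach_to_allure:
            allure.attach(
                message,
                name=f"LOG {level.value.upper()}",
                attachment_type=allure.attachment_type.TEXT,
            )


    def take_screenshot(page: Page, name: str = "screenshot"):
        # Capture the current page as PNG bytes in memory (nothing is written to disk)
        try:
            screenshot_data = page.screenshot(type="png")
            allure.attach(screenshot_data, name=name,
                          attachment_type=allure.attachment_type.PNG)
            return screenshot_data
        except Exception as error:
            # The page might already be closed; attach the error and keep the suite running
            allure.attach(str(error), name="screenshot failed",
                          attachment_type=allure.attachment_type.TEXT)
            return None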
All right. So now that we've got our logger set up, let's add the import, and then we can start using it. We will begin by logging the action that's about to run. Let's call it "executing action" and pass in the action name along with the arguments used to perform it. We format those using args. Log this message with a level of LogLevel.INFO. This tells us we are about to do something. It helps us track the high-level flow of the test in the logs, making the whole process easier to follow. Same idea on the flip side. If the action throws an exception, we will log "action failed" with the same context, action name and args, and this time the log level is LogLevel.ERROR. At this point, I think it's clear: adding logs like this makes it super easy to track what happened, where, and why. You don't need to dig through raw output anymore. But a word of advice: don't go overboard. Too many logs can actually make debugging harder by drowning out the important stuff. Now, let's bring screenshots into the picture. We call our take_screenshot function, which takes two parameters: the page object, so we know what browser context to capture, and the action name, so the screenshot is labeled correctly in the report. With that in place, we now have a really solid base page. It's structured, handles actions safely, and gives us visibility into everything that's going on. Every time we run an action, we log it, we take a screenshot if needed, and we capture exceptions gracefully. So if something fails, boom, we instantly get both a log and a screenshot. Wait, something's missing. All right. We also need to pass in the URL as an argument, so we can log which page we were on when this action failed. Another small detail that makes a big difference. This kind of setup isn't just convenience. It's a best practice. With a foundation like this, we are ready to start building actual page objects that inherit from BasePage: logs, screenshots, error handling, all working. From here on out, even someone not reading the code can spot what went wrong just by glancing at the report. Heck, you could walk into a job interview, open your Allure report, and say, let me show you what clean automation really looks like. 13. Introduction to the Page Object Model (POM): Imagine this. You're running your beautifully crafted automation test. Everything is green, everybody's happy. You're sipping your coffee like a boss. And then boom, you realize the login never actually worked. Yep, the test ran, the steps passed, but the user never logged in. And you sit there thinking, wait, what did we actually test? Now, that's the kind of silent failure that makes debuggers cry. So the question is, how do we catch this early? So what now? How do you avoid this kind of silent failure in the future? That's exactly where one of the best design patterns in test automation steps in. Ready? Say hello to the Page Object Model, your test code's new best friend. It's not magic, but it sure feels like it when you realize how clean and maintainable your tests become. It's not just a fancy term for interviews. It's a serious tool in your toolbox. Here is what it does. It helps you separate the what from the how. Instead of writing "click the login button, enter the username, enter the password, click submit," you just wrap it up in one clear call. Now your test reads like an actual story: log in, do step one, do step two, validate results. That's not just pretty. That's maintainable, because here's the kicker.
Let's say tomorrow the product team states that login now requires a retina scan. Without the Page Object Model, you're updating 100 tests one by one. With the Page Object Model, you update one method in one file and boom, all your tests still work. That's not just good design. That's career-saving design. It's like having an "update all" button for your framework. And here's why it matters. Automation isn't just about clicking buttons faster than a human. It's about designing something robust, scalable, and future-proof. And the Page Object Model is the foundation. It gives you clean test cases and encapsulated page logic. So if you like sleeping at night, you want page objects. All right, enough theory. Let's roll up our sleeves and build it, because this is where clean code meets automation magic. 14. Planning Effective Page Objects: I always recommend starting with a template. It may sound basic, but trust me, a solid starting point sets the tone for everything that follows. It gives us a bird's-eye view, helps us think ahead, and avoids those "wait, what was I even trying to build?" moments. So let's start there. We will define the method, something with a clear purpose, something like perform_login. Nice and clean. Now, take a second and think: what do we actually want this method to do? Here's how I like to approach it. First of all, and this might surprise some people, I want to log a message. Why? Because logging is our best friend when things go wrong. When you're staring at a wall of red errors, a single line like "starting login process" can be a lighthouse in the fog. It tells us, hey, we reached this point before something went sideways. So that's step one. Then we move into the action. We enter the username, we enter the password. Nothing fancy, just the essentials. Then we click the submit button. That's the basic login flow. But here's where we raise the bar. If everything goes smoothly, no exceptions, no errors, I want the method to return the main page object. Why? Because that's where we are now in the flow. Login succeeded, we are inside the app, let's move forward with the test. But if login fails, we don't just shrug and move on. No, we stop. We take action. We log the error with as much detail as we can. And yes, we grab a screenshot, because a picture is worth 1,000 stack traces, especially when someone else needs to debug this later. And finally, we fail the test, because a failed login is a showstopper. No point continuing with broken assumptions. So to recap: we start with a log, enter username and password, click submit. If it succeeds, return the main page. If it fails, log, screenshot, and fail. That's the plan. Simple, structured, and easy to follow. Now let's bring it to life. 15. Building a Login Page Object with POM: In this part, we are setting up a new page object. Let's name our class LoginPage. We are inheriting from BasePage. Think of the base page as our shared toolkit. It holds all the common functionality that every page in our test framework will need. And here is where the cool part starts, because all the logic from BasePage kicks in automatically. That means our LoginPage instantly gets all the goodies: logs, screenshots, safe execution, everything. We don't need to rewrite any of that, and that's a win, because it kills duplicate code. Less repetition, fewer bugs. So now LoginPage focuses only on login-related stuff, leaving the technical behind-the-scenes work to the base page. All right, time to create a constructor.
When the LoginPage is created, it receives a Playwright page object. That object represents the actual browser tab we are working with. We pass it up to the BasePage using super(), which means: hey, parent class, here is the browser instance you will need to do your magic. And just like that, the LoginPage is wired up and ready to use all the base functionality, but scoped to the login flow specifically. This sets the stage for adding actions like filling in the username, entering the password, and clicking the login button. Now we're moving on to the fun part: implementing perform_login. Let's walk through what this method will actually do. We will call it perform_login. What does it take in? Username, of course, as a string. And naturally the password, also a string. All right, first part done. Next step, logging. And we have already got the helper for that, the one and only log_message. Now, what do we pass into it? We start with self.logger, then the action we are about to perform, which is "performing login", and finally the log level, INFO. Nothing fancy here, just logging what we are doing. No errors, no surprises, and logging is done. We are in a good spot. Let's zoom in so you can see better. Before we move forward and actually type in the username and password and click that login button, we need one more thing first: the locators. Now, here's how I recommend setting those up. We will use something called the factory pattern. If you haven't heard of it, it's a smart pattern that helps us initialize all page elements in one go. The second the page is created, boom, everything is ready to use. It makes setup super clean. Inside the constructor, we define self.username_field. I'm leaving it empty for now. I will do the same for the other elements. Then self.password_field, same structure here. And finally, the login button. That's our submit element. We will use it when we trigger the login action. Now, for the demo, I will be using Facebook just because it's easy to access. To grab the locators, we will inspect each field we care about. To do that, just right-click the element in your browser and choose Inspect. The username field has name="email"; that's exactly what we will target in our locator. For the password field, the name is "pass". And then the login button. If you're not sure how to find locators, here's a quick tip: while you are looking at the element, just hit Inspect and look for something unique, like a name, an ID, or even a data-testid attribute. In this case, we have name="email", and that's exactly what we will use in our locator. Now that we have seen the locators, let's define them inside our constructor. We use the locator method to target elements using something unique. We'll start with self.page. There are different ways to locate elements; in our case, we are using a CSS selector. Then we use the locator method. It looks for an element by name, in this case name="email". Once we've got it, we can interact with it however we want. Same thing for the password field, and also for the login button. I like to call it login_button, just to keep things readable and clear. Now we are fully set. Time to type that username. We use the type_text method to start filling in the username field. Now let's break that down. We are passing in two things: first, the locator, which tells Playwright where on the page to type, and second, the actual value you want to enter, in this case the username. Pretty straightforward, right?
But here's what's nice about this. Under the hood, type_text isn't just typing raw text directly. It's calling another method from the base page called safe_execute. This is our wrapper around the actual action, in this case locator.fill. Instead of running the action directly, we use safe_execute to make sure things go smoothly. If the element isn't ready yet or something weird happens in the browser, it knows how to handle it, so you are not left wondering why the typing failed. Username done. Now onto the password, and here is the cool part: the flow is exactly the same. We just swap out the locator and the value. Instead of the username field, we use the password field. Instead of the username string, we pass in the actual password. Same method, same structure, same safety. That's consistency, and it makes your automation so much easier to maintain. Password done. All right, time to hit submit. To do that, we use another helper, click_element. Just like before, we give it a locator, in this case our login button. Playwright finds the button and clicks it. That's your submission right there. Personally, I like to call this field login_button, just to keep things crystal clear in the code. Now the login process is complete, but don't close your laptop just yet, because here is the thing: even if you think we clicked Login, did it really work? Once login succeeds, we return the main page object. Now comes the big question. How do we know if the login actually worked? We want to check if we landed on the next page. MainPage represents the screen the user lands on after a successful login. But we will implement that part a bit later. Another, much simpler approach is to check if the login button is still visible. If it's still there, something probably went wrong, maybe invalid credentials or a failed request. But if it disappears, that's a good sign we're in. Since we are not using real credentials here, we will stick with the simple visibility check. We check if the login button is still visible after the click. If it is, we assume the login failed. In that case, we log the message "login failed" using self.logger and mark it with LogLevel.ERROR. And of course, we capture a screenshot, just like we would in any proper failure scenario. We also tag the step with "login failed", so it's clear in the report. It's quick, effective, and works perfectly for our demo. So we check if the login button is still visible after the click. If it is, we assume the login failed. And here's why we explicitly log this. Even though nothing technically crashed, when it comes to login, we always want clear visibility. Login is a crucial step, so we want full visibility when it fails. We can't rely on safe_execute here, because technically everything worked: the click went through, the page responded. But the login didn't succeed, and that's exactly why we log it explicitly and mark it with LogLevel.ERROR. And of course, we take a screenshot. When something goes wrong, having a screenshot is like having a time machine. You get to see exactly what happened. And we give it a clear name: "login failed". After that, we return None. Why? Because we didn't reach the next page. The login didn't go through. Returning None signals that the flow failed and lets the test decide how to handle it. This also gives us flexibility. In positive tests, we expect a real page object, like MainPage. In negative ones, we expect None, and that's our confirmation that the system blocked the login as it should. That's it.
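Put together, a LoginPage along the lines of this walkthrough might look roughly like the sketch below. It is not the course's exact code: the module paths, the Facebook-style name="email" / name="pass" / name="login" selectors, and the MainPage import are illustrative, and self.logger is assumed to be created in BasePage via logging.getLogger as described in lesson 12.

    from playwright.sync_api import Page
    from page_objects.base_page import BasePage      # assumed paths from the project structure
    from page_objects.main_page import MainPage
    from helpers.utils import log_message, take_screenshot, LogLevel

    class LoginPage(BasePage):
        def __init__(self, page: Page):
            super().__init__(page)
            # Factory-style locator setup: everything is ready as soon as the page object is created
            self.username_field = page.locator('[name="email"]')
            self.password_field = page.locator('[name="pass"]')
            self.login_button = page.locator('[name="login"]')

        def perform_login(self, username: str, password: str):
            log_message(self.logger, "performing login", LogLevel.INFO)
            self.type_text(self.username_field, username)
            self.type_text(self.password_field, password)
            self.click_element(self.login_button)

            # Simple visibility check: if the login button is still there, the login failed
            if self.login_button.is_visible():
                log_message(self.logger, "login failed", LogLevel.ERROR)
                take_screenshot(self.page, "login failed")
                return None  # let the test decide how to treat the failure

            return MainPage(self.page)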
We have completed the part where we handle login failures. Now, here's something important to understand. Just like in math, where one problem can have multiple ways to solve it, the same applies here. There are two valid approaches when handling a failed login. First approach: the method itself fails the test. So if login doesn't succeed, perform_login throws an error or raises an exception, and the test stops right there. Second approach, which is what we are doing here, is where the method doesn't decide. It simply returns the result: either a MainPage if it worked, or None if it didn't. Then the test is the one responsible for deciding what to do next. This gives us maximum flexibility. Why? Spoiler alert: later in the course, we will start writing negative tests. And in those cases, we want the login to fail, and we don't want the method to throw an exception. Instead, we will validate that None was returned and treat that as a successful test outcome. So by keeping the method neutral, we support both positive and negative flows. And I'd love to hear what you think. Which style do you prefer? Let me know in the comments or bring it up in the course discussion group. Quick recap before we move on: we built the full login flow, username, password, and click. We added logs for both success and failure, plus a screenshot when login fails. And instead of failing the test inside the method, we chose to return None, giving our tests the flexibility to decide what to do next. That's it. Clean, reusable, and ready for the next step. 16. Test Design Foundations From TDD to Automation: Who here hasn't run into this before? You write a test, run it, and suddenly, wait, is this a real bug or is the test just trolling us? It's so frustrating. One minute the test passes. The next, boom, red, broken, no clue why. So what do we do? Out of pure desperation, we toss in a five-second sleep, praying to the automation gods it will magically fix everything. But right there, that's how bad tests become real problems. Over time, they get flaky, unreliable. Developers stop trusting them. They ignore them. And at that point, you might as well toss your whole automation suite into the trash. So how do we write tests that don't make us question our life choices and open a falafel stand instead? Glad you asked, because in the next lesson we are diving into how to write smart, stable, and maintainable tests that actually do what they're supposed to do. But before we touch a single line of code, let's look at the tools we have to level up our testing game. First up, data-driven tests. Why settle for one dataset when you can run the same test logic on multiple datasets with zero duplication? Let's say you're testing login processes. One test for an incorrect username, another for a wrong password. Same logic, different inputs. That's the power of data-driven tests. Next, fixtures. These are game-changers. Fixtures handle all the messy setup before each test runs, so you don't have to repeat the same boilerplate code over and over again. Think about it. You're testing order history. Obviously, you need an order first, right? But that doesn't mean every single test needs to re-implement login, cart, checkout. Fixtures let you skip the clutter and jump straight into what actually matters. They create the perfect environment for your tests, automatically. And last but not least, assertions. Because let's face it, if your test doesn't verify anything, it's not a test, it's just code that runs.
We'll talk about how to use assertions smartly, how to separate logic from verification, and how to keep your tests clean and to the point. So buckle up, because we are about to put all of this into action. 17. Designing Real-World Test Scenarios: A while back, I was working on a new project. Tight deadline, lots of pressure, typical stuff. We wrote our login test: username, password, click login, green light. All good, right? Fast forward to release day, and support calls started pouring in. Turns out login is failing, but just for one user. Why? His password had an exclamation mark. Yep, we never tested that. And trust me, explaining to management why your test missed an exclamation mark? Not fun. That's when it hit me. Testing isn't just about checking if things work. It's about checking if they still work when reality gets weird. So before we touch any code, let's get crystal clear. What exactly should a login test verify? If your first instinct is "enter a username and password and check that login succeeds," you're absolutely right. That's called the happy path, the core user experience. Let's quickly map out the test together. We launch a browser because we're simulating real user behavior. We navigate to the app's login page, we interact with the form, enter a username, enter a password, and click login. Now, the important part: verify login succeeded. Maybe we check if the dashboard loads, or if a welcome message appears, or even confirm that a logout button is visible. Bottom line, we need evidence the user is successfully logged in. But here's the thing. Testing isn't only about confirming what works. We also want to know what happens when something goes wrong. That brings us to negative testing. Imagine the user enters the wrong password. We absolutely must confirm the system handles this gracefully. But don't stop there. What if the username is invalid? What if the username is missing altogether, or the password is blank? These scenarios happen in real life, and our tests need to cover them. Here is the cool part. We don't rewrite tests from scratch each time. Instead, we use the same flow with different data inputs. This is known as data-driven testing, and it's how we create one clean, scalable test that covers multiple edge cases. Let's quickly break down a few classic negative scenarios. First scenario: an invalid username with a valid password. Why? Because we need to check if the system correctly identifies when a username simply doesn't exist and prevents login gracefully. Second scenario: a valid username with an incorrect password. Super important. We want assurance that users won't get in unless they provide the exact correct password. Third scenario: missing username. Think about it. A user forgets or skips filling in the username field. Our test ensures the system catches that and clearly prompts the user to enter it. Fourth scenario: missing password. Same idea here. The system should immediately notify the user that the password is required. We never want confusion or silent failures. Each of these scenarios runs through the exact same steps: fill the form accordingly, click login, and verify that the login attempt fails appropriately with clear, helpful error messages. With this approach, we have transformed a single test into a robust, scalable set of scenarios covering crucial edge cases. Clean, efficient, and easy to maintain. Exactly what great automation looks like. Quick recap.
Quick recap: we covered why testing both positive and negative login scenarios matters, and we built a simple, scalable, data-driven approach to handle them effectively. Now that we've nailed down what we're testing and why, let's jump right into implementation. 18. Using Fixtures for Smarter Test Setup: Now that we understand how our tests should behave, what potential issues can arise, and how to design them efficiently, it's finally time to translate all of this into actual code. First things first, let's give our test a clear name that describes its purpose, something simple like test_successful_login. According to our steps, we need to open a browser, and for that we'll use Playwright. We'll call playwright.chromium.launch. This does exactly what it sounds like: it launches a Chromium browser instance. Think of Chromium as Chrome's lightweight cousin without Google's added features. When this command runs, it returns a browser object, which we'll use to interact with web pages in our tests. Next, we call browser.new_page to open a fresh page, or tab, inside the browser we just launched. This returns the page object, which is essentially our canvas for performing further actions such as entering text, clicking buttons, and interacting with elements. At this stage, our browser and page are ready, but the page is still blank. We haven't actually loaded any website yet. Finally, with the navigate_to line, we're instructing Playwright to load a specific page. Of course, you'd replace the URL with the actual address of the application you're testing. But as you probably know, these setup steps will repeat in every single test we write. Repeating setup logic in every test? Well, it's messy, it's redundant, and it clutters the real purpose of the test. Which is exactly why we'll soon move this logic into reusable pytest fixtures. I want to start each test with everything already prepped and good to go, so I can focus on what actually matters: testing the login process. And we haven't even talked about how the test ends, because, yeah, we also need to close the page, clean up after ourselves, and manage resources. Trust me, I don't want that logic inside every test either. That's exactly where pytest comes in. It gives us a clean way to handle the stuff that happens before and after a test runs. We're talking setup and teardown. What we're going to do is define a setup: something that launches the browser, opens the page, navigates to the URL, and sets the stage. And when the test finishes, whether it passes, fails, or totally crashes, we need a teardown to always close that browser cleanly. How do we get that? With fixtures. And we're going to define those inside a file called conftest.py. This file is kind of special in pytest. It lets us define reusable logic, like fixtures, that's automatically available to all our tests. No need to import them manually every time. They're just there, like magic, global and ready to roll whenever you need them. So let's go ahead and define one. We'll decorate it with pytest.fixture, and let's call this fixture setup, or even better, setup_playwright. It's going to receive playwright as an argument, and the first thing we need to do is launch the browser. If you're asking yourself, okay, but how? Well, it's playwright.chromium.launch. Once the browser is up, we'll open a new page with browser.new_page. Now, here's the important part: we want to wrap all of this in a try/finally block, and instead of return, we'll use yield.
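Here is roughly what that fixture could look like in conftest.py. It assumes the pytest-playwright plugin is installed, which is what supplies the playwright fixture and the --headed command-line flag we'll use in a moment; the logger is a stand-in for the course's logging utility.

```python
# conftest.py (sketch)
import logging

import pytest

logger = logging.getLogger(__name__)  # stand-in for the framework's logger utility


@pytest.fixture
def setup_playwright(playwright, request):
    # --headed comes from the pytest-playwright plugin; the default is headless.
    headed = request.config.getoption("--headed")
    browser = playwright.chromium.launch(headless=not headed)
    page = browser.new_page()
    try:
        # yield hands the ready page to the test; everything after it is teardown.
        yield page
    finally:
        logger.info("Closing browser")
        browser.close()
```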
If you're wondering, wait, why not just use return? Because return will just return. It exits the function and never looks back. But we need cleanup. We need that teardown to run no matter what. If we just use return, we skip cleanup entirely, and that's a great way to end up with a dozen zombie browser windows. So using yield gives us real teardown logic. We'll also sprinkle in some logging, for example "closing browser" at the end, just so we have visibility into what's going on under the hood. Think about it like renting a car: you use it, you get to your destination, and when you're done, you don't just leave it in the middle of the highway with the engine running, right? Same with the browser. If we don't clean up properly, we end up with open sessions, memory leaks, flaky tests, and a bunch of invisible tabs haunting our machine like browser zombies. And trust me, when you're running hundreds of tests as part of a CI pipeline, those little browser ghosts can pile up real fast. So yeah, wrapping this in a finally isn't just good practice. It's your safety net. It ensures that no matter what happens in the test, you always leave things clean. You're not just writing a test, you're building a system that people can trust. And if you want to make things even more flexible, we can take in request as well. That gives us even more control: dynamically scoping the fixture, sharing state, or customizing behavior based on the test. Total game changer, right? So now we want to control whether the UI is displayed or whether the browser runs in headless mode, meaning no visible window. And if you're wondering, okay, but how do we actually pull that off? Here's the trick. We tap into the request object that pytest gives us. From there, we access .config and then .getoption. And what are we looking for? A flag, in our case, headed. If the flag isn't provided, we default to false. What does that mean in plain English? It means pytest checks its configuration to see whether headed has been set. If it finds it, cool, we'll assume the user wants to see the UI. If not, well, headed is false and the browser runs in headless mode, which basically means it's running without opening an actual window, like a ninja browser doing its job silently in the background. Headless equals not headed. See what we did there? If the user does pass the headed flag, we'll open a full browser window. If they don't, we run headless. Simple, elegant, effective. This is especially useful in CI/CD pipelines, because let's be honest, when your tests are running on a build server, nobody's sitting there watching the browser. We want things fast, quiet, and efficient. All right, so now we've handled launching the browser with or without a visible UI. But we're not done yet. We also want to handle page loading, because just like we don't want to launch a browser in every test, we don't want to reload the page every time either. We can take care of that with another fixture. I'd call it something like setup_load_page, because, well, that's exactly what it does: it loads the page for us. What does this fixture receive? It takes setup_playwright as a parameter, because we need access to the page that was created there. Now that we have the page, we can go ahead and create a login page object, something like login_page. Remember, setup_playwright is our fixture, and it returns a ready-to-use Playwright page object. It's already launched the browser, opened a new tab, and got everything set up for us.
Now, the login page class needs access to that page so it can interact with elements, like typing into the username field, clicking the login button, checking for errors. There's a small naming issue here, just a tiny typo. I forgot to add an underscore in the name. Happens to the best of us. Let's fix that real quick. Here's the typo here too. Perfect. Everything else looks great. Oh, and now we've got the import. Beautiful. So what are we going to do with this login page object? Simple. We'll call the navigate_to method and pass in the URL we want to load. And of course, we're not just going to run silent here. Let's also add a log message so it's clear in our logs what's happening and when. We'll pass in the logger and maybe something like "navigating to URL", just to keep things traceable once the page is fully loaded. We finish the fixture with yield login_page. That way, the test gets access to a ready-to-use login page object, already navigated, already logged, fully prepared. Now, obviously, we're not going to leave the URL hard-coded. No one wants to hunt through code files changing strings when the environment changes. So let's move that URL to a config file. We'll define a variable, say URL, and store our target address there. Drop it into a central config module and just import it wherever we need it, like here. At this point, both our setup and teardown are fully in place. We've got reusable fixtures, clean browser handling, and a page that's loaded and ready for action. Now we can jump into writing tests that just focus on the behavior, without worrying about setup noise or environment configuration. And to use all of this magic? Simple. Just inject the setup_load_page fixture into your test function and you're ready to rock.
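Putting that together, here is a hedged sketch of the page-loading fixture and the config module. LoginPage, its navigate_to method, the module path, and the URL value are placeholders for the versions built in the course.

```python
# config.py (sketch)
URL = "https://example.com/login"  # placeholder target address
```

```python
# conftest.py (sketch, continued)
import logging

import pytest

import config                            # the central config module above
from pages.login_page import LoginPage   # hypothetical module path

logger = logging.getLogger(__name__)


@pytest.fixture
def setup_load_page(setup_playwright):
    page = setup_playwright              # ready-to-use Playwright page from the previous fixture
    login_page = LoginPage(page)         # page object that knows the login form's locators
    logger.info("Navigating to %s", config.URL)
    login_page.navigate_to(config.URL)   # thin wrapper around page.goto(), as built earlier
    yield login_page                     # tests receive an already-navigated LoginPage
```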
19. Writing Your First Real Test Case: So anytime we want to use our fixture, all we need to do is request it. That's the beauty of fixtures: they're reusable, readable, and clean. One simple call and we're off to the races. What do we get back? An instance of our login page, fully initialized and ready to go. And now that we have this page object, we can say goodbye to some ugly repetition. Remember that chunk of setup code you kept copy-pasting into every test? Gone. That line? Gone. That other one? Gone too. The third one you forgot you even needed? Also gone. One simple call, and we're off to the races. So yeah, this part of the test setup is officially production ready. We've established a consistent starting point for every single test. No guesswork, no variation, just stability. That kind of consistency is what makes test suites robust and maintainable in the long run. All right, so what's next? Now we just call login_page.perform_login, and naturally we pass it two pieces of data: a username and a password. And here's where it gets fun. Let me ask you a quick question: where do you think login credentials should actually live? Think about it. Should they be inside the test, hard-coded into the script? Hopefully you're already screaming no. This topic honestly deserves its own full-length course, maybe even a whole chapter on secrets management. But in this course we're keeping it focused. So here's the one rule to remember: never store credentials directly inside your test code. Ever. Instead, always import them from an external source, something that can be managed and replaced without touching your tests. In our case, for the sake of simplicity, we're going with a config file. Quick, easy, and totally fine for demo purposes. So let's define a variable, valid_credentials. That variable will include both the email and the password. Think of it as your little credentials vault, locally scoped. For our credentials, I provided a fake email and password. Clearly, this won't let us log in, and that's kind of the point: we're just illustrating the structure here, nothing real is being exposed. For real logins, you'll want to replace it with the real deal when you go live. Just like that, our test is now using external data for login. We're not hard-coding anything inside the test body itself. Now let's hop back into our test case. Where we previously had hard-coded values, we'll now pull directly from valid_credentials. For example, instead of typing out a static username, we'll write valid_credentials.email. Same goes for the password: no quotes, no strings, just pull it straight from the object, valid_credentials.password. Cleaner, safer, more maintainable. At this point, we're fully implementing the login flow: username input, password input, and the actual login action itself. So now we can confidently delete those planning comments. They're no longer just ideas, they're code. Next step: call the login verification. Once that's in place, we're nearly done. Our test is almost fully operational. The only thing left is to handle the main page. Now, a quick heads-up: since we're using invalid credentials, the login will obviously fail, but that doesn't mean we can skip MainPage. We still need the class in place, even if we're not navigating to it yet. So what do we do? We define MainPage now, as if it's already part of our framework. In other words, we design for success, even if it's not happening just yet. MainPage should inherit from BasePage. This way it automatically gets all the base functionality: locators, common methods, et cetera. Now we'll define a constructor for MainPage. It takes in the page object, just like every other page class we've got. We also need to import the sync API so the constructor can recognize the page type and use it properly; without the import, it simply won't resolve. We're calling super here to initialize the parent class, BasePage, with the page instance. This gives MainPage access to all the core functionality we already defined. MainPage is ready to be used, even if we don't land on it just yet. Final touch: we import MainPage into our test file. All right, moment of truth, let's run the test. I'm excited. What do you think, will it pass on the first try? Let's find out. So far, looking good. That's what we like to see. Now, just one more thing left: handling validation. Even though our credentials are fake, we still want to test how the system responds.
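Here is a hedged sketch of how those pieces might look side by side: the placeholder MainPage, the externalized credentials, and the first test. The module paths, the SimpleNamespace shape, and the values are assumptions for illustration only.

```python
# pages/main_page.py (sketch)
from playwright.sync_api import Page

from pages.base_page import BasePage  # hypothetical module path for the course's BasePage


class MainPage(BasePage):
    """Placeholder for the page we expect to land on after a successful login."""

    def __init__(self, page: Page):
        super().__init__(page)
```

```python
# config.py (sketch, continued)
from types import SimpleNamespace

# Fake, illustrative values only; real credentials never belong in the repo.
valid_credentials = SimpleNamespace(
    email="demo.user@example.com",
    password="not-a-real-password",
)
```

```python
# tests/test_login.py (sketch)
from config import valid_credentials


def test_successful_login(setup_load_page):
    login_page = setup_load_page
    # perform_login returns a MainPage on success, or None on failure (see the earlier sketch).
    main_page = login_page.perform_login(valid_credentials.email, valid_credentials.password)
    # Validation of this result is wired in over the next two lessons.
```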
20. Test Validations That Matter – Part 1: Is a test really a test if it doesn't include validation? Of course not. I mean, if you're not checking your outcomes, you're basically just clicking buttons for fun. As we mentioned earlier in the course, we're going to validate that the login process actually worked, and we're going to do that in the simplest possible way: by checking whether the login button disappears. Yep, that's it. If the button is gone, we assume login was successful. If it's still there, it means something went wrong. So we have clarity on what we want to validate. Now comes the next big question: where should we place that validation? And believe me, the where is just as important as the what. There are a few different ways to approach this. Let's look at the first one: doing it right inside the test itself. To make that work, we need to take a few steps. Since the test doesn't inherently know anything about the login button, we'd manually navigate to the login page and locate that button. That will be our target element, the login button. And in the test code, we'd write something like expect login_button to_be_visible, with the message "failed to login". We're telling the test, hey, I expect this login button to still be visible on the page, because if the login button is still there, it means the login didn't work. If this expectation fails, trust me, you don't want your test to just throw a message like "AssertionError: expected element to be visible". That's not helpful to anyone. Give it some context. Be kind. Create a custom failure message, like "failed to login". It makes it immediately clear what went wrong. The more explicit, the better. At first glance, this feels perfect: clear, direct, readable. So why not just go with it? Well, because this approach skips over a bunch of awesome functionality we already baked into our framework. For starters, if the login fails, we're not capturing a screenshot. We're not attaching logs either. We're missing observability, one of the key pillars of great test automation. Now, sure, you could wrap this expectation in a try/except block and attach screenshots or logs manually. But then we're breaking a core principle: tests should be as simple as possible. No extra logic, just steps. Ideally, your test code should be like Lego blocks: small, purposeful, and easy to snap together, even by someone without deep technical knowledge. When you reach that point, you've built a truly scalable framework. But once you start introducing conditionals, try blocks, or logic branches, you're no longer writing a test, you're writing a script, and that's where things get messy. Let's not forget, if you're repeating this validation across multiple tests, you're also introducing duplication. Same line, same check, over and over. Now, if that check ever needs to change, guess what? You'll have to update it in every single test. That's just not sustainable. So if the test file itself isn't the best place, where do we put this logic? Well, since we're validating something on the login page, it makes sense to consider the login page itself. We could define a method called, maybe, verify_login, and that method would handle the entire validation logic internally. We'd import whatever components we need, and since the login button is already defined as part of the page object, we just reference it using self. This looks pretty solid. Everything related to login validation stays inside the login page. It's centralized, clean, and logically grouped. Inside that method, we could even add a try/except block, include logging, and maybe capture a screenshot if something goes wrong. Sounds like a win, right? Well, sort of. The problem is that over time that method will start growing. You're going to add more conditions, more edge cases, more checks, and before you know it, your nice little login page class has turned into a monster. Hard to read, harder to maintain. Even though it technically makes sense to put validation here because the locators are already in place, you end up with business logic and UI validation all mixed into one place. And once that boundary blurs, it's hard to tell where the logic ends and the test begins. You lose clarity, you lose structure. Some developers might say, well, just keep your logic and your validation in different sections.
Sure, in theory that sounds nice. But in practice, unless you're super disciplined, that separation tends to erode fast. Suddenly, your page object becomes a catch-all class for everything, which is exactly what we're trying to avoid. Defining separate areas, one for logic and another for validation, is totally doable. But it assumes something big. It assumes that your developers are disciplined, like really disciplined. It requires that they stick to a structure, that they follow conventions, and let's be honest, in the real world that rarely lasts. The truth is, once your project starts growing and deadlines get tighter, structure tends to slip and things get messy fast. So what's the better approach? My recommendation: place your validations inside a helper class. When you do that, you're creating a clear separation of concerns. Your logic lives in one place, your validations in another. Clean boundaries. And this separation becomes even more valuable as your project scales. Picture this: you've got dozens of pages, and suddenly you realize that some validations are identical across multiple pages. Why repeat the same check over and over again? With a centralized helper, you can reuse those validations with a single method call. No duplication, no headaches. You can even build general-purpose validations that apply across your entire test suite. That's powerful. But hey, every strength comes with a weakness. If you dump all validations into a single validation class, guess what happens? It will bloat. Give it time, and that neat little class turns into a beast. Navigating it becomes a full-time job, and it gets worse. To validate something, you'll always need access to the relevant locators, which means you'll end up duplicating locators inside your validation class. That's a huge code smell. Not only is it messy, it directly violates the DRY principle. So how do we fix this? Because, yeah, it's a real architectural challenge. One idea might be: let's just import all our page objects into the validation class. Some folks will look at that and go, whoa, hold up, are you abusing the page object model here? And to be fair, they have a point. That's why there's another option floating around: putting validations inside conftest. But honestly, I'm not a big fan. I like to keep conftest as clean and minimal as possible. It should be focused on configuration and fixtures, not business logic. So yeah, for me, stuffing validations into conftest is the least recommended option, so much so that I won't even waste your time breaking it down. Let's talk practical structure. If you're working on a small project, keeping validations inside the relevant page object is totally fine. It's quick, convenient, and keeps things close to the elements they're tied to. But if you're building something that's going to grow, or you're already working on a large-scale test suite, you should absolutely consider creating a dedicated validations module from the beginning. Trust me, you'll thank yourself later. All right, so now we've mapped out the what, the where, and the trade-offs. In the next lesson, we'll actually build out the validation method and see how it comes together in code. 21. Test Validations That Matter – Part 2: All right, so let's start by defining a new class called AppValidation. We want this class to inherit from BasePage, just like our other page objects, because we want to reuse all that foundational functionality. Every class in Python has an optional init method, also known as the constructor.
Next up, we need access to all the page objects we've created. To streamline that, we'll use conftest to centralize the setup. So in conftest, we define a fixture; let's call the function setup_all_pages. This function will receive setup_playwright so it can grab the page instance we need. That's our gateway to the browser. Once we've got the page, we can instantiate all our page objects from it. For the login page, we instantiate LoginPage by passing in the page object. This gives us a fully initialized page object that's ready to interact with the browser. Same goes for the main page: we instantiate it by passing in the page object, just like we did for LoginPage. Each one receives the page object as its driver. Now, here's the key part: this setup pattern scales. Anytime you create a new page class, you simply add it here; otherwise it won't be available to your validations. In the yield statement, we hand back all the page instances we just created. That way our validation layer will have access to everything it needs, clean and centralized. Of course, this approach comes with a trade-off. Yeah, it's a little more overhead. But the benefit? You get true separation of concerns, and that pays off big time in large projects. Sure, we could just bake the validation inside the page object itself; that would be faster at first, but this is a long-term investment in clarity, modularity, and reusability. And honestly, it's a price I'm willing to pay for that kind of structure. Let's take it one step further. We're going to define another fixture, this time specifically for the validation logic. We'll call it validation. So what does this fixture actually do? It calls setup_all_pages, gets back all our page instances, and passes them into our AppValidation class. The return statement looks like this. Right now, this fixture is still under construction, totally fine, we'll flesh it out step by step. Now let's take a closer look at the constructor. So what exactly are we getting here? We're calling the fixture we just created, setup_all_pages. Nice. I like this. You know why? Because with a single call we gain access to all our core page objects: login page, main page, and any others we've registered inside the fixture. Sure, this takes a bit of upfront work, but the effort we're putting in now is an investment. It pays off when we scale to a bigger project with more moving parts. Yeah, it can be a little frustrating to wire this all up, but trust me, you'll be grateful later. Note the call to super. To initialize the base page with a browser context, we're using the login page here, because it's always part of the flow and provides everything BasePage needs to operate. With the constructor fully defined, we're now ready to move on to our first validation method.
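Before we write that method, here is how the two conftest fixtures could look in code. The module paths and the tuple shape are assumptions, and AppValidation is the class we are in the middle of building.

```python
# conftest.py (sketch, continued)
import pytest

from pages.login_page import LoginPage               # hypothetical module paths
from pages.main_page import MainPage
from validations.app_validation import AppValidation


@pytest.fixture
def setup_all_pages(setup_playwright):
    page = setup_playwright
    login_page = LoginPage(page)   # every page object is driven by the same live page
    main_page = MainPage(page)
    # Any new page class gets instantiated here, or the validation layer won't see it.
    yield login_page, main_page


@pytest.fixture
def validation(setup_all_pages):
    # Hand all the page objects to the validation layer in one go.
    return AppValidation(setup_all_pages)
```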
Let's call it validate_user_logged_in. Inside validate_user_logged_in, the first thing we need is the element we're about to validate, and that's the login button. Now, here's the beauty of using page objects: we don't need to redefine the locator or go hunting for selectors again. We already defined login_button inside the login page, so all we have to do is reach in and grab it with self. This gives us full access to that element. It may look like a small line, but it's doing a lot behind the scenes. It helps us avoid duplication, keeps the code maintainable, and follows one of the core principles of good software design, the DRY principle, which stands for don't repeat yourself. Let's wrap this validation in a try/except block. Because when a test fails, especially something like login, you don't want it to just fail quietly and ghost you. You want it to scream, hey, something's wrong here. Inside the try, we assert. So here I'm saying, I expect the login button to still be visible, and if it's not, throw an error with the message "login failed". Simple as that, we're using the presence of the button to tell us whether the login actually failed, just like we planned. But here's the thing: this time we're actually expecting the opposite. We expect the login to succeed, which means the login button should disappear. So inside the try, we flip the assertion: I expect the login button not to be visible. And if it's still there, something broke, and yeah, that's a strong signal the login didn't go through. If that happens, we drop straight into the except block, where we don't just raise an error and move on. We document it properly. We write an error log, "login failed", tagged with the error level. We snap a screenshot of the current page, naming it something clear like failed_login. A good screenshot can save you minutes or even hours of debugging. It's like catching a bug in the act. Here we're saying: if something blows up, catch it. We catch the exception as e, give it a name, and handle it like adults. We're not just raising a new exception, we're attaching it to the original one using "from e". That way, we don't lose any of the original error context. And then we raise a custom exception with context: "Login failed. The login button still appears." But we're not stopping there, we're chaining it to the original error using "from e". That way, we're not left guessing. We get the what (the message), the where (from the stack trace), and even the why (the original exception and the screenshot). All right, now let's see how it all comes together inside the actual test. We inject two fixtures here: setup_load_page, which gives us the browser and the initial page object, and validation, which gives us access to all our validation logic. Then we call validate_user_logged_in, which runs the full validation we just built: it checks that the login button is gone, logs any failures, and grabs a screenshot if needed. At this point, the test is clean and focused, and all the setup and logic is handled elsewhere. That's the beauty of a well-structured test.
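Here is a sketch of how the class and that first method could look in code. The log_message and capture_screenshot helpers are stand-ins for the BasePage utilities built earlier in the course, so treat the details as illustrative rather than the definitive implementation.

```python
# validations/app_validation.py (sketch)
from playwright.sync_api import expect

from pages.base_page import BasePage  # hypothetical module path


class AppValidation(BasePage):
    def __init__(self, pages):
        login_page, main_page = pages          # unpacked from the setup_all_pages fixture
        self.login_page = login_page
        self.main_page = main_page
        super().__init__(login_page.page)      # BasePage needs the live browser page

    def validate_user_logged_in(self):
        # Reuse the locator already defined on the login page object.
        login_button = self.login_page.login_button
        try:
            # After a successful login, the login button should be gone.
            expect(login_button).not_to_be_visible()
        except AssertionError as e:
            # Stand-ins for the framework's logging and screenshot helpers.
            self.log_message("Login failed", level="error")
            self.capture_screenshot("failed_login")
            raise AssertionError("Login failed. The login button still appears.") from e
```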
22. Data-Driven Testing with Parameters – Part 1: Previously, we looked at how to build validations around our flows. That gave us a clean testing setup, neatly separating test logic from the actual result validation. Now, let's dive into negative tests, the tests that help us see how our system behaves when things go wrong. Specifically, we want to check what happens when login fails due to incorrect credentials, missing fields, or invalid formats. There are many ways to approach negative testing, but first, let's discuss a common approach that might seem logical at first but quickly becomes problematic. Imagine writing a separate, distinct test for every possible scenario. For example, you'd start by writing a test case for a wrong password, typing out the full scenario, carefully inputting the credentials and setting up validations. Then you would do the exact same thing again, this time testing a wrong username. After that, you would write yet another separate test for a missing username, and yet another for an empty password. Each scenario involves manually typing out nearly identical test logic and validation steps. As you can guess, this approach rapidly becomes repetitive. It becomes a nightmare to manage as scenarios accumulate. Let's break it down and walk through how this usually plays out in a typical project. You start by calling perform_login, carefully passing in a valid username and pairing it with an intentionally incorrect password. The goal here is to simulate a real-world failed login and see if the system correctly handles it. Once the login attempt is initiated, you proceed to observe and assess whether the system behaves the way we expect it to behave. Maybe we're expecting an error message to show up, maybe we're checking that the user isn't redirected, or even that a specific field is highlighted to indicate a problem. This is where we add a validation step, something like validate_user_failed_login. Now, let's say you want to test a different scenario, maybe the user enters a wrong username this time. You repeat the same process: call perform_login, enter a wrong username with a valid password, and add a similar validation. Then comes a case where the username is missing altogether, and then another where the password field is empty. Each new test follows the exact same structure, with only minor differences in the inputs. The problem? Every one of these scenarios requires you to retype the same test scaffolding, same login logic, same validation calls, over and over again. It may not seem like a big deal at first, but these things pile up. The test file gets longer, harder to read, and a pain to maintain. Before you know it, a simple change in the login flow can break ten different places in your test suite. This is the pattern we're trying to avoid. You can already guess the issue here: excessive repetition. Imagine adding even more scenarios, like checking email formats or special characters. Soon enough, these tests become overwhelming, growing exponentially. Trust me, one day the login logic will change and you'll have to revisit each of these duplicated tests. So what's the better solution? The smart way to handle this is using pytest parametrization. Think of it this way: writing tests is like building a secure fortress. Validation is the solid wall protecting it, and data-driven tests act as the gates, controlling precisely who gets in and who's locked out. So how do we implement it? We use pytest.mark.parametrize. We'll pass parameters such as username and password directly into our test. Now, what scenarios might we test? Here are a few examples. Let's start with a wrong username and a valid password. This helps us verify that the system doesn't authenticate users just because the password is correct. We expect to see a "user not found" or similar error message. It's a common scenario where someone accidentally mistypes their email or username, and our system should clearly reject the attempt without exposing any sensitive info. Second, a valid username with a wrong password. This checks that even if the username exists, the system still blocks access if the password is incorrect. We want to make sure the authentication logic only passes when both fields are valid. Third, an empty username with a valid password. This checks how the app reacts when a required field is left blank. Fourth, a valid username with an empty password. Same principle, testing input validation. This simulates a user entering their username but forgetting to type a password. The system should stop the submission early and guide the user to correct the mistake. It's time to put it all together in a real test function.
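Here is a sketch of that parametrized test. The usernames, passwords, and expected error snippets are placeholders; in the course they come from what the real application actually returns.

```python
import pytest


@pytest.mark.parametrize(
    "username, password, expected_error",
    [
        ("unknown.user@example.com", "ValidPass123!", "isn't connected to an account"),  # wrong username
        ("real.user@example.com", "wrong-password", "password was incorrect"),           # wrong password
        ("", "ValidPass123!", "enter your email"),                                       # missing username
        ("real.user@example.com", "", "enter your password"),                            # missing password
    ],
)
def test_failed_login(setup_load_page, validation, username, password, expected_error):
    login_page = setup_load_page
    login_page.perform_login(username, password)            # same flow, different data
    validation.validate_user_failed_login(expected_error)   # assert the login was rejected
```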
Next, we'll define the test_failed_login function, and the structure should look very familiar by now. We start with our trusted fixtures. Then the test receives two inputs, the username and the password. These will come from our parametrized values. Inside the test, we grab the login page object. This is our entry point for interacting with the login form. From there, we call perform_login, passing in the username and password that were provided for this specific scenario. This could be a wrong username, an empty field, or anything else we want to test. Finally, we assert the result using our validation method. That's the step that confirms whether the login was correctly rejected and that the system responded as expected, with a proper error message and clear feedback to the user. This structure gives us a reusable, scalable way to test many edge cases with minimal code repetition. It ensures our tests remain clean, maintainable, and scalable, exactly what we want. Before we move on, let's do a quick sanity check, just to make sure we didn't miss anything. Yeah, already spotting a couple of small issues we can clean up. Oh, and look at that, one more tiny thing hiding in plain sight. And from here, it's just about wiring it together in our test: we'll pass the expected error message as a parameter to validate_user_failed_login. This connects the actual test logic with our assertion layer. Inside that validation method, we'll reference the locator we built pointing to the UI element that holds the error message. We'll fetch the text and compare it to what we expect. If the match fails, we'll capture a screenshot and fail the test with a helpful message. One more thing worth highlighting here: we're not just testing functionality, we're also locking in the experience. When someone enters invalid credentials, we're ensuring that they see the right message, in the right place, every time. I've checked, and in all the scenarios we test, the UI consistently returns the same error message. That makes validation simpler and more predictable. We don't even need the full text; a partial match is enough to verify the failure. Now, that's not always the case. On some websites, each type of failure might return a different message. Just to be clear, this isn't a message we generate in our code. It's a real error coming from the application's front end, and we're simply verifying that it appears correctly when login fails. Time to implement validate_user_failed_login. So what is this method actually going to receive? Well, it gets the expected error message as input. Inside, we'll start by accessing the login button element. At this point you probably know why, but let's make it explicit: if login fails, that button should still be visible. That's our signal that the login attempt didn't go through. So we'll assert expect login_button to_be_visible. This is the heart of the validation, because if the login fails, the button should still be there, giving the user another shot. And if that expectation fails, we'll capture the issue with a clear message. Now, if the button is missing, that usually means the login succeeded, which in this case is a failure. So that's a red flag we want to catch. We'll wrap this in a try/except block. In the except block, we log the failure, take a screenshot, and fail the test with a descriptive message, so the format and severity levels stay consistent across the framework.
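A sketch of that method, added to the AppValidation class from the earlier sketch. The error-message check itself gets wired in during the next lesson, and the helper names remain assumptions.

```python
# validations/app_validation.py (sketch, continued): a method added to AppValidation
import pytest
from playwright.sync_api import expect


def validate_user_failed_login(self, expected_error_message: str):
    login_button = self.login_page.login_button
    try:
        # A failed login should leave the login button on screen for another attempt.
        expect(login_button).to_be_visible()
        # Checking the expected_error_message text is added in the next lesson.
    except AssertionError:
        self.log_message("Login unexpectedly succeeded", level="error")
        self.capture_screenshot("unexpected_login_success")
        pytest.fail("Expected the login to fail, but the login button is gone")
```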
And if we reach that except block, it means something went wrong. That's exactly why we're using a log level of error, to make sure this failure is clearly highlighted. So we'll take a screenshot. This gives us a visual snapshot of the failure state, which is super helpful when debugging. And finally, we'll fail the test explicitly with a descriptive message. That way, the original error context is preserved, making the traceback much more informative. At this point, it's a good idea to rename that variable to something more descriptive, like expected_error_message. Now let's move on to the part that actually connects us to the UI: the locator that finds the error message on the page. Inspect the page and you'll see that the error sits inside a container with the ID email_container. That's the element that visually holds the error message shown to the user after a failed login attempt. It usually wraps the email input and any related validation message, and it's part of the login form structure itself. So we'll create a locator for it right inside the login page object, and we'll call it error_message. That way we keep things organized and easy to maintain. In the next lesson, we'll take this a step further, implement this validation in action, run the test, and walk through the actual results together. See you in the next lesson. 23. Data-Driven Testing – Advanced Techniques – Part 2: All right, continuing from where we left off, let's wire things together clearly in our login page class. First, we define a new locator called error_message. As we saw when inspecting the DOM, all of the error messages related to login attempts are consistently wrapped inside the parent container called email_container. Think of this container as our error inbox: whenever something goes wrong during login, that's where we'll find the details. Now, to make our tests smart and dynamic, we'll create a helper method called get_error_message. This method has one simple job: it retrieves the exact error message we expect to see on the UI when the login fails. We pass it one argument, expected_error_message. This argument represents the precise text we expect the system to show, like the message we saw earlier, "the email you entered isn't connected to an account." Now, here's the trick that makes this approach powerful. Instead of returning a static locator, we build one dynamically. In other words, we create a dynamic selector on the fly, which directly searches for the error message text passed into our method. This is important because now our locator knows exactly what to find and where. If the text changes tomorrow, as often happens, we just update the input text. No complicated refactoring, no headaches, and no late-night debugging. Each time we call get_error_message, the method dynamically adapts to the element we need.
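In code, the locator and the dynamic helper inside the LoginPage page object could look something like this. The #email_container selector spelling and the filter-by-text approach are assumptions based on the narration.

```python
# pages/login_page.py (sketch): additions to the LoginPage class from earlier
from playwright.sync_api import Locator


# In __init__, alongside the other locators:
#     self.error_message = page.locator("#email_container")   # wrapper around the login error text

def get_error_message(self, expected_error_message: str) -> Locator:
    # Build the locator on the fly: scope to the error container, then match the expected text.
    return self.error_message.filter(has_text=expected_error_message)
```

The validation layer can then assert that this locator is visible as part of the failed-login check, which is exactly what we wire up next.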
Now let's hop into the validation step and quickly finalize our validation logic by adding the last missing piece. We already have the login button check from earlier, and now we're simply adding one line: we call our dynamic helper method, get_error_message, passing it the expected error message we're looking to verify. That's it. With just this addition, our validation method now clearly and dynamically checks both conditions: the login button remains visible, and our specific expected error message is correctly displayed to the user. With this, our validation logic is complete, clean, and ready for action. And we don't just leave this validation check hanging on its own. Instead, we move it directly into the existing try block. Why? Because wrapping this validation within the try block ensures that if something unexpected happens, if the error message doesn't show up, we can gracefully handle the issue. We'll log precisely what went wrong and capture a screenshot of the UI at that exact moment. Here's a quick recap of what we've done. We started by clearly pointing out what not to do with negative tests: writing separate tests for every scenario. That approach quickly becomes repetitive, bloated, and hard to manage. Instead, we chose a smarter, cleaner solution using pytest parametrization. We defined a single, flexible test function, passing in parameters like username, password, and the expected error message. We implemented our validation methods directly in code: first we retrieved and checked the error message displayed on failed logins, then we verified that the login button remains visible, confirming the login attempt didn't succeed. Automatic screenshots and meaningful exceptions simplify any debugging effort. We also built a dynamic locator by defining a method called get_error_message, which locates error messages on the page based on the specific message text provided. This allows us to support multiple error scenarios without hard-coding a separate locator for each one. And if something doesn't go as expected, we've got our logger tracking everything behind the scenes, so we'll know exactly what went wrong and where. On the screen are our data-driven tests, where we define all the login scenarios and the expected messages. And by the way, those messages come straight from what the real app returns, so we're not guessing, we're validating exactly what users would see. The last checkbox in our plan? We turned it into real, working code. Mission accomplished. 24. Running Tests From IDE to Full Execution: So far, we've laid the foundation, built a solid test architecture, and connected all the moving pieces. Now comes the most satisfying part of the journey: watching it all come to life. And here's a question for you. Do you think the whole thing will run like a dream, smooth, clean, beautiful, or are we about to trigger a glorious explosion of stack traces and failed assertions? Here's the deal: writing tests feels great. It gives you that "I'm getting things done" buzz. But the real test happens during execution, because a test that doesn't run is like a compass with no needle. It looks polished, even impressive, but when it's time to navigate, it's absolutely useless. So what's next? We're about to launch our tests and watch them move through the system layer by layer, starting with fixtures, the unsung heroes of setup, then through the page object model, where structure meets strategy.
And finally, we hit the validation layer, where theory meets reality. But we're not just going to run tests, we're going to watch them, trace them, understand them. We'll crack open the logs and read them like a detective at the scene of a bug. Not just what failed, but why it failed, and more importantly, how to fix it fast and never let it happen again. After that, it's time for the cherry on top: test reports. This is where invisible effort becomes visible impact. Because, yeah, a green checkmark is nice, but if you want to build confidence, trust, and real quality, you need insights, the kind that turn raw test output into something your whole team can act on. And our reports will give you everything: execution flow, step-by-step breakdowns, logs, screenshots. It's like turning your test suite into mission control. And the best part? You get all that visibility with just one click. That moment when everything runs like clockwork and your dashboard lights up green, that's when you lean back, smile, and think, yep, this is why I love automation. Because without it, we'd still be stuck running tests manually, clicking through flows, repeating ourselves, losing time, losing sanity. So if automated testing ever felt like a black box, by the end of this section you'll crack it wide open, see exactly what's hiding inside, and become a true automation master. 25. Debugging with Breakpoints and Analysis Tools: All right, here comes the moment of truth. Let's run this test and see if all the stars align, or if we're about to meet an angry stack trace. Okay, the runner's spinning. What do you think, is it going to pass? Place your bets. Honestly, just seeing it execute is already a good sign. No exceptions on launch, no config errors. That's half the battle. Nice. The page is loading, and we can see the address being tracked in the logs. Looks smooth so far. You know that feeling when everything feels like it's about to work? Yeah, hold that thought. Uh oh. Test failed. But here's the thing: this isn't bad. This is where the fun begins. Because when a test fails, that's not failure, it's feedback. Now we get to put on our detective hats and figure out what's broken. Let's start by opening the logs and scanning for clues. Here we go. Looks like the failure happened right here, but the reason is still a bit murky. Let's rewind a bit through the log trail. Just before the crash, it clicked on an element, and a few steps earlier it entered the password and the username. So on the surface, it did go through the login flow, but then, boom, crash. Let's take a look at the error message: "Login failed." That's like your GPS saying "route failed." What now? Here's a quick best practice: always write error messages that help you, your teammates, or whoever's going to be stuck debugging this at 2:00 A.M. "Login failed" is basically telling us something, somewhere, didn't work. Not helpful. So let's continue investigating and see exactly where we tripped. We can click right here and jump to the failure point in the code. Looks like the failure happened during the validation process, the check for whether the user is logged in. Let's take a look at the error message it threw: "Login failed. The login button still appears." That's it. That's gold. Now we're not just guessing, it's pointing us directly to the issue: something failed in the login workflow. The presence of that button confirms it never moved forward. Now that we've got a working theory, it's time to validate it. Let's drop a breakpoint just before this part runs.
So we can step through and see what the app is doing, moment by moment. Actually, let's go one level up. Since we're already debugging, let's go full Sherlock. We'll scatter breakpoints across the entire process, from the test layer, through the fixtures, to the actual implementation. That way, we can trace exactly how the test flows, how data moves between layers, and where things start to drift. Because sometimes it's not about what broke, it's about where and how that broken piece got passed along. So where do we want those breakpoints? Well, first and foremost, right inside the test itself. That's our entry point. Then I always like to set one at the start of every fixture, and I also like one right after the yield. Fixtures can hide some sneaky state-related bugs, and that's the perfect spot to check initial conditions. Once we've set up the fixtures, there's one place we absolutely can't afford to ignore: inside the validations. That's where business logic meets test logic. We can also trace how the test flows through the constructor and even deeper into the base page. Watching that movement between layers gives you a real sense of how everything is wired together. It's like watching the nervous system of your automation. Maybe we also want to trace what happens inside config. Probably not too exciting. But in the utils? Oh yeah, utils are sneaky. That's where logs are processed. It's also nice to see how the test flows through the page object structure. Nothing particularly interesting here, we've already covered it. We can't really add breakpoints in pytest.ini, or in the README, or in requirements.txt; we already walked through that theory. Now it's time to see things in motion. But here's the twist: this time we're going to run it in debug mode instead of the usual run execution. Let's see what that gives us. So during execution, the first step it reaches is setup_playwright. That's where it checks which mode to run the browser in: headless, or headed and visible for demos, you name it. In our case, visible mode, because let's be honest, sometimes you just want to see what's going on. The browser opens the page, and once it's done, it returns the page object, fully initialized and ready for action. Next step, setup_load_page. Inside the load page function we see it navigating to the desired URL. A classic "let's get this show started" moment. It flows through safe_execute and then enters the login flow, grabbing whatever context it needs for the logs and attaching them. The log type is info this time, which means it's just letting us know things are chill. Then we attach it to Allure, so we'll be able to read it later in the report. At this point, we've reached the stage where the actual action takes place. That means the navigation step has completed successfully. And right here we added a log message, mainly just to keep track and make sure everything's flowing smoothly. We're basically verifying that the navigation to the URL actually happened. Lines like that help you understand where you are in the flow. All right, moving on with the debug. Here we're setting up all the pages we'll need for the test in one shot. So instead of initializing each page separately in every test, we bundle it all into a single fixture called setup_all_pages. Then we yield both of them so they're available in our test functions. It's kind of like packing your bag once before a hike, instead of going back to your car every time you need a snack. AppValidation is just a wrapper around all the pages. It exposes test-friendly assertions.
Here I'm unpacking setup_all_pages into login_page and main_page. We've got setup_all_pages returning the page objects. Let's see how this plays out in the actual test code. All right, now that our setup is fully loaded and validation is injected, setup_load_page gives us the login page, and validation gives us the AppValidation object, fully prepared in advance by pytest. We call perform_login with valid credentials. Inside perform_login we simulate the full login flow. First, an info-level log, "performing login", helpful for tracking what the test is doing. That log is attached to Allure. Then we type in the username and password. When we do that, each input goes through a helper method called type_text, which abstracts the actual typing logic. It doesn't just send keys to a field; it wraps the locator and the text value, and behind the scenes it runs through safe_execute instead of typing directly. Like always, we log where we are in the flow. Exact same flow for the password field: it goes through type_text, gets wrapped in safe_execute, and gets logged just like the username input. Same goes for clicking the login button, the actual button that triggers the login action. The same control flow using safe_execute, which means the click is logged, wrapped with error handling, fully traceable, and, like the typing steps, included in our final report. That way, when something breaks, we don't just know that it failed, we know exactly where. What you're seeing now is the validation logic itself. This lives inside the AppValidation class, and it's the same method we called earlier in the test, validate_user_logged_in. You see this? We grab the login button from the page, and we expect it to no longer be visible, because if the user was logged in successfully, that button should be gone. And if it's still there, we read that as a failure: log the error, take a screenshot, and bail out. So this method is doing exactly what the name says: validating that the user is no longer on the login page, and giving us clean reporting when something goes wrong. So once we hit the except block, that means the login failed. At that point, we log the error using log_message, with a clear "login failed" message and log level error. And of course, we're attaching that log, with its level, to Allure, and we immediately take a screenshot labeled "failed login". That way it's fully documented in the log, and we've got visual proof to go back to and investigate. Next, we explicitly raise an exception, "Login failed. The login button still appears." This line ensures the test fails immediately. We don't want it to keep going after a failed login; that will just lead to more noise and confusing results. By raising this custom exception, we make it crystal clear: something broke, and we're stopping right here. That's how we keep our tests strict, readable, and easy to debug. At the bottom, in the console, the test failed, and the reason is crystal clear. The error message we raised is shown right there, and that's intentional. Now, let me ask you this: does this look like a messy, hard-to-debug test, or is it clean, traceable, impossible to ignore? If it's the latter, we've reached our goal. Now, here's something I totally forgot. I should have dropped a breakpoint right here, inside the finally block of the setup_playwright fixture. This is where the browser gets closed, no matter whether the test passed or failed, and that's exactly why this spot matters. If the test crashes early, the browser might close before you get a chance to see what went wrong.
Adding a breakpoint here gives you that last window to inspect the page and catch the issue before it disappears. A classic debugging trap, but yeah, I caught it in time. Here's a quick look at the breakpoints I've set across the project. Let's remove them all. Now, let's talk about a super useful feature when debugging, called evaluate. The evaluate window lets you run any code snippet on the fly while the test is paused. You can inspect values, call functions, and even trigger interactions, all without changing your code or restarting the test. This is a game changer when you're trying to understand the current state of the browser, or when you want to test a single line before committing it to the actual script. Let me show you an example. Right now we're paused inside the perform_login method, and we've got access to self, which means we can directly interact with the page. So I'll use evaluate to manually run type_text, just to see how it behaves, or maybe to debug an input that's not going through. This is perfect when you want to isolate a specific command and test it in real time, getting instant feedback without rerunning the whole flow. And here's the cool part: you can use evaluate not just for one-liners like type_text, but to actually walk through the test manually, line by line, while the test is still paused. And that's exactly what makes this so powerful. Honestly, once you get used to working like this, it's hard to go back. Note that even when the test passes, you can still use this view to see the entire flow step by step. Every action, every method call, all right there, and that's thanks to the -s flag we set in pytest.ini. It makes sure anything printed reaches the console, including logs. Plus, we've enabled log_cli = true, which means all our log messages get displayed directly in the console as the test runs. No need to dig through files or wait for the report; you get instant feedback right where you're working. Let me wrap this up with one last pro tip, and trust me, this one separates the pros from the amateurs. You ever had that one test that always passes? No matter what you do, it's always green, always clean. Sounds great, right? But here's the catch: sometimes that's a false green, a magic test that looks solid but in reality isn't testing anything meaningful. So here's the tip: make sure your test can actually fail, at least once, on purpose. Break the flow, even with something silly: change the input, mismatch the expected value, do something to prove that when things go wrong, the test does catch it. It's like testing a fire alarm: if it never goes off, how do we know it actually works? And that's exactly what we wanted to see. One test failed, just the one with the intentionally wrong expected message, and all the others passed. Which means now we can relax, knowing this isn't a magic test that always shows green. It actually knows how to fail when something's off. This is how you build trust in your tests: not just by watching them pass, but by making sure they can fail when they should. Just don't forget to revert that line. Pushing a broken test on purpose? Yeah, I wouldn't do that. Not even as a joke.
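One quick reference before we move on: both the live console output we just relied on and the Allure results folder used in the next lesson are driven by a few lines in pytest.ini. Here is a hedged sketch of what those settings might look like; the exact values depend on your project.

```ini
# pytest.ini (sketch)
[pytest]
# -s streams print output to the console; --alluredir tells allure-pytest where to store raw results
addopts = -s --alluredir=allure-results
# show log messages live in the console while the tests run
log_cli = true
log_cli_level = INFO
```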
26. Generating Reports with Allure: Before we dive into the actual test report, I want to highlight a few key sections you should definitely keep an eye on, because let's be honest, these reports can get dense fast, and knowing where to look can save you a lot of time and guesswork. First, you've got the navigation panel on the left. That's your command center. One of the most useful tabs is Behaviors. This gives you a high-level overview of which features were tested, nicely grouped by functionality or story. It's perfect for understanding test coverage from a business perspective. Next, there's Suites. This shows your tests organized the same way they're structured in your code base. So if you ever wondered how well your test structure reflects your implementation, this is the place. And definitely check out the Timeline tab. It visualizes the sequence of your test executions and how long each one took. Think of it like a time-lapse of your entire test run. Okay, now that we know what to watch for, let's look at the report that was generated after running our suite. If you're using Allure and you're wondering how this whole report even gets generated, here's the trick: inside pytest.ini, this little config is what tells pytest to save all the data Allure needs, the results, logs, and all the juicy metadata, into a folder called allure-results every time you run your tests. Think of it as laying the foundation. Without this, Allure wouldn't have anything to visualize. So just by adding that line, you're essentially giving your test runs a memory, one that's searchable, clickable, and incredibly useful. All right, with that set up, let's zoom in on the failed tests. Each failure comes with its own detailed breakdown. You'll see the stack trace, the error message, and even a screenshot, assuming you've set it up right. That alone can save you hours of head-scratching. Instead of guessing what went wrong, you've got real, actionable context. This is where Allure takes reporting from "meh" to meaningful. If everything passed, awesome, you'll see a crisp green dashboard that's perfect for sharing on Slack and basking in the glory. And by the way, all of this data we've been talking about gets stored right there in the allure-results folder. Each test execution, each log, each screenshot is saved as individual files. That's the raw data Allure uses to build the report we see in the browser. But when things fail, that's when Allure becomes your best friend. So the takeaway here is simple: a solid test report isn't just for show, it's your safety net when things go sideways. Now, let's talk about generating the actual report. Once you've got your test results sitting in the allure-results folder, you can generate the visual report using a simple command. First, make sure you navigate into your test directory; that's where the allure-results folder lives. Once you're there, run the following command: allure serve allure-results. Let me zoom in so you can see it better. What this does is spin up a local server and open the report in your default browser. No extra steps, no need to manually click through folders. It just works. And here's the cool part: every time you run your tests again, the content in that folder gets updated, so you're always working with the latest snapshot of your test execution. Serve is great when you're debugging locally: it spins up a live server and opens the report in your browser, which is perfect for quick feedback during development. But in a CI/CD pipeline, you'll generally just replace the word serve with generate in the command. Now let's take a quick look at what the report actually contains. At the top, you've got the overall summary: how many tests passed, how many failed, and how many were skipped. It's like your test suite's health check at a glance. Scroll down a bit and you'll see a breakdown by feature or functionality.
Depending on how you've structured your tests, that breakdown helps you instantly spot which areas are stable and which ones need a little love. For example, here we've got a test called test_successful_login. At first glance, you might think this test passed, but it's marked in yellow, which typically means something went wrong. If you dive deeper, you'll see exactly what happened and how much time it took to run; here, it's about 11.5 seconds. That's why it's so important to pay attention not just to the final status, but to the steps, logs, and execution time. You can also dig into logs and other attachments, which is super helpful when you're debugging a flaky test or trying to reproduce a bug. And just like that, you're no longer guessing what went wrong. When you open a specific test, you can explore its full execution details, like which username and password were used, and of course the full log output. In this example, you can trace exactly what the test did, including the setup and teardown steps.

Let's now focus on a test that failed. Here you can see test_successful_login, which despite its name actually failed. And right away, I notice something strange: this test failed, and yet there is no screenshot attached. That's odd. We already know the failure happened during validate_user_logged_in, which should have triggered a screenshot capture, but for some reason it's missing. So let's investigate why that didn't happen. Here's what's likely going on: the screenshot logic is tied to the page object itself, meaning that in order to take a screenshot, the test needs access to the actual page instance at the moment the failure occurs. If the page object is no longer available, or if the assertion falls outside the context where the page is accessible, the screenshot method simply doesn't get called. The fix: make sure your validation functions, like validate_user_logged_in, either receive the page object explicitly or are structured in a way that always gives them access to it when needed. That could mean passing the page into the validation layer, or making the screenshot utility smart enough to grab it from the test context. It's a small thing, but it makes a big difference when you're trying to debug. So always make sure your assertions and your error handling have access to the page when it counts; I'll put a rough sketch of that pattern a bit further down.

Let's run all the tests again and see if our fix works. We're hoping that now, with the updated logic, the screenshot will actually get captured when the validation fails. So I'm running the suite one more time. Okay, once it's done, let's open the updated Allure report and check that specific test again. If everything went well, we should now see a screenshot attached right where the failure happened. Let's take a look. At first glance, everything looks consistent. That's promising. Now comes the moment of truth: let's open up the test_successful_login case again and check whether the screenshot was captured this time. But before jumping straight to the bottom, I want to expand all the logs and really see what's going on, step by step. Right there in the logs, we've got two clear entries saying the login failed. That's our failure being logged exactly when and where it happened, at the right log level. And if you're already deep-diving into a failing test, take the opportunity to refactor your assertions and include more descriptive error messages. Always leave the code and the logs in a better state than you found them. And there it is, our screenshot. That's the visual proof we were missing earlier.
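For reference, here's a minimal sketch of the kind of fix described above: a validation object that receives the page explicitly, so it can attach a screenshot to the Allure report at the exact moment a check fails. The selector, class name, and method signature are illustrative, not the course's exact code, and it assumes allure-pytest plus Playwright's sync API.

    import allure

    class LoginValidations:
        def __init__(self, page):
            # Hold a reference to the live page so evidence can always be captured.
            self.page = page

        def validate_user_logged_in(self, expected_text):
            actual_text = self.page.locator("#welcome-message").inner_text()
            if actual_text != expected_text:
                # Grab the screenshot before raising, while the page is still open.
                allure.attach(
                    self.page.screenshot(),
                    name="login_failure",
                    attachment_type=allure.attachment_type.PNG,
                )
                raise AssertionError(
                    f"Login failed: expected '{expected_text}', got '{actual_text}'"
                )

The important part isn't the exact structure; it's that whatever does the asserting also holds a reference to the page, so the screenshot can be taken at the moment of failure rather than after the context is gone.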
Screenshot attached, error logged, everything in place. That's how you close the loop and confirm your debugging workflow is solid. And here's the real takeaway: in a real-world scenario, this kind of bug could easily have slipped under the radar. Now imagine it made it to production: users can't log in, support tickets start flying, and the team scrambles to figure out what went wrong. But in our case, automation caught it quietly and consistently, without needing coffee or sleep. It flagged the failure, logged every detail, and when we applied the fix, it confirmed the issue was resolved, down to the screenshot. That's the value of solid test coverage. It's not just about catching bugs; it's about preventing regressions, gaining confidence, and protecting your users. This test didn't just pass or fail. It did its job. It stood guard. And that's exactly what good automation should do.

27. Course Wrap Up and Next Steps:

We made it to the finish line, and hey, that's worth celebrating. So first of all, hats off to you for sticking with it, staying curious, and pushing yourself to grow. That drive is what sets pros apart. Now, let's get real: this course wasn't just about slapping together a few test cases. The goal was to teach you how to build a high-level, scalable automation framework that you can actually use in real-life projects, not just textbook examples. Before we part ways, I want to take a few minutes to walk you through what we covered and how it all ties together.

We started by understanding why automation even matters: not just because everyone's doing it, but because it saves time, reduces human error, and boosts your team's velocity. Then we moved on to planning: how to structure folders, organize configs, and lay the foundation for something maintainable. We talked about configuration files, why they matter, and how they keep your setup flexible and in line with the DRY principle. From there, we built the base page, the beating heart of your framework. This is where we wrapped our common actions in a shared execution mechanism, a small piece of machinery that acted like a safety net for the whole system. When things went sideways, we didn't just crash and burn: we logged everything, took screenshots, captured the evidence, and fed all that data straight into our report for total visibility. Then we moved into the Page Object Model, because clean code isn't just nice to have; it's essential when you're working on a real team with real deadlines. We split logic from tests so that our code would be readable, reusable, and easy to scale as things grow.

And of course, we didn't stop at just writing tests; we talked about how to write them well, using fixtures for setup and teardown to ensure every test starts clean and runs in isolation. We also used those same fixtures to initialize all the necessary pages for validation, making sure each check had the full context it needed. Then we got into data-driven testing: running multiple scenarios through the same test structure, which is super useful when you're dealing with lots of permutations. I'll drop a tiny reminder sketch of both patterns at the end of this recap. And at the end of the day, all of this fed into a smart, centralized report that told us exactly what happened during each test run. No detective work required.

Now, if there's one key message I want you to walk away with, it's this: automation is not just code, it's strategy, a strategy that helps you write cleaner, modular, maintainable, and scalable test suites. One that adds value from day one, not just someday.
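And just as a quick reminder of what two of those building blocks look like in practice, here's a tiny, illustrative sketch, not the course's exact fixtures or data: a pytest fixture that handles setup and teardown of a Playwright page, and a parametrized test that runs the same flow with several data sets.

    import pytest
    from playwright.sync_api import sync_playwright

    @pytest.fixture
    def page():
        # Setup: a fresh browser and page for every test, so runs stay isolated.
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            yield page
            # Teardown: always close the browser, even if the test failed.
            browser.close()

    @pytest.mark.parametrize("username,password,should_succeed", [
        ("valid_user", "valid_pass", True),
        ("valid_user", "wrong_pass", False),
    ])
    def test_login(page, username, password, should_succeed):
        # In the real framework this would drive the login page object;
        # the point here is that each data row becomes its own isolated test run.
        assert isinstance(should_succeed, bool)

Small building blocks like these are exactly what turn a pile of scripts into that strategy.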
Think of it like designing a city: you're not just building roads, you're planning for growth, flexibility, and long-term clarity. So I'd love to hear from you. Drop a comment and let me know what your biggest takeaway was. And if you've got an idea for another course, ping me; I love building content that actually helps people solve real problems, in a language we all speak: clean code. If you enjoyed this course, I'd truly appreciate your review. It helps keep me motivated to create even better content next time. And hey, if you know someone who could use this course, share it forward. Let's grow this community together. Thanks again for being here. I had a blast creating this course for you, and I have no doubt we'll cross paths again one day, whether in another course, a project, or somewhere out there in the test automation universe. Until then, keep coding smart. See you soon.