Software Testing: Learn with Interview Questions & Answers | Jimmy Mathew | Skillshare

Software Testing: Learn with Interview Questions & Answers

Jimmy Mathew, Agile Consultant


10 Lessons (1h 33m)
    • 1. Course Introduction

      1:49
    • 2. Software Testing Introduction

      8:25
    • 3. Sample Application

      1:31
    • 4. Test Classifications

      31:12
    • 5. Software Testing Basics

      12:40
    • 6. Traditional and V Model

      8:37
    • 7. TDD ATDD BDD MBT

      15:41
    • 8. Test Plan

      6:02
    • 9. More Testing Types

      6:07
    • 10. Outro Our Courses in SkillShare

      0:55

Community Generated

The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected.

2 Students

About This Class

Learn the theoretical basics of software testing with a course flow based on interview preparation, with questions and answers.

This course is designed keeping job interviews in mind. We proceed based on interview questions.

Here we will be discussing the theoretical basis of testing. This course covers questions from the basics to advanced topics, and from traditional testing approaches to the latest trends in software testing.

This is for anyone who is preparing for interviews for software testing jobs. It is for anyone who wants to pursue a new career in software testing, or wants to strengthen their fundamentals in this field.

We will start our discussion with a quick introduction to software testing. We discuss why it is important, the principles of software testing, and the key skills required in this field. There are different ways to group or classify software testing methods or approaches. We will discuss commonly used classifications and types of testing. We will discuss test scenarios and learn to write test cases. There are lessons on the defect life cycle and its classifications.

There are modules on traditional testing approaches, and on new approaches like test-driven development (TDD) and acceptance test-driven development (ATDD). We will discuss all of these, and there will be an introduction to model-driven development and model-based testing.

Along with this, a list of different types of testing, with short descriptions, that are not covered in other modules is provided at the end of this course.

Content:

Introduction

      Course Introduction

      Testing Principles

      Testing Skills

Test Classifications

      Test Types

      Testing Levels

      Testing Approaches

      Testing Techniques

Test Basics

      Test Scenarios

      Test Cases

      Test Data

      Requirement Traceability Matrix

      Defect Classifications

      Defect Life Cycle

Testing Processes

      Traditional SDLC

      V-model

      Software Test Life Cycle (STLC)

      Test Driven Development (TDD)

      Acceptance TDD (ATDD)

      Behaviour Driven Development (BDD)

      MDD & Model Based Testing

Test Plan

      Key Elements of Test Plan

      Criteria

More Test Types

Meet Your Teacher

Teacher Profile Image

Jimmy Mathew

Agile Consultant

Teacher

Around 15 years in IT, 7 years of Agile experience, playing various roles of Agile Coach, Scrum Master, Trainer, etc.

Publications  

Book: - Scrum Tales: Stories from a Scrum Master's Diary  

Book: - Agile Life: Understanding agile in a non-software context  

Certifications

 

·      ICP-ACC – ICP Agile Coaching

·      CSP – Certified Scrum Professional  

·      CSM – Certified Scrum Master  

·      SAFe - Scaled Agile Framework-Agilist  

·      PSM1 – Professional Scrum Master  




Transcripts

1. Course Introduction: Hello, welcome to this training on software testing. This course is designed keeping job interviews in mind. We proceed based on interview questions. Here we will be discussing the theoretical basis of testing. This course covers questions from basic to advanced topics, and traditional testing approaches to the latest trends in software testing. This is for anyone who is preparing for interviews for software testing jobs. This is for anyone who wants to pursue a new career in software testing or wants to strengthen their fundamentals in this field. We will start our discussion with a quick introduction to software testing. We discuss why it is important, the principles of software testing, and the key skills required in this field. There are different ways to group or classify software testing methods or approaches. We will discuss commonly used classifications and types of testing. We will discuss test scenarios and learn to write test cases. There are lessons on the defect life cycle and its classifications. There are modules on traditional testing approaches and new approaches like test-driven development or TDD, and acceptance test-driven development or ATDD. We will discuss all of these, and there will be an introduction to model-driven development and model-based testing. Along with this, a list of different types of testing, with short descriptions, that are not covered in other modules is provided at the end of this course. 2. Software Testing Introduction: We will start our discussion with our first question. What does the phrase software testing mean for you? Please explain. Software testing aims at assuring software quality. In testing, we inspect whether the product or component that we have developed is as per the requirement and is usable for the end customer. That is, we check: is the product developed right? And have we developed the right product? The product is tested to find any existing defects, deviations, and missing functionalities.
Why do we test? What are the benefits? Why is it so important? Why should we give a lot of attention to this area? There are many reasons, including, but not limited to, product quality. Testing makes sure that we develop the right product and we develop the product right. The product must be usable and satisfy the end customer's expectations. Why do we try to detect issues as early as possible? It is cost-effective. The earlier we find a bug, the less costly it will be. If a defect is found at the early stages of the development, it will be easier to fix; as the development progresses, fixing it becomes costly. So the aim is to find the defects as early as possible. Security: this is one of the most sensitive benefits of testing. Testing makes sure that the software is secure and safe. Customer satisfaction: all activities and all other benefits in turn focus on this goal. The customer should get the product that he or she was looking for. It should satisfy his or her quality and usability expectations. What are the principles behind software testing? There are many studies and many schools of thought resulting in different ways of summarizing these principles. Here we are discussing a commonly used set of seven principles of software testing: early testing; defect clustering; testing shows the presence of defects; the pesticide paradox; testing is context dependent; the absence-of-errors fallacy; and exhaustive testing is not possible. Early testing: as we have mentioned, it is comparatively easier to fix defects detected in the earlier stages of development. So testing must start as early as possible. We need not wait for a working software to start the testing activities. It can be started even in the earlier stages of development. Requirements can be reviewed for their completeness and correctness. Also, we can be prepared for the testing during the later stages.
We can be prepared with test plans, test scenarios, and test cases. We can understand the test environments and be prepared with the required testing infrastructure and test data. What is meant by the term defect clustering? Defect clustering is based on the Pareto principle: 80 percent of occurrences will be coming from 20 percent of the system under reference. In other words, 80 percent of the defects will be clustered around 20 percent of the modules in the software. This helps in identifying the high-risk areas of the software which contribute a major part of the defects. Do 100 percent green test results guarantee defect-free software? Testing shows the presence of defects, not the absence of them. Testing uncovers the defects existing in the system. This helps in fixing those defects and reduces the probability of issues existing in the software. It never guarantees the absence of defects. In simple words, we may not be able to find any more defects in the software, but that doesn't guarantee that there are no defects in the software. What is the pesticide paradox? Our tests might have given great results in the past. They have discovered many defects and helped us in improving the software. But using the same tests repeatedly over time may become useless in finding new defects. It is like insects developing resistance to the same pesticide over time. The underlying idea is that our tests must be updated and improved from time to time. We should review and enhance the tests so that they remain relevant and improve their probability of detecting new defects. Can we follow the same testing strategy for all types of applications? Testing is context dependent. Not all software is the same. The same will be the case with testing. How we test the software depends on its context: the type, nature, and intent of that software. Testing a point-of-sale software will be different from that of an inventory management software.
Depending on the nature of the software, we select the testing approach, strategy, test environments, and test types. Can a defect-free state of the software alone guarantee its quality? The absence-of-errors fallacy: the absence of errors doesn't guarantee that the software is useful for the customer. We might have developed the product right, as per the documented requirements, but it may not be the right product. We might have developed it for a wrong or incomplete requirement. So testing is not limited to finding the defects. It includes checking if the product is useful for the customer. At which point can we confirm that our tests cover all possible scenarios? Exhaustive testing is not possible. This can be read along with another principle: testing shows the presence of defects, not the absence of them. It is nearly impossible to do 100 percent testing. Even in the case of many simple software applications, there are a lot of possible ways of use. The environment may change, there are different ways that people use the application, user inputs vary a lot, and so on. It doesn't mean that we need not test the complete software. We must do it. We should select the right approaches, prioritize, and try to reach as close as possible to the state of completeness. Before ending this introduction, let's quickly visit some of the skills expected in a testing professional. What are the key skills for a testing professional? How do analytical skills help a tester? Good analytical skills will help in better understanding of the business scenarios: analyzing the business case, understanding the user requirements and the expected user interactions with the system, and coming up with effective test cases. We are creating and executing the test cases, so communication skills are not as important as in development. What is your opinion? Like any profession, good communication skills make a huge difference. Both verbal and written communication skills are equally important.
A testing professional interacts on a daily basis with many stakeholders, including, but not limited to, business, developers, customers, and management, just to name a few. He or she should be clear about the test plan and test strategies and create well-documented, clear test scenarios and test cases. He or she should be well organized and manage the time to optimize the chances of moving closer to a 100 percent tested, error-free software. Other skills include, but are not limited to, a great attitude and passion for the job. If you are into test automation, you must have technical skills with the expected levels of expertise. This can be programming skills, expertise with testing tools, database knowledge, et cetera. 3. Sample Application: Going forward in this course, we will be discussing a few examples for better understanding of concepts. For that, we are introducing a small application. This is a webpage for flight booking. More precisely, this is a part of a flight booking application where the user can log in and search for available flights. We are keeping this simple with very few requirements. Requirement one says a registered user should be able to log in with his e-mail ID and password. Requirement two says a new user should be able to register with his e-mail ID and select a password. Requirement three says a logged-in user should be able to search for flights between two cities for a given date. This is enough for the time being. Please remember this small set of requirements: a registered user should be able to log in with his e-mail ID and password; a new user should be able to register with his e-mail ID and select a password; a logged-in user should be able to search for flights between two cities for a given date. 4. Test Classifications: There are different ways that we can classify testing approaches, types, and techniques. You might have heard about many classifications and hundreds of fancy names for test types.
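As a concrete reference point, the three requirements of the sample application can be sketched as executable test cases. This is a minimal illustration only: the `BookingApp` class, its methods, and the canned flight data are hypothetical names invented here, not part of the course material.

```python
# A minimal in-memory sketch of the sample flight booking application.
# All names and data are illustrative assumptions.

class BookingApp:
    def __init__(self):
        self.users = {}  # e-mail ID -> password
        self.flights = {("COK", "BLR", "2024-06-01"): ["AI 505", "6E 331"]}

    def register(self, email, password):
        # Requirement 2: a new user registers with an e-mail ID and a password.
        if email in self.users:
            return False          # duplicate e-mail IDs are rejected
        self.users[email] = password
        return True

    def login(self, email, password):
        # Requirement 1: a registered user logs in with e-mail ID and password.
        return self.users.get(email) == password

    def search(self, source, destination, date):
        # Requirement 3: a logged-in user searches flights between two cities.
        return self.flights.get((source, destination, date), [])


def test_requirements():
    app = BookingApp()
    assert app.register("a@b.com", "secret")      # new user can register
    assert not app.register("a@b.com", "other")   # duplicate e-mail rejected
    assert app.login("a@b.com", "secret")         # registered user logs in
    assert not app.login("a@b.com", "wrong")      # wrong password rejected
    assert app.search("COK", "BLR", "2024-06-01") == ["AI 505", "6E 331"]
```

Each assertion is one test case traced back to a requirement, which is exactly the mapping a requirement traceability matrix records.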
Here we will discuss the main categories and commonly used test types. A bigger list is added at the end of this course. What are functional and non-functional testing? Functional testing validates the software against the requirements, or in other words, the functional specification. It makes sure that the software is doing its job as per the requirements. There are many considerations, including, but not limited to, the expected functionality of the software, its usability, whether the system is accessible for the user at the expected location, and so on. This is a type of black-box testing. We will be discussing black, white, and gray box testing as the next topic. What are the steps followed in functional testing? The process usually proceeds as follows. First, we understand the functionality, the expected functional behavior of the system. Let's take our example, the flight search. This is one of the functionalities. Then we decide and create the input data. In our case, it is the source location, destination, and date of travel. We will consider different possibilities. Then we will find the expected outputs. What are the routes or flight options that we expect our system to display? This is based on the test data we have. We then run the software with our input data and compare the results with the expected output data. What are the usual considerations in non-functional testing? As the name suggests, non-functional testing addresses the non-functional aspects of the software which are not covered in the functional testing. For example, consider our booking application: how much load can the system withstand, or in other words, how many users can log into the system simultaneously? Non-functional testing deals with the usability, efficiency, maintainability, and portability of the software. There are many parameters under consideration. A few of them are listed below. How reliable is the software?
How long can it perform its functions continuously without failure? How safe is the software? This is very important when it handles personal and financial data: how effectively can it protect itself, and in turn the users of the system, from possible external attacks or hacks? In case of a failure, how efficiently can the system recover? That is the survivability aspect of the system. The next parameter is related to what we were discussing so far: to what extent can we depend on the system? Will it be available for the users all the time to perform the desired operations? Again, we have developed the software as per the requirements, but is it the right product? Is it usable for the end user? That is the usability aspect: can the user easily understand the ways to interact with the software and perform operations without any issues? Next, the point we have discussed earlier in this section: how far can the system scale up? How many users can be handled by the software at any point of time? How easily can our software interact with other systems? In our example, our application needs to interact with other systems like payment gateways, airlines, and so on. How safely and efficiently can our software handle these interactions? Other useful parameters can be efficiency, flexibility, portability, reusability, and so on. There are different types of tests addressing these parameters. Remember, these names vary a lot. The same type of testing may be addressed with different names. Some of these types are load testing, failover testing, usability testing, stress testing, volume testing, security testing, and so on. Explain manual and automated testing. In manual testing, test activities are performed manually without using any automation tools. The overall process is: understand the requirements, identify the test scenarios and test cases, define test data, execute, report issues or defects, and retest once they are fixed.
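The functional-testing flow discussed earlier, deciding the inputs, working out the expected outputs, running the software, and comparing the results, is also exactly what an automated test encodes. Below is a table-driven sketch of that flow; `search_flights` and its canned route data are assumptions made for illustration, not the course's own code.

```python
# Canned route data standing in for the real system under test.
ROUTES = {
    ("COK", "DEL"): ["AI 430"],
    ("COK", "BLR"): ["6E 331", "AI 505"],
}

def search_flights(source, destination):
    """Return the flights on a route (empty list when none exist)."""
    return ROUTES.get((source, destination), [])

# Step 2 and 3 of the flow: each test case pairs input data with the
# expected output we worked out in advance.
CASES = [
    (("COK", "DEL"), ["AI 430"]),
    (("COK", "BLR"), ["6E 331", "AI 505"]),
    (("DEL", "COK"), []),   # no flights on this route in the test data
]

def run_functional_tests():
    """Step 4: run the software on each input and compare with expected."""
    failures = []
    for inputs, expected in CASES:
        actual = search_flights(*inputs)
        if actual != expected:
            failures.append((inputs, expected, actual))
    return failures   # empty list means every comparison passed
```

The same table of (input, expected output) pairs can be executed by hand in manual testing or fed to a tool in automated testing; only the execution mechanism changes.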
Manual testing holds its space and maintains its relevance even in the era of test automation. An application needs to be manually tested before it can be automated. Remember the fact: 100 percent test automation is not possible. What are the benefits of manual testing? Manual testing gives accurate and visual feedback. It is easier to respond to changes in the requirements and to add or adapt to new test scenarios. Changes are less costly, as we don't have to change any code or tool configurations. What are the drawbacks of manual testing? As it is carried out manually, there are possibilities of human error. The execution and reporting depend on human judgment. Testing and retesting will consume a lot of time, as each time we have to manually execute the steps. There are also scenarios which can't be executed manually. In automated testing, test cases are executed with the help of tools. Test steps are programmed or coded and configured with the help of automation tools. The execution can be triggered manually or on certain events like a successful build. It can also be scheduled to run at a given time. This can be a partially or fully automated process. What are the advantages of test automation? There are many advantages, including, but not limited to: it helps in finding defects easily; it provides a speedy and efficient process; these tests can be recorded, reused, and repeated with less effort; it is conducted using software tools, so there are fewer chances of human error; and it gives predictable testing coverage, as we know how these tests are executed. What are the limitations of automated testing? On the other side, there are limitations as well. As there is less human involvement, it is difficult to test the visual aspects of the application. Tests are dependent on the code behind them. If the steps are not coded or configured error-free, faulty results will be produced each time we execute them.
Automation tools have their own limitations, which limit the extent to which we can go with the automation. Maintenance will be costly, as we have to keep the test scripts updated for any change in the requirements. A few more points, a summary, before we close this topic. In manual testing, tests are executed manually by human beings, whereas in automation, it is with the help of tools. Automation saves time, cost, and manpower, as once automated, it is easier to run the test cases. Almost all kinds of tests can be executed manually, but in automation, it depends on the tools used and the stability of the system. It is recommended for stable systems and for regression testing. Manual testing can be repetitive and boring, and it may bring in more human errors. In automation, tests are executed repeatedly with the same levels of accuracy. White, black, and gray box testing: another classification of the testing approaches is white-box testing, black-box testing, and gray box testing. This depends on the level of visibility or access the tester has to the system under consideration. What is white-box testing? In white-box testing, the tester has full access to the system, the code behind the system, and its internal architecture. We know how the system is processing the inputs and creating the outputs. This analyzes the internal structure of the software and the logic behind it. A good amount of technical knowledge is required to perform white-box testing. This can be done at different levels of testing. We will be discussing the testing levels in the coming sessions. An example of this can be control flow testing: we examine how the control flows in the system for a given input, test the logic behind it, and find possible issues. Another example can be data-flow testing, where we track how the system processes the data. What are the pros and cons of white-box testing? We go inside the system, into the code and architecture of the software.
It reveals the hidden issues more efficiently. We trace the test cases to the lowest level and thus have more control over any future changes and their impact on the system. White-box testing performs a detailed check on the internal logic and ensures better coverage and traceability. On the other side, this approach is time-consuming and needs technical and architectural knowledge. It focuses on the current state of the code, and future changes in the code will invalidate our results. It validates the logic existing in the system, and to some extent, it may fail to detect missing functionalities in the software. What is black-box testing? In black-box testing, as the name suggests, the system is a black box for us. We don't know the internals of the system. We give inputs to the system, get the outputs, and compare them with the expected results. We check the functionality of the system. We don't need coding skills or architecture knowledge to perform these tests. We are testing the software from the user's point of view. This can also be applied at different levels of testing. What are the pros and cons of black-box testing? Black-box testing helps in finding out any issues with the functionality of the software. It uncovers any missing functionalities and deviations from the functional specification. It will be less biased, as we think independently of the internal code and architecture. We take the customer's view and evaluate the system. It requires clear and comprehensive requirements specifications. The results depend on how well the requirements are captured in the documents. We may not be able to predict and test all possible scenarios. What is gray box testing? In gray box testing, we have a mix of elements from both white and black box testing. It is a combination of these two methods. This is an attempt to give the benefits of both white box and black box testing while avoiding their limitations. The tester will have limited access to the software internals.
It is again based on the functional specification, with an overall idea of the different layers and a high-level view of the overall architecture of the system. It is not as intrusive as white-box testing, but along with the specifications, a high-level view of the internals is considered while designing the tests. While black box testers make sure everything is fine with the functionality, and white-box testers go deep into the software and fix issues at the source, gray box testers address both at the same time in a non-intrusive manner. What are the pros and cons of gray box testing? Gray box testing offers the benefits of both white box and black box testing, while at the same time trying to remove the disadvantages. Still, it maintains a safe distance from the internal code structure and thus remains less biased. It helps in designing effective tests covering exceptional scenarios, as we have an overall idea of the layers, interfaces, and data structures. It requires more coordination among testers and developers. There are chances of redundant or repeated test cases. The test coverage may not be as good as in the case of white-box testing. It may not be suitable for all types of systems and specific project environments. Please note that the selection of one or more of these approaches depends on the type of the software and the environments where it is created. List a few parameters that you consider for globalization testing. Another classification of our interest is localization testing and globalization testing. The parameters of our interest include language, currencies, address formats, mobile number formats, date formats, and so on. What is the localization approach in testing? In localization testing, we test the software in a specific geographical or cultural environment. We target the end users coming from this specific category. Here, the targeted customer base will be limited. What is the globalization approach in testing?
In globalization testing, we check that the software behaves well in different geographical and cultural environments. Here, we assume that the software will be used globally across a wide range of customers: how it performs with different languages, currencies, global standards, and so on. What are the different levels in testing? We conduct tests at different levels, the most widely used classifications being unit testing, integration testing, and system testing. What is unit testing? In unit testing, individual components of the software are tested to make sure that they behave in the expected way. It is usually done by the developers along with coding. The test object can be a code segment, a procedure, a module, or an object. It helps in identifying and fixing bugs early in the development. As unit tests target a small portion of code, it will be easier to find and fix bugs. What is integration testing? In integration testing, different software modules are tested together to check how they work in an integrated environment. The interactions and dependencies between different modules are tested here. What are the different approaches for integration testing? There are different approaches in integration testing. This can be a big bang approach or an incremental approach. What is a big bang approach? In the big bang approach, all units are integrated and tested as one unit. For this, all modules must be ready for testing. It is a convenient method for testing small systems, but this may not always be possible. What is incremental testing? The next approach is incremental testing. Two or more logically related modules are integrated and tested together. More modules will be added to this in an incremental way and then tested. This can be carried out in three different ways: the bottom-up approach, the top-down approach, and the sandwich approach. What is the bottom-up approach?
In the bottom-up approach, the low-level or base-level modules are tested and integrated first. After they are tested, we move up to the next level, and this continues till the top-level module or modules are tested. In our booking application, there will be modules that fetch the available source and destination cities. These modules may be interacting with different external systems to get a list of the cities. And there will be modules that display this data in a formatted way so that the user can select two of them. These two modules will be integrated and tested before we move on to the next level and test the flight search functionality. What is the top-down approach? In the top-down approach, the top-level modules are tested first. Then the lower-level modules will be tested and integrated to make sure that the entire unit works together. In our example, first we test the search functionality, assuming the cities are already selected. Then we test and integrate the module for selecting the cities. What are stubs and drivers in testing? In both approaches in integration testing, there will be modules that depend on other modules. We may not be able to independently test each module, as they exchange data with one another. To help us in this situation, we use stubs and drivers. These are proxy modules that substitute the actual modules. Please note, it is not for the module under test; it substitutes the other modules that provide data for the module that we are testing. A stub is a proxy module that is called by the module under test. A driver is a proxy module that calls the module under test. These proxy modules simulate actual data and interact with or receive calls from the module under test. What is sandwich testing? There is another hybrid integration testing approach called sandwich testing, where we use a combination of top-down and bottom-up approaches. Top-level modules will be integrated with low-level modules, and at the same time, low-level modules are integrated with the top ones.
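The stub and driver idea above can be sketched in a few lines. In this hypothetical example (the class and method names are invented for illustration), the top-level flight-search module is tested before the real city-listing module exists, so a stub supplies canned city data, and the test code itself plays the role of the driver.

```python
class CityServiceStub:
    """Stub: a proxy called BY the module under test, returning canned data
    in place of the real (not yet integrated) city-listing module."""
    def available_cities(self):
        return ["COK", "BLR", "DEL"]

class FlightSearch:
    """Module under test; it depends on a city service for valid cities."""
    def __init__(self, city_service):
        self.city_service = city_service

    def validate_route(self, source, destination):
        cities = self.city_service.available_cities()
        return source in cities and destination in cities and source != destination

def drive_tests():
    """Driver: the proxy that CALLS the module under test with test data."""
    search = FlightSearch(CityServiceStub())
    assert search.validate_route("COK", "BLR")      # valid route accepted
    assert not search.validate_route("COK", "COK")  # same city rejected
    assert not search.validate_route("COK", "XXX")  # unknown city rejected
    return True
```

When the real city-listing module is ready, it replaces the stub and the same driver re-runs the checks against the integrated pair.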
We will be using both stubs and drivers in this approach. What is system testing? In system testing, we test the entire system; that is, a fully integrated system will be under test. We test the end-to-end functionality of the system. The entire system is tested against the specifications. We test its interactions with external systems as well. This is black-box testing. What are some examples of system test types? There are many types of system testing, all of which address different aspects of the system. In short, we can say we test the entire system. The types of system testing include, but are not limited to, functional testing, load testing, regression testing, usability testing, recovery testing, migration testing, and so on. What is acceptance testing? This is one of the important aspects of the testing. After the system testing, acceptance tests are conducted, usually by the customers, in a production-like environment. They verify that the system meets all the requirements and is ready for release to the end users. What are the different types of acceptance tests? There are different types of acceptance tests: user acceptance testing or UAT, contract acceptance testing, regulation acceptance testing, operational acceptance testing, and so on. What is user acceptance testing or UAT? User acceptance testing, or UAT, is done by the customer or end user to confirm that the system meets the requirements and is usable. This confirms whether it can be released for its actual use to the end users. What is contract acceptance testing? Contract acceptance testing goes beyond the functional acceptance and checks that all agreed-upon deliverables are ready and delivered. It checks that all deliverables meet the quality standards and the criteria set in the contract. What is regulation acceptance testing? Regulation acceptance testing is also known as compliance acceptance testing.
It tests if the software meets the laws and regulations, including those set by governments and other legal entities. It confirms that the software adheres to all regulations for releasing it to the targeted audience. What is operational acceptance testing? Operational acceptance testing, which is also known as operational readiness testing, makes sure that all required items are in place for the software to go live. It includes customer support, user training, backup, and recovery plans. Two other key phrases that we come across are alpha testing and beta testing. Let's have a quick look at those. What is alpha testing? Alpha testing is carried out in a development or testing environment by a team of experts. They are called alpha testers. Their feedback helps to make the software more usable and to reduce defects. It is performed to identify all possible issues and bugs before releasing the final product to the end users. What is beta testing? In beta testing, we expose the software to the end-user environments. It is tested by the actual users. There are no restrictions, and they test the software as real end users. Their feedback is recorded and addressed to improve the quality and usability of the software. Now let's discuss two other types called smoke tests and sanity tests. What is smoke testing? Smoke testing is a quick verification carried out after the software build but before detailed testing. It verifies that the critical functionalities are working fine. It makes sure that the testers can work on the software. The software is rejected if it is not in a testable condition. This saves the testing team from wasting their time on a software version which is not ready for testing. This performs a quick screening, ensuring that the software is up and running: the screens are opening, the data is available, the major functionalities are working fine, and so on. What is sanity testing? Sanity testing is another quick verification.
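A smoke suite like the one described can be sketched as a short script that screens a build before detailed testing begins. This is a hedged illustration: the two checks are placeholders for real health checks (for example, an HTTP ping or a UI load) against the application.

```python
def app_is_up():
    """Placeholder for a real health check, e.g. pinging the app URL."""
    return True

def login_page_loads():
    """Placeholder for a real check that the login screen opens."""
    return True

def run_smoke_suite():
    """Run all critical checks; reject the build if any of them fails."""
    checks = {"app_is_up": app_is_up, "login_page_loads": login_page_loads}
    failures = [name for name, check in checks.items() if not check()]
    return ("REJECT", failures) if failures else ("ACCEPT", [])
```

The point of the sketch is the gate: detailed testing proceeds only when the suite returns `ACCEPT`, saving the team from working on an untestable build.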
When the software is delivered with minor changes, it quickly verifies that the defect is fixed and no new defects are introduced by these changes. In the case of failure, the software is rejected, saving the time and cost of further testing. The objective is not to verify the entire system, but to check that the changes made by the developer work fine. What are the main differences between smoke and sanity testing? If we compare both, we will see that smoke testing verifies the critical functionalities of the system, while sanity testing focuses on new functionality or changes. Smoke tests verify the end-to-end system. Sanity testing concentrates on changed modules. Usually, smoke tests are part of acceptance testing and sanity tests are part of regression testing. What is regression testing? Regression testing makes sure that the new changes to the software are not adversely impacting the existing functionalities. When a new feature is added or a bug is fixed, the code changes may have an impact on existing functionality. After we fix a bug, we will be executing the relevant test cases to make sure that the issue is fixed. This is retesting, but it doesn't guarantee that other parts of the software are not broken by this change. So we perform regression testing. Re-running all test cases will make sure that there are no issues, but it may not be practical. It is costly and time-consuming to execute all test cases for each and every change in the system. In another approach, we select a number of existing test cases that give us confidence that the system is not affected by recent changes. What are the considerations while selecting test cases for regression testing? Selection of test cases for regression testing usually includes tests for the critical functionalities, tests for modules which contribute more to the defect count, test cases for dependent modules, related integration test cases, and so on. In another approach, we can prioritize these tests.
And based on the changes and the expected impact, we can decide how many test cases must be a part of the regression suite. Explain a few software testing techniques. Before moving on to the next section, let's quickly review a few software testing techniques. As we have mentioned earlier, exhaustive testing is impossible. These techniques will help us in achieving better test coverage. A few of them are discussed here. What is boundary value analysis? This focuses on data that the software is expected to handle. It tests how the software handles input values which are above, below, inside, and outside the expected range of inputs. For example, imagine the valid input for a field on the screen is numbers one to nine. Here we will test with the values 0, 1, 9, and 10. What is equivalence class partitioning? Another software testing technique is equivalence class partitioning. In this, we group the possible input values into different groups and make sure our tests take at least one value from each group. Here, we assume that the values within a group are similar and have the same behavior in the system. Take a case where the valid input for a field called age is all numbers between 11 and 40, including both. We create groups like: up to 10, 11 to 40, and 41 and above. Here, the second group has valid inputs and the other two have invalid values. We make sure that the test cases test values from all three groups. How are decision tables used in testing? What is a cause-effect table? Another technique is decision-table-based testing. Here we consider a combination of input values and test how the software responds to it. Also, there may be inputs which need to be validated based on the value of other inputs. So we can find different combinations of input values. We create a table with the different input parameters as rows and the different combinations as columns. For each column, we will record the output and decide if it is a pass or fail. This is also known as a cause-effect table.
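The boundary value and equivalence partitioning examples above (an age field valid from 11 to 40) can be expressed as a small sketch. The validation function `is_valid_age` is assumed for illustration; it stands in for the system's real input check.

```python
def is_valid_age(age):
    """Assumed system rule from the lesson: valid ages are 11..40 inclusive."""
    return 11 <= age <= 40

# Boundary value analysis: values just below, on, and just above the edges.
boundary_cases = [10, 11, 40, 41]
boundary_results = {value: is_valid_age(value) for value in boundary_cases}

# Equivalence class partitioning: one representative value per group.
partitions = {"up_to_10": 5, "11_to_40": 25, "41_and_above": 50}
partition_results = {name: is_valid_age(v) for name, v in partitions.items()}
```

Only the values on either side of each boundary, plus one representative per partition, are needed; testing every age from 0 to 100 would add no new information under the equivalence assumption.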
What is the state transition technique? Another technique is the state transition technique. In the state transition technique, we test the system for a sequence of input conditions and verify how the system responds to it. So it is used for testing a sequence of events. A good example can be the login functionality of our flight search application. In the first login attempt, the user could enter valid or invalid credentials. The system is then expected to respond accordingly. The same inputs are possible for the second and third attempts as well. But if the third attempt fails, the system is expected to lock his or her account. This is an example of the state transition technique. What is meant by error guessing? Another technique is error guessing. In error guessing, we depend on the experience of the testing professional. From previous experience testing similar software, we predict possible error conditions and error-prone areas of the software. We list down these error conditions and design test cases for the same. For this to be successful, good experience of testing or working with similar systems is essential. We have reached the end of this module. Next, we will discuss test scenarios, test cases, defects, and so on. 5. Software Testing Basics: What are test scenarios and how do we create them? Test scenarios represent different ways that a user will interact with the system, or different situations that the system goes through. These are also known as test conditions or test possibilities. They show different possible ways that we can test the system. We think from an end-user point of view and analyze the possible ways of using the system, and also the possible ways a user may try to abuse the system. How do we create test scenarios? For this, we go through the requirements in detail. For each requirement and for its combinations, we list down user actions and their objectives.
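The login-lockout example from the state transition discussion above can be modeled as a small state machine. This is an illustrative sketch: the three-attempt limit comes from the lesson's example, while the state names and structure are assumptions.

```python
class LoginStateMachine:
    """Sketch of the login states: LOGGED_OUT -> LOGGED_IN, or LOCKED
    after three consecutive failed attempts (assumed policy)."""
    MAX_ATTEMPTS = 3

    def __init__(self):
        self.failed_attempts = 0
        self.state = "LOGGED_OUT"

    def attempt_login(self, credentials_valid):
        if self.state == "LOCKED":
            # A locked account ignores further attempts.
            return self.state
        if credentials_valid:
            self.state = "LOGGED_IN"
            self.failed_attempts = 0
        else:
            self.failed_attempts += 1
            if self.failed_attempts >= self.MAX_ATTEMPTS:
                self.state = "LOCKED"
        return self.state
```

State transition testing then walks sequences of inputs (valid, invalid, invalid, invalid, ...) and checks that the machine ends in the expected state after each step.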
We make sure all requirements are covered and all scenarios are connected to at least one requirement. This ensures better coverage and traceability. For our sample application, one of the test scenarios can be checking the login functionality. Explain test case with an example. Test cases can be viewed as a sequence of steps or actions that can be executed to verify a feature or functionality of the system. A test case consists of test steps, test data, preconditions, and postconditions. Test scenarios deal with different possible actions or interactions that the user performs within the system. Test cases are more specific in terms of user actions and variations. For example, consider our test scenario: user login to our flight search application. There are many possible test cases here: the user entering valid credentials and pressing the login button, the user entering invalid credentials and pressing the login button, the user pressing the login button without entering any values, and so on. Each test case will have a test case ID, related test scenario, precondition, test steps, test data, expected result, actual result, and a pass or fail decision. For example, in our case, the test scenario is a user login. The precondition can be that the URL is available. The test steps are: the user opens the flight search URL, the user enters an email ID, the user enters a password, the user clicks the login button. The test data contains a valid user email ID and password. And the expected result is that the user is successfully logged in to the application. What are the common features of a test case? There are some common features of a test case. A test case must be written in a simple and transparent language. It should be clear and unambiguous so that any new person can understand and execute it. While designing test cases, we keep the end-user perspective: how will he or she use the system and interact with it? Avoid repetitions. Be specific and concrete on the task.
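The login test case described above can be captured as a structured record plus a small executor that fills in the actual result and the pass or fail decision. The field and function names here are illustrative, not a standard format.

```python
# Hypothetical structured form of the lesson's login test case.
login_test_case = {
    "id": "TC-001",
    "scenario": "User login",
    "precondition": "Flight search URL is available",
    "steps": [
        "Open the flight search URL",
        "Enter email ID",
        "Enter password",
        "Click the login button",
    ],
    "test_data": {"email": "user@example.com", "password": "valid-password"},
    "expected_result": "User is logged in to the application",
}

def execute(test_case, login_fn):
    """Run the test data through a login function and record the verdict."""
    actual = login_fn(**test_case["test_data"])
    test_case["actual_result"] = actual
    test_case["status"] = (
        "PASS" if actual == test_case["expected_result"] else "FAIL"
    )
    return test_case["status"]
```

Comparing the recorded `actual_result` against `expected_result` is exactly the pass/fail decision the lesson lists as the final field of a test case.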
For example, we are writing test cases for flight search. Instead of adding steps to log into the application, it is better if we add that in the precondition. Don't keep your assumptions in your head. Write them down in the preconditions, steps, or wherever applicable. Don't assume requirements. Stick to the specifications. If there is an ambiguity, get it clarified and make sure that the specifications are updated by their owner. Make sure that the test cases cover all possible scenarios. A test traceability matrix can help with this. We will be discussing traceability matrices soon. Look for applicable testing techniques. We have already discussed different techniques like boundary value, state transitions, error guessing, and so on. Make sure that the test environment remains in a stable and good condition after executing the test cases. For example, in our flight search, we are testing a scenario where the airline systems are not available for fetching the data. We may be disconnecting the system or disabling the stubs for this. After execution, make sure that the system is back in its normal condition. What is test data? Test data is the input data that is given to the system while performing tests. It is as important as test cases; or we can say that it is an essential part of test cases. Even well-designed test steps won't serve their purpose without proper test data. There are many ways to classify test data. We have indirectly mentioned these types in previous lessons. Among the different types of test data, there can be normal data or valid data, abnormal data or invalid data, and boundary data. These are self-explanatory. Test data can be used as an input during the test case execution, or can be injected, or in other words, inserted in the backend, to establish preconditions for the test. Test data can be generated manually or using automated tools. Whatever the source, it must support the test, making sure it covers all possible values and combinations.
What is a requirements traceability matrix, and how is it used? Traceability matrices are used to establish connections or relationships among different artifacts in software development. A requirements traceability matrix, or RTM, tracks the requirements to other artifacts like design specifications, modules, code, test scenarios, test cases, and so on. In this training, our focus is the traceability from requirements to the tests. This makes sure that we have covered all requirements and helps in verifying the current status of the requirements. The contents of this matrix, in relation to our scope, can be requirement ID, requirement type, requirement description, test scenario, test case ID, and test status. What are different types or directions of traceability? Different types or directions of traceability can be covered. It can be forward traceability, where we check that all the requirements are covered in the implementation and are tested by the test cases. Backward traceability checks for scope creep. It makes sure that all implementations and tests are connected to a requirement in a specification and that we are not unnecessarily adding scope. Bidirectional traceability is a combination of both. It makes sure that all requirements are covered and that we are working only on the given requirements. This also helps in analyzing the impact of defects at different areas of the software development ecosystem. What are the key benefits of traceability matrices? Traceability matrices help in moving towards 100% test coverage. They highlight any missing requirements and expose any inconsistencies. They help in maintaining transparency on the current state of development. They assist in analyzing the impact of changes made at any levels or stages of development. What is a defect or bug in software development? While testing, we find the issues, abnormalities, or inconsistencies existing in the system and report them as defects.
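A minimal forward and backward traceability check, as described above, might look like this sketch. The requirement and test case IDs are invented for illustration.

```python
# Hypothetical RTM data: requirements, and a mapping of test case -> requirement.
requirements = {"REQ-1": "User login", "REQ-2": "Flight search"}
test_cases = {"TC-001": "REQ-1", "TC-002": "REQ-1"}

def uncovered_requirements():
    """Forward traceability: requirements with no test case covering them."""
    covered = set(test_cases.values())
    return sorted(r for r in requirements if r not in covered)

def orphan_tests():
    """Backward traceability: tests not connected to any known requirement
    (a sign of scope creep)."""
    return sorted(t for t, r in test_cases.items() if r not in requirements)
```

Running both checks together gives the bidirectional view: every requirement has a test, and every test traces back to a requirement.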
The defect goes through different states before it is fixed, closed, or removed. This depends on the development process and policies existing in the development organization. What we discuss here is an example, and there may be variations in the names and in the process flow. Explain the defect lifecycle. When we find a new issue or defect, it will enter the system in a new state. It remains open until it is fixed or removed. It may get assigned to a team or person for fixing. It may get rejected if it is an invalid defect, termed as a duplicate, or deferred to later stages in the development. When the assignee takes the necessary actions and fixes the defect, it is moved to a fixed state and will be pending retest. The tester will recheck the issue, and the defect will either be put in a verified state or be reopened as a defect. A verified defect will be closed once all required confirmations and documentation are in place. What are different ways of classifying defects? What are defect categories? There are different ways of classifying defects. We can group defects based on the source, nature, priority, or severity. Like in the case of the defect lifecycle, the defect classifications, their naming conventions, and the expected actions may vary from one organization to the other. Defects can originate from different sources. Issues can be found in the requirements, design, code, or even in the test itself. Another classification can be based on the nature of the defect. It can be a functional issue, where the behavior of the software is not in line with the specified requirements. It can be a performance issue, like speed, response time, or number of parallel users. There can be usability defects that make it inconvenient for the users to take actions. There can be user interface issues that have a negative impact on the user experience. Software may run into compatibility issues with other systems, environments, or configurations. Another type in this classification is security defects.
One of the most commonly used defect classifications is based on severity and priority. How do we classify defects based on severity? Defects exist at different severity levels. Critical defects block the usage of the system, and we can't proceed without fixing them. For example, after logging into our flight search application, it shows a blank screen. High severity defects impact one or more core functionalities of the system. In our flight search application, the software might display flight options between some other cities which are not selected by the user. Medium severity defects are those that impact some minor expectations from the system. For example, our search results in the flight search application are not sorted by price, but the user can click on price and still get them sorted. Low severity defects can be small issues in the user interface, like font size, color, highlighting, and so on. How do we classify defects based on priority? Let's review a few priority levels. Urgent defects must be fixed within a short period of time. The critical level defects fall in this category, but sometimes even a low severity defect can get an entry in this category. For example, some critical display item, like the company name or legal identification, may be misspelled in the UI, which needs to be fixed urgently. High priority defects need to be fixed in the next release or patch. It can be something that impacts the workflow, but workarounds are available. The next level is medium priority defects, which can be fixed in the next release or in subsequent releases. It can be some formatting issues, date formats in some specific browsers, and so on. Fixes for low-priority defects are good to have, but usually don't block the acceptance of the system. It can be cosmetic issues like minor alignment issues, font color, and so on. 6. Traditional and V Model: Testing lifecycle and processes. In this section, we start our discussion on the testing process.
We will start with the traditional software development lifecycle, the waterfall approach. Then we will move on to different testing processes, including the latest testing methodologies. We will discuss the V-model, the software testing lifecycle, test-driven development, acceptance test-driven development, model-driven development, model-based testing, and so on. What is the waterfall approach? The traditional software development lifecycle starts with a requirement phase. In this phase, all the requirements are captured and documented. The requirements are frozen and signed off with the customer before the development starts. Next is the design phase. We decide on the technical aspects and create high-level and low-level designs. Then we will move on to the next phase, the build or coding phase. After this, the software is handed over to the testing team, and the testing phase starts. After different levels of testing and fixing, we deploy or release the software, and the maintenance phase starts. This approach is called the waterfall model. Control flows from one stage to the other. This looks to be a smooth framework, but it comes with many issues. One of the main challenges in the waterfall model is in terms of testing. Testing happens only late in the development phase, so it will be costly to fix the bugs. Bugs may be existing even from the requirement phase, and their impact grows as the software is developed. Many times, towards the end, requirements may remain untested or partially tested. What is the V-model in software development? To address the issues with the waterfall approach, the V-model of software testing was developed. In this model, there is a testing phase parallel to each development phase. This is an extension of the waterfall model that demonstrates the relationship between development and testing phases. You may find some variations, especially in the phase names and scope, among different representations of this model.
For this course, we will take a generic approach. The phases are represented in a V-shape. On the top left, we have the requirement phase. Below that, the high-level and low-level designs. Then we have our coding or build phase. Then it goes up on the right side, defining the testing phases in parallel to the development phases. What are the main testing phases in the V-model? There is system testing in relationship with, or in parallel to, the requirement phase in the V-model representation. There will be integration testing connected to high-level design and unit tests connected to low-level design. The phases on the left are the software development life cycle, or SDLC, and those on the right are referred to as the software testing life cycle, or STLC. Next, we will have a quick visit to the software testing lifecycle. Explain different phases in the software testing lifecycle. The STLC, or software testing lifecycle, lists down a sequence of phases or steps that are performed to ensure the quality of the software. Testing activities should start as early as possible in the software development lifecycle. Testers analyze the requirements, keep an eye on the development, and plan, design, and keep the test cases ready. Once the software is developed, we can start the test execution without any delay. This saves time and cost. Here we are listing down a number of steps in this life cycle. These steps vary a lot based on the nature of the project and the organizational process framework. The different phases that are commonly used are requirement analysis, test planning, test case designing, test environment setup, test execution and defect reporting, and test closure. What are the key expectations from the requirement analysis phase in the software testing lifecycle? Requirement analysis. In the requirement analysis phase, the focus of the testing team is to understand the requirements. The test team members being a part of this phase helps a lot.
They analyze the requirements and understand the scope and the focus of the testing. We consider the functional and non-functional requirements. We go through the documents and interact with different stakeholders to get their perspective. In this phase, we get initial ideas on different aspects. We discuss what are the relevant testing types, what are the focus and the priorities, how do we track the requirements to the testing activities, and what will be the environments the software is targeting. We also check the feasibility and need for test automation. What are the key activities in the test planning phase of the software testing lifecycle? Test planning. As the name suggests, in the test planning phase, we create a detailed plan for testing activities. This includes many aspects, like our approaches and strategies for testing, selecting required tools and processes, estimation and resource planning for testing activities, identifying the training requirements, and so on. It is as important and detailed as the planning in the development lifecycle, and it supports it. In this phase, we come out with a test plan and a strategy, as well as estimations. What do we do in the test case designing phase of the software testing lifecycle? Test case designing. In the test case designing or test case development phase, we create test scenarios and test cases. This is done along with creating or deciding on the test data and other details. Test cases are developed, required scripts are written, and all of these are reviewed, reworked, and kept ready for the testing to begin. We update the traceability matrices as we develop the testing artifacts. How do we set up the test environment? Test environment setup. It is better to have a test environment which is as close as possible to the actual production environment. This activity can be performed in parallel with the test design activities.
We understand the targeted production environments and design the required hardware and software infrastructure along the same lines. This could be created either by the testing team or can be given to them. In both cases, we should examine the environment and make sure that it serves its purpose. We may conduct a smoke test to confirm the testing setup. What are the key activities in the test execution phase? Test execution and defect reporting. Once we have the software, the test execution will start. We run or execute the test cases and scripts, and keep reporting our findings. We make sure the required documents and traceability matrices are updated. We retest the fixes and track and handle the defect lifecycle until closure. From time to time, we create the required reports on the defects as well as the testing process. What is the final phase in the software testing lifecycle? Test closure. The closure phase mainly includes formal confirmations, reporting, and documentation. We create the final test reports, defect status, and other test artifacts. We take some time to analyze our activities and create lessons-learnt documents. The relevant documents are archived, and contributions are made to the company knowledge base. This was a quick visit through the traditional software testing lifecycle. Now we will go through a few testing approaches that we are following today and the approaches that are gaining momentum in the industry. 7. TDD ATDD BDD MBT: What is test-driven development or TDD? This is a test-first approach for any change or new functionality. Test cases or scripts are written first, and code is developed to fix the failing test cases. Based on the specifications, the tests are designed first. They will fail, as the required changes or functionality are not in place. Then we develop the software components or make the necessary changes in the software for the tests to pass. Then we improve the code and retest. This cycle continues.
What are the different steps in test-driven development, or TDD? The different steps involved in TDD are: create the required test, run it, and fail; add the minimum code for passing the test, run it, and pass; refactor the code and repeat. First, write the test case and make it fail. First, we create the test. This is done based on the specifications. This can be a tedious job, especially when we go for automated test scripts. We run the tests against the existing software; they should fail. This makes sure that the tests are checking the intended change. Here we are focusing on a specific single functionality. This gives better traceability and focus. Next, make minimal code to pass the test. Once the test is in place and fails, we develop the minimal code to address the change or create the core functionality. In this step, we do just as much work as needed to make the test results green. Now, we run the test, and this time it should pass. If not, we get real-time, early feedback that our implementation is wrong. We make the required changes and make it pass. Next, refactor and improve. Once the core functionality is up and passing, we concentrate on improving the code. We enhance the code to meet the standards, meet the non-functional requirements, and make it more readable and maintainable. Then we repeat the test and do refactoring. This is an incremental approach. We improve the code and retest it one step at a time. We may improve the tests as well. This process repeats until we reach the required level of quality. What are the benefits of test-driven development or TDD? The benefits of test-driven development: this process improves the quality of the system. We concentrate on the requirements and have a clear idea of why we are writing a piece of code. We are coding to fix a failing test case. This gives early feedback and helps in finding issues early. We make the minimum required changes first, and then improve the code. This gives us more focus and improves our code quality.
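The red-green steps above can be illustrated with a tiny example. The discount requirement here is hypothetical, chosen only for illustration; the point is that the test exists first, and the function contains only the minimal code needed to make it pass.

```python
# Step 1 (red): the test is written first, from the specification.
# Hypothetical requirement: fares above 500 get a 10% discount.
def test_discount():
    assert apply_discount(600) == 540
    assert apply_discount(400) == 400

# Step 2 (green): the minimal production code that makes the test pass.
# Before this function existed, test_discount() failed, confirming the
# test checks the intended change.
def apply_discount(fare):
    return fare * 0.9 if fare > 500 else fare
```

Step 3 (refactor) would then improve naming, handle edge cases, or extract constants, re-running `test_discount` after each change to keep the bar green.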
It saves time by reducing rework and unwanted lines of code, which in turn improves the productivity of the programmer. This results in better traceability and documentation and creates more maintainable code. Coding is done to fix the test cases, which results in better test coverage. What are the limitations of TDD? There is something on the flip side as well. This new process may create unrest and ambiguity. We may be confused about where to start and which test cases can be written and run. The team will be focusing only on a specific functionality at a time and may lose the overall picture and the business goals. We will be focusing on the specifics of a method or function, and often forget how it is connected to, and what it performs in, the larger business environment. We are creating tests from the lowest level. This gives better coverage, but may result in a large number of test cases which are difficult to maintain. The code will be only as good as the tests for which it is developed. Any issues in the tests will reduce the code quality and impact its functionality and the overall application. What is agile software development, and how does test-driven development fit in there? Test-driven development and agile. Agile software development focuses on an iterative and incremental way of development, and focuses on early and consistent feedback, close collaboration, and better communications. There are many software development approaches under the Agile umbrella, including Scrum, Kanban, Extreme Programming, and so on. Extreme Programming lists test-driven development as one of its core practices. How does test-driven development help in Agile software development? As mentioned, agile promotes early and consistent feedback. In test-driven development, coding is done to fix the existing tests, and it is tested in every refactoring step. This helps in early and continuous feedback. Test development and coding are done in an iterative and incremental way.
We start with the minimal code for passing the tests and keep improving it. Test-driven development demands close collaboration among testers, programmers, and business. Tests are designed per the requirements and specifications, and code is written to pass the tests. Effective communication at all stages of the lifecycle is essential for the success of test-driven development. What are the key differences in the approaches followed in traditional testing and test-driven development? Both test-driven development and traditional testing serve the same purpose of reducing defects and improving the quality of the software. The difference is in the approach and focus areas. In traditional testing, a failed test indicates the presence of a defect. In test-driven development, it shows and triggers the required code changes. Do both sound the same? They are not. Listen to it again. In traditional testing, a failed test case indicates the presence of a defect. In test-driven development, it shows and triggers the required code changes. You can see the focus or the approach is different. In traditional testing, the focus is on the test cases that find the defects in the software. Test-driven development focuses on the code that makes the tests pass. Test-driven development offers better test coverage, as tests are in place before coding. What are the different types of test-driven development? What is developer TDD? There are different approaches in test-driven development. The test-driven development as discussed so far can be called developer test-driven development, or developer TDD. We create the required tests, commonly at the unit test level, and write production code to address these tests. Most of the time, this is simply referred to as test-driven development. There is another variation called acceptance test-driven development, acceptance TDD, or ATDD.
In this approach, we first write or script a single acceptance test that addresses the requirement specification of a specific behavior of the system, and create the software elements to fulfill these criteria. This is similar to behavior-driven development or BDD. What is ATDD? Acceptance test-driven development focuses on capturing accurate user requirements and making sure that those requirements are developed and tested in the system. Acceptance tests are written from a user's perspective and focus on satisfying the functional behavior of the system. These acceptance tests provide the foundation and guidance for development and give consistent feedback on whether the system is getting developed as per the user expectations. What are the different steps in acceptance test-driven development? What is a user story? The different steps of acceptance test-driven development start with capturing the requirements from the end user's point of view. Usually, this is documented in the form of user stories. A user story is a requirement expressed from a user's perspective. It describes what the user wants to do with the system and why it is required. For example, in our flight search, the login functionality can be expressed as: as a user of the portal, I want to log into the system so that I can search for available flight options. What are acceptance criteria in a user story? User stories include acceptance criteria; in other words, acceptance test criteria that explain how we can confirm the said functionality is successfully implemented. This gives a baseline for acceptance testing. This can have many conditions. In our example, it goes like: when the user tries to log in with a valid username and password, the system must take him to the search page. When the user gives invalid credentials, the system must show an error message. If the username field is empty, the system must prompt the user to enter the details. There can be more or fewer items in this.
These acceptance criteria are converted to test steps in a sequentially executable format. They may be automated using scripts and testing tools. With the current state of the software, these test cases will fail. Then we create the minimum required software elements to satisfy the test cases. We retest to see if our implementation satisfies the user expectation. This is followed by refactor-and-retest loops to improve the implementation. We can summarize the steps in acceptance test-driven development as red, green, and refactor. When we create and run a new acceptance test, it fails and we are in red. Then we make it green by adding the required code or software components. Then we refactor and improve the implementation.

What are the benefits of acceptance test-driven development? Acceptance test-driven development focuses on customer needs. Requirements are analyzed in detail and converted to tests, and in turn to the software implementation. It forces the team to think from the end user's perspective throughout the development process. This approach promotes collaboration among all parties involved. They work together to make this possible. Testing happens along with the development and results in fast feedback loops. More importantly, here we get the feedback from a user's perspective as well. It brings better requirement traceability and helps in improving coverage in terms of requirements.

Can test-driven development and acceptance test-driven development co-exist? How do we implement TDD with ATDD? Developer test-driven development, which is commonly known as TDD, and acceptance test-driven development can co-exist in the development process. Test-driven development can be plugged into acceptance test-driven development. The larger picture can look like this.
We can plug test-driven development into the build phase of acceptance test-driven development. While making changes for the required functionality, which is addressed by the acceptance test, we deep dive and come up with the different elements required for its implementation. We develop the feature or the user requirement in an incremental way. In each iteration, we follow test-driven development. This adds up to the change that the acceptance test-driven development is trying to implement. The control then shifts back to acceptance test-driven development, and we progress through its refactor and retest loops.

What is behavior-driven development, or BDD? Behavior-driven development is another variation. This is similar to acceptance test-driven development, but here the focus is on the system's behavior. Again, the intent is the same, but the focus and the approach differ. BDD defines various ways to develop based on the expected system behavior. It uses formats like Given-When-Then to capture and communicate the behavior. For example, in our application we can say: given the user entered a valid username and password, when he or she clicks the login button, then he or she reaches the flight search page. This follows similar steps as in TDD or ATDD. First, we define the expected system behavior. Then we define the steps for that. As it is, these steps will fail. We then write the code required to pass the steps, and the process continues. In BDD, we use nontechnical language that anyone can understand. This helps in better communication and collaboration among all parties involved. The tests are more user-focused, concentrating on the expected behavior, and thus reduce the chances of post-development defects.

Model-based testing: before closing this section, let's look at one more approach. What is model-driven development? In model-driven development, or MDD, we focus on the construction of a software model before the actual software is built.
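The Given-When-Then login scenario described above can be sketched in plain Python. In practice, teams typically use BDD tools such as Cucumber, SpecFlow, or behave with plain-text feature files; here the steps are ordinary comments and functions, and `click_login` is a hypothetical stand-in for the real system.

```python
# Hypothetical stand-in system: valid credentials reach flight search.
def click_login(session):
    if session == {"username": "user1", "password": "pass1"}:
        return "flight search page"
    return "login page"

def test_login_reaches_flight_search():
    # Given the user entered a valid username and password
    session = {"username": "user1", "password": "pass1"}
    # When he or she clicks the login button
    page = click_login(session)
    # Then he or she reaches the flight search page
    assert page == "flight search page"

test_login_reaches_flight_search()
```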
This model represents the system behavior. It helps in capturing the user requirements. We analyze and improve the model with inputs from all stakeholders and develop based on the model. Once the software is created, it can be tested using model-based testing, or MBT. This approach helps to implement software quickly, effectively, and at minimum cost. The methodology is also known as model-driven software development, or MDSD.

What is model-based testing? In model-based testing, we check the behavior of the software against predictions made in the model. The model represents the system's behavior. This can be described in terms of input sequences, actions, conditions, output, and the flow of data from input to output. Here, we check how the actual system responds to an action which is defined in the model. We perform the action on the system and check if it responds as expected. An example of such a model is the state chart. This represents the system behavior in terms of different states and the transitions between states. We can use a standardized general-purpose modeling language, like the Unified Modeling Language, or UML. This creates graphical, visual representations in a common language and helps in analyzing and understanding the model. This is a big topic in itself and an evolving approach, with new initiatives like agile model-driven development, or AMDD.

8. Test Plan: Before finishing this training, we will discuss some topics related to the test plan. What is a test plan? A test plan describes different aspects of testing, like the test strategy, objectives, schedule, estimation, deliverables, and resources required to perform testing. It serves as a guide for all test activities and helps in the execution, monitoring, and management of test activities. The format of the test plan depends on the process framework and policies of the executing organization and of the customer. In general, the test plan has the following elements. Explain the key elements in a test plan. Scope of testing:
We define what is in scope for testing. Decisions are made on the application boundaries where we perform the test. But the term scope is not limited to this. It also includes items like which layers and components we need to test, and what the involvement will be at different phases.

Identifying the testing type or types: How do we do the testing? What will be our approach? We discuss the different options and decide on the types of testing that need to be performed. We may select one or more approaches, types, and so on. We decide on the approaches to follow at different levels, stages, or phases of development.

Document assumptions, risks, and issues: As in the case of a project management plan, in the test plan we document any assumptions that we make. We may be creating a plan assuming certain elements will be available on a given date. We analyze and find possible issues and risks involved in test execution. These can be technical or non-technical. For example, there are chances that a test infrastructure component or a tool does not behave as expected, or we may not be able to get experienced professionals as expected.

Test logistics: We decide who will do what and when. We decide who will be doing the testing, and we discuss the skills and experience levels required. We discuss the timing of the tests: when should we conduct a test activity? This will be in connection with the development plan.

Define the test objective: In clear, unambiguous language, define the overall objective of the test. The objective of testing is uncovering as many issues as possible in the system to improve its quality. For defining the objectives, we consider all the features and the functional and non-functional expectations from these features. Then we define the goals and objectives based on these. For small systems, this may be done at a higher level. For others, this can go further down to the component or feature level and roll up to create the overall testing objectives.
Test criteria: There are certain conditions that help us to make go or no-go decisions. Two criteria are commonly used.

What is meant by suspension criteria in a test plan? Suspension criteria describe conditions where we have to pause or suspend testing activities. If the software is not in a testable state, major functionalities are not working, or a huge part of the test cases are failing, there is no point in wasting our efforts on test execution. We may decide to suspend the test activities till these are fixed.

What are exit criteria in a test plan? Exit criteria define the state where we stop the testing activities. Meeting them confirms the successful completion of the testing phase. They may include the targeted run rate and pass rate. This depends on the quality standards and policies in the organization. They can include items like: 100% of critical test cases are green; 95% of all test cases are passing. An item related to the run rate can be: 98% of test cases are executed. This varies based on the performing organization and customer requirements.

Resource planning: We decide on the human and system resources required for the testing activities. We analyze the skills required for the testing professionals. We discuss the different roles and their responsibilities. We decide what skills are required at different stages, and the numbers needed. In the same way, we plan and document the system requirements: machines, networks, tools, and so on.

Test environment: We decide on the different environments required for testing and on all that is needed to make sure that the test environments closely resemble the actual end-use environment. We discuss the required hardware and software environments for test execution.

Estimate and schedule: Here we do an effort estimation for the different activities. Organization policies and the development process guide us in answering questions like at what level we estimate and how accurate these estimates should be.
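Tying back to the exit criteria above, the run-rate and pass-rate checks are simple arithmetic. This is a minimal sketch; the 98% and 95% thresholds are just the example figures mentioned earlier, not a standard, and the function name is hypothetical.

```python
# Hedged sketch of an exit-criteria check using the example thresholds:
# run rate  = executed / total    (target: 98% of cases executed)
# pass rate = passed  / executed  (target: 95% of executed cases pass)
def exit_criteria_met(total, executed, passed,
                      run_target=0.98, pass_target=0.95):
    run_rate = executed / total
    pass_rate = passed / executed
    return run_rate >= run_target and pass_rate >= pass_target

# 198 of 200 executed (99%), 190 of 198 passing (~96%): criteria met.
assert exit_criteria_met(total=200, executed=198, passed=190) is True
# Only 150 of 200 executed (75%): run rate too low, criteria not met.
assert exit_criteria_met(total=200, executed=150, passed=150) is False
```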
Based on these estimates, we create an overall schedule for testing.

Test deliverables: Test deliverables are the expected outputs from testing at different stages. This includes reports, documents, and other components that are developed and maintained during testing. Examples can be the test plan itself, the test strategy, test cases, test data, defect reports, release notes, and so on.

9. More Testing Types: In this section, we will quickly go through a few more testing types that we have not discussed so far.

What is ad hoc testing? This is freehand, informal testing. We test the application without any formal test cases. We experiment and try to break the system. This can be performed by anyone. It helps in finding defects which are not covered by the formal test cases.

What is accessibility testing? This checks that the application is accessible for different types of users in various environments. Disability testing, where we test that the application can be used by those with disabilities, falls under this category.

What is back-end testing? This checks the data stored in the back end, the database. It is also known as database testing. It helps in identifying data loss, data corruption, and so on.

What is browser compatibility testing? Here, we test the application in different browsers and operating systems. It checks if the application shows the same expected behavior in different configurations.

What is backward compatibility testing? When a new version of the software or a patch is applied, this checks if it works well with older environments. It also addresses situations where the new version has to interact with systems that have an older version of the software.

What is exploratory testing? This is another example of informal testing. The testing team explores the entire system to find existing issues. This is done without any specific test cases, but we document the steps followed, and if a defect is found, we may add new test cases based on our steps.
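The back-end testing idea above can be sketched with Python's built-in sqlite3 module: we store data through the application path, then query the database directly to confirm nothing was lost or corrupted. The `save_booking` function and the table layout are hypothetical, invented for the flight portal scenario.

```python
import sqlite3

# Hypothetical stand-in for the application's data layer.
def save_booking(conn, passenger, flight):
    conn.execute(
        "INSERT INTO bookings (passenger, flight) VALUES (?, ?)",
        (passenger, flight),
    )
    conn.commit()

# In-memory database playing the role of the back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (passenger TEXT, flight TEXT)")
save_booking(conn, "Alice", "FL123")

# Back-end check: read the table directly, bypassing the UI,
# and verify the stored row matches what was saved.
row = conn.execute("SELECT passenger, flight FROM bookings").fetchone()
assert row == ("Alice", "FL123")
```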
What is Graphical User Interface, or GUI, testing? This tests the screens for usability and adherence to the specifications. It covers the functioning, logical flows, and the look and feel of the user interfaces.

What is gorilla testing? Here, we test a module or functionality thoroughly and heavily with an intent to break it. This checks the robustness of the application.

What is happy path testing? Here, we execute a valid success scenario in the application. We don't give any invalid or negative inputs. This checks whether, under normal conditions with valid input, the application gives the expected results.

What is install/uninstall testing? This tests if there are any issues with the installation process of the application and checks if the process installs a stable system. It also checks if the uninstall process performs as expected and clears the required elements.

What is monkey testing? This checks how the system responds to an ignorant, or sometimes careless, user. We give all possible random inputs, maybe without any logic. Here, we are not testing with any formal test cases. Testers need not be aware of the full functionality of the system.

What is mutation testing? This is a kind of white-box testing. We make minimal changes to the code for a specific functionality without impacting the overall system. We then check if our test cases are able to find the impact.

What is negative testing? Here, we test the software with invalid or negative data and try to break the system. It checks how the application responds to error conditions.

What is recovery testing? This tests how the system reacts and remains stable in case of a failure. For example, we may unplug the network while a transaction is in process and plug it back in later to see how the system behaves.

What is static testing? This refers to testing without a working application. We check the documents, design, or models to validate and review them.

What is usability testing?
Here, we test the user-friendliness of the application. Are the user interfaces and the flow self-explanatory? Can a new user understand and perform actions within the system without much issue?

What is vulnerability testing? This identifies the weaknesses in the application that make it vulnerable to hackers and cyber attacks.

What is volume testing? This tests how the system reacts to a high volume of data. We use tools to simulate high-volume data and check how the application reacts to it.

What is configuration testing? This helps in identifying the optimal configuration and minimum requirements for the system. We test how the system behaves with different types of operating systems, memory configurations, server setups, and so on.

What is compliance testing? This checks if the system is developed in compliance with existing rules, standards, and regulations.

What is concurrency testing? This tests the behavior when more than one user performs the same action at the same time.

What is penetration testing? In this approach, we simulate a hack or an attack and check how secure the system is.

As we have mentioned earlier in the course, testing types, classifications, and processes vary a lot. The same type of testing can be called by different names in different organizations. There is considerable overlap among different approaches. We select what solves our issue. We select the processes, types, and techniques relevant for us. This course is an attempt to create a strong theoretical foundation on testing. It is only a quick starter. Your journey in this great topic has just started. Go on, keep exploring more topics. We wish you all the best with your job search. Keep learning and be a successful testing professional. Wish you all the best. Thank you.

10. Outro Our Courses in SkillShare: We would like to provide an overview of our courses on Skillshare.