
Lean Six Sigma Yellow Belt - Online Learning

GreyCampus I., Training for working professionals

48 Lessons (2h 44m)
    • 1. The Basics of Six Sigma Part 1 (1:18)
    • 2. The Basics of Six Sigma Part 2 (3:09)
    • 3. The Basics of Six Sigma Part 3 (2:30)
    • 4. The Basics of Six Sigma Part 4 (5:25)
    • 5. The Basics of Six Sigma Part 5 (3:15)
    • 6. The Fundamentals of Six Sigma Part 1 (2:35)
    • 7. The Fundamentals of Six Sigma Part 2 (4:39)
    • 8. The Fundamentals of Six Sigma Part 3 (3:35)
    • 9. Selecting Lean Six Sigma Projects Part 1 (2:19)
    • 10. Selecting Lean Six Sigma Projects Part 2 (5:34)
    • 11. Selecting Lean Six Sigma Projects Part 3 (3:10)
    • 12. The Lean Enterprise Part 1 (3:13)
    • 13. The Lean Enterprise Part 2 (5:44)
    • 14. The Lean Enterprise Part 3 (2:38)
    • 15. The Lean Enterprise Part 4 (2:37)
    • 16. The Lean Enterprise Part 5 (0:38)
    • 17. The Lean Enterprise Part 6 (4:00)
    • 18. Process Definition Part 1 (4:00)
    • 19. Process Definition Part 2 (7:19)
    • 20. Process Definition Part 3 (3:04)
    • 21. Process Definition Part 4 (4:03)
    • 22. Six Sigma Statistics Part 1 (4:02)
    • 23. Six Sigma Statistics Part 2 (4:24)
    • 24. Six Sigma Statistics Part 3 (2:04)
    • 25. Six Sigma Statistics Part 4 (1:31)
    • 26. Six Sigma Statistics Part 5 (3:42)
    • 27. Six Sigma Statistics Part 6 (2:18)
    • 28. Six Sigma Statistics Part 7 (4:23)
    • 29. Six Sigma Statistics Part 8 (3:39)
    • 30. Measurement System Analysis Part 1 (1:44)
    • 31. Measurement System Analysis Part 2 (4:47)
    • 32. Measurement System Analysis Part 3 (5:05)
    • 33. Measurement System Analysis Part 4 (3:07)
    • 34. Process Capability Part 1 (5:14)
    • 35. Process Capability Part 2 (1:30)
    • 36. Process Capability Part 3 (9:27)
    • 37. Process Capability Part 4 (3:16)
    • 38. Process Capability Part 5 (0:38)
    • 39. Lean Controls - 1 (2:00)
    • 40. Lean Controls - 2 (1:28)
    • 41. Lean Controls - 3 (1:24)
    • 42. Lean Controls - 4 (1:49)
    • 43. Lean Controls - 5 (1:29)
    • 44. Statistical Process Control (SPC) - 1 (7:32)
    • 45. Statistical Process Control (SPC) - 2 (2:17)
    • 46. Control Plan - 1 (2:27)
    • 47. Control Plan - 2 (5:19)
    • 48. Control Plan - 3 (2:14)

Community Generated

The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected.

451 Students

-- Projects

About This Class

In this training program you will learn the basics of the Six Sigma methodology and how it can be used to solve process problems. You will be exposed to tools and techniques for data collection, process analysis, process capability calculation, and control charts.

Meet Your Teacher


GreyCampus I.

Training for working professionals

Teacher

GreyCampus transforms careers through skills and certification training. We are a leading provider of training for working professionals in the areas of Project Management, Big Data, Data Science, Service Management, and Quality Management. We offer live-online (instructor-led online), classroom (instructor-led classroom), and e-learning (online self-learning) courses. Our growing suite of accredited courses is constantly upgraded to address the career enhancement goals of working professionals.


Class Ratings

Expectations Met?
  • Exceeded!
    0%
  • Yes
    0%
  • Somewhat
    0%
  • Not really
    0%
Reviews Archive

In October 2018, we updated our review system to improve the way we collect feedback. Below are the reviews written before that update.


Transcripts

1. The Basics of Six Sigma Part 1: Welcome to the first phase of Lean Six Sigma, the Define phase. In this phase we will explain the basics and fundamentals of Six Sigma, illustrate ways of selecting Lean Six Sigma projects that will provide the most benefit, and discuss the lean enterprise. In this first session, called The Basics of Six Sigma, we will talk about the meanings of Six Sigma, explore the general history of Six Sigma and continuous improvement, understand the deliverables of a Lean Six Sigma project, describe the problem-solving strategy of "y is a function of x," explain the voice of the customer, the voice of the business, and the voice of the employees and the differences between all three, and discuss the different roles within the Six Sigma structure. Some basic questions of Six Sigma are: What are the fundamentals? How does Six Sigma help me solve problems? What are the benefits of Six Sigma? What is the methodology that we follow, and how do we implement a Six Sigma process within an organization?

2. The Basics of Six Sigma Part 2: Six Sigma is a business process improvement methodology that is all about the process. There are activities associated with any given process that convert inputs into outputs. A lot of times we refer to this as IPO: inputs, process, outputs. If you draw a box around that process, it doesn't matter how big or how small the process is; every process has inputs and outputs. Every process also has variability, and that variation can be measured. Variation always exists. Variation within the process is sometimes too large, and our outputs are not acceptable in the eyes of the customer; as variation increases, our defects and defectives also increase. There are two types of variation in a process: special cause (or assignable cause) variation and common cause variation. Common cause variation occurs in the process no matter what; it is common. Six Sigma uses tools to get to the root causes of special cause variation. What we want to do is eliminate that special cause variation so that we can then reduce the common cause variation. Six Sigma is data driven. Six Sigma is referred to as 3.4 defects per million opportunities. Sigma itself is the symbol for the standard deviation of a population, and Six Sigma is plus or minus three standard deviations from the mean on the standard normal curve. It was developed by Motorola in the 1980s and then refined by General Electric and AlliedSignal. It is robust but not rigid; think of it as a toolbox rather than a set of boxes that you must check. You can see here that the Six Sigma level is 3.4 defects per million opportunities, and it produces a yield of 99.9997%. As we go down in sigma levels, you can see how the rejects increase; a 1.5 sigma level is roughly 500,000 defects per million opportunities.

3. The Basics of Six Sigma Part 3: In Six Sigma there are two major categories of improvement methodologies. DMAIC is used for existing products, processes, and services. DMADV (define, measure, analyze, design, verify) is used for new products, processes, and services in the design phase. Both of these methodologies were inspired by W. Edwards Deming's PDCA cycle. The Define phase of DMAIC, as we will be using it in this Lean Six Sigma course, starts off with defining the problem and the goal, using the project charter as the basis for this phase.
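Before continuing with the DMAIC phases, here is a quick numeric illustration of the sigma-level table mentioned in Part 2 above. It is a minimal sketch, not part of the course material, and it assumes the conventional 1.5-sigma long-term shift used in standard Six Sigma tables.

    # Sketch: convert a short-term sigma level to long-term DPMO and yield,
    # assuming the conventional 1.5-sigma shift used in standard Six Sigma tables.
    from scipy.stats import norm

    def dpmo_from_sigma(sigma_level, shift=1.5):
        # one-sided tail area beyond (sigma_level - shift) standard deviations
        return norm.sf(sigma_level - shift) * 1_000_000

    for level in (1.5, 2, 3, 4, 5, 6):
        dpmo = dpmo_from_sigma(level)
        yield_pct = 100 * (1 - dpmo / 1_000_000)
        print(f"sigma {level}: {dpmo:,.1f} DPMO, yield {yield_pct:.4f}%")
    # A 6-sigma process gives about 3.4 DPMO (99.9997% yield);
    # a 1.5-sigma process gives about 500,000 DPMO.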
The Measure phase tries to baseline the current process and understand the data we are collecting. We then analyze that data to find the root causes of the problem. We then make improvements, understand what the impact of those improvements is, and put an improvement action plan together. Then we control the process; we want to maintain the gains. Some business successes specific to Six Sigma are listed here. Some companies that have experienced great benefits from using the Six Sigma methodology are Motorola, Honeywell, and General Electric. You can see in this chart the maturity of Six Sigma over a 20-year period: it starts off in the eighties with Motorola and propagates outside of manufacturing into many more industries.

4. The Basics of Six Sigma Part 4: Implementing Six Sigma within your organization starts with management. Management has to make the decision to say, yes, we want to change. There are certain aspects that must apply in your organization in order for Six Sigma to be successful. I mentioned earlier that Six Sigma is all about the process, and a process includes inputs and outputs. The output of a process is considered a Y, or effect, or response; these are all names for the outputs of processes. In this particular example, the headache is the effect, and there may be several causes of that effect. In the Six Sigma methodology, you want to focus on the causes in order to solve a problem instead of the effect, where you are simply fixing a problem; solving problems makes sure that the problem does not come back up again. We start with practical problems in Six Sigma. We turn those into data, which is a statistical problem. We analyze that data, come up with a statistical solution, and then implement a practical solution. Six Sigma always focuses on the customer. The customer is the driving force of any Six Sigma project. There are different types of customers: they could be happy customers, they could be lost customers, they could be prospective customers. Identifying what the customer's attitude is at any given time is critical to the success of a Six Sigma project. Customers are not only external but internal as well. Internal customers are ones that receive outputs from internal processes, and there are several aspects to internal customers. External customers are not part of the organization, but they are impacted by the processes within that organization; they can be end users, and intermediate customers are any impacted parties affected by the business itself. There are different ways to gather the voice of the customer, or VOC, from external customers; you have to identify what you are trying to get from the customer. Some of the roles and responsibilities within a Six Sigma program include master black belts, black belts, green belts, executive sponsors, champions, and process owners. I'd like to walk through each one of those and identify the roles and responsibilities of each of them, starting with executive sponsors. Executive sponsors are the key to the success of an effective Six Sigma program. They are the ones that assign the resources, and the allowable resources, for a Six Sigma program to work. They are the leaders of the team for continuous improvement. Champions have to be identified at the lowest possible level that has influence over a project.
The president of a company should not always be the champion of any given project. Process owners are the ones that own the process and deal with it on a day-in, day-out basis. They are the ones that will own the control plan in the Control phase at the end of the project.

5. The Basics of Six Sigma Part 5: Master black belts play a major role in the Six Sigma organization. They are typically the bosses of black belts as well as the leaders of the continuous improvement organization. You can see some of the roles and key skills that are required for black belts, understanding that master black belts also run black belt projects in order to keep their skills sharp. Black belts run process improvement projects on a regular basis and are typically in full-time process improvement positions. They want to focus on the maximum cost reduction and profit improvement for the company, as well as the highest return on investment for their efforts. One major role that black belts play is to mentor, teach, and coach green belts as well as other folks within the organization. Green belts are usually part-time process improvement folks; the textbook says 20% of their time should be spent solving problems. They don't have as much experience as black belts or master black belts, but they still have a very good understanding of the use of the tools and principles within Six Sigma, and they work very closely with those black belts and master black belts that I mentioned earlier. The three critical elements that create a successful Six Sigma organization are dedicated resources (the master black belts and black belts as well as the green belts, in order to have that infrastructure), a systematic approach (the persistence to stay within the Six Sigma methodology is critical), and, most importantly, keeping the customer in mind, both internal and external customers, for the greatest potential impact to the bottom line: saving money. In summary, in this session we went over the meanings of Six Sigma, the history of Six Sigma and continuous improvement, and the deliverables of a Lean Six Sigma project. We talked about the problem-solving strategy of "y is a function of x," talked about the differences between and explanations of the voice of the customer, the voice of the business, and the voice of the employees, and talked about the different roles of Six Sigma and the responsibilities of those roles.

6. The Fundamentals of Six Sigma Part 1: Selecting good Lean Six Sigma projects is the beginning of success with Lean Six Sigma. So in this session we will be talking about what a process is, what CTQs are and how we use them, what the impact of cost of poor quality (COPQ) is, the Pareto analysis or 80/20 rule, and some basic Six Sigma metrics. If we take a large organization and start at the top, we can drill down into the exact issue of where the problems are. So how do we take that customer requirement, often referred to as VOC or voice of the customer, and make it something measurable? We call that the VOC-to-CTx conversion, or critical-to-X factors. Those CTx's have four elements: the name itself; the measure or metric (what are we actually going to measure); the target (what is the goal); and the specifications. The customer defines those tolerance limits, so you can see here an example of VOC-to-CTx conversion.
"I have to wait 48 hours to get a reply to a single email." That's the voice of the customer, and it can be turned into a problem statement. So what's the measure that we are going to use for that? It would be turnaround time, and then you can see the specifications: what is the customer willing to accept? Here's another demonstration of that using the CTQ tree. We have customer needs and then customer requirements; getting that voice of the customer is critical, and then we go to the detailed specifications. Remember that X's are always inputs and Y's are always outputs, or responses. So performance to schedule is our response here. How are we going to measure it? What is the target that we are looking for, or what the customer is looking for? What are those specification limits that the customer is willing to accept? And then what is our allowable defect rate?

7. The Fundamentals of Six Sigma Part 2: Cost of quality is broken down into several categories, starting with prevention costs. This course, as part of a Lean Six Sigma program, is a good example of a prevention cost. Appraisal cost is typically associated with inspection and quality control. Internal failure cost is categorized as rework and scrap, that is, defects that do not leave the factory. External failure costs are defects that do leave the factory and are either returned or are seen as lost reputation from a customer standpoint. One of the justifications for investing in a Lean Six Sigma program is that the prevention cost is a lot less than internal and external failure costs. Pareto analysis: Vilfredo Pareto was an economist in Italy who realized that 80% of the land was owned by 20% of the people, so he has been associated with Pareto analysis and the 80/20 rule. Pareto diagrams are used throughout various industries, and with them we are able to clearly see the impact that certain causes have on a process. We are able to prioritize those impacts, and the 80/20 rule applies there by saying that 80% of the issues are caused by 20% of the problems. Here is an example of the 80/20 rule in effect. The Pareto diagram itself is a combination of two different charts. One is a bar chart that shows the frequency on the y-axis and the categories along the x-axis, ranked from highest to lowest. The other piece of a Pareto diagram is the cumulative percentage line, where each category adds to the next and the summation of all of the categories is equal to 100; that's the cumulative percentage. There are primary and secondary project metrics associated with any Lean Six Sigma project. These are the main things that you are going to focus on in order to show improvement within your project. Things that you must consider when choosing your primary metric are suppliers, internal processes, and customers. Some examples of primary metrics are shown here; the list is not limited to these examples, but these are typically associated with Six Sigma projects. Secondary metrics are usually numerical representations of the primary metric, and we will go over each of these as we go along: DPU, defects per unit; DPO, defects per opportunity; and DPMO, defects per million opportunities. You can see the explanation of those metrics there.
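As a quick illustration of the Pareto analysis described above (not part of the course material), the cumulative-percentage line of a Pareto diagram can be built by ranking category counts from largest to smallest. The defect categories and counts below are hypothetical, purely to show the arithmetic.

    # Sketch of a Pareto analysis: rank hypothetical defect categories by frequency
    # and compute the cumulative-percentage line used in a Pareto diagram.
    defect_counts = {"scratches": 52, "dents": 28, "misalignment": 10, "wrong color": 6, "other": 4}

    total = sum(defect_counts.values())
    cumulative = 0.0
    for category, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += 100 * count / total
        print(f"{category:<15}{count:>5}{cumulative:>9.1f}%")
    # With these made-up numbers, the top two categories account for 80% of all
    # defects, which is the 80/20 effect the lesson describes.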
8. The Fundamentals of Six Sigma Part 3: Here's a pictorial view of the process metrics that we use within Six Sigma. Defects per unit: if we look at a windshield, a windshield may have many chips and cracks per unit, so that would be described as defects per unit. If we look at defects per opportunity, I often use text messages or phone numbers as the example: every letter or every number that is possible on there is an opportunity for a defect. And then defects per million opportunities, or DPMO, is often used to level the playing field. For example, if we go back to the automotive example, a steering wheel supplier has one opportunity per vehicle, but a tire and wheel supplier has four or five opportunities. So in order to level that playing field, you basically say: if you, the steering wheel supplier, gave me a million steering wheels, and you, the tire supplier, gave me a million tires, how many defects would you each provide based on your historical performance? That levels the playing field and says, now I can compare one to the other. Another metric that we use is yield. Classic yield, or final yield, has everything to do with how many units we started with and how many units we ended with. It does not matter at all what it took to get those units to pass at the end, whether rework or extra work or anything else; it's just final yield: I started with a certain amount, and this many came out at the end. First pass yield is a little bit different, in the sense of how many units came out right the first time: how many units did I not have to rework to get that final yield? So first pass yield means no rework and no scrap. Rolled throughput yield then takes that first pass yield, or throughput yield, for each step of the process and multiplies them by each other. So I can take multiple steps of a process and find out how many units actually got through the entire multi-step process with no issues. And then cycle time: how long does it take to do a process? So what we covered in this session was the definition of a process, CTQs, COPQ, the Pareto analysis, and some basic Six Sigma metrics like DPO, DPMO, DPU, rolled throughput yield, and final yield.

9. Selecting Lean Six Sigma Projects Part 1: We are still in the Define phase of the Lean Six Sigma project. What we are going to talk about here is why we should do it, which is called the business case, as well as the how and who that is going to do it. We'll talk about project metrics, which means we will develop metrics so that we can make sure that we make improvements, as well as the financial side of the improvements of a Six Sigma project. So, the business case and project charter: the project charter is a key element that captures and consolidates all of the bits and pieces of information on the how and who of what we're going to get done, including mission, scope, objectives, timeline, and the consequences of it not getting done. The charter is a key element because it is a commitment from the team members, from management, and a commitment to the process owners that this project will make improvements. So there are some advantages to doing a project charter that are foundational to the success of the project. A project charter should contain all of these elements. We will go through these elements in more detail, but this is the structure of a project charter.
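The metrics from The Fundamentals of Six Sigma Part 3 above reduce to one-line formulas. Here is a minimal sketch with made-up inspection figures, purely to show the arithmetic behind DPU, DPO, DPMO, and rolled throughput yield.

    # Sketch: basic Six Sigma process metrics, using hypothetical inspection data.
    defects = 30          # total defects found
    units = 500           # units inspected
    opportunities = 10    # defect opportunities per unit (assumed)

    dpu = defects / units                       # defects per unit
    dpo = defects / (units * opportunities)     # defects per opportunity
    dpmo = dpo * 1_000_000                      # defects per million opportunities
    print(f"DPU = {dpu:.3f}, DPO = {dpo:.4f}, DPMO = {dpmo:,.0f}")

    # Rolled throughput yield: multiply the first pass yield of each process step.
    step_yields = [0.98, 0.95, 0.99]            # hypothetical first pass yields per step
    rty = 1.0
    for y in step_yields:
        rty *= y
    print(f"Rolled throughput yield = {rty:.3f}")   # about 0.922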
10. Selecting Lean Six Sigma Projects Part 2: The business case for a Six Sigma project is the "why should we be doing this project?" You have to remember that projects involve a lot of people and effort. In order to be successful, the business case must be there: the payoff must be there for the investment of the resources. Another part of the Define phase that goes onto the project charter is the problem statement. The problem statement should clearly identify what the problem is, why it's a problem, and who thinks it's a problem. You can see the example here. If at all possible, include baseline metrics; but if you do not have those at the time of the Define phase, then go back and update your problem statement once you do get them in the Measure phase. Project scope is one of the key elements of a successful project. There are different ways to define scope, but it is basically the boundaries of the project. What is the project going to include? What are we going to be working on, and what are we not going to be working on? It is very important to do this as a team, so everybody is on the same page. This can be a simple brainstorming activity where a large T is drawn with "in" on one side and "out" on the other. If things are unknown at that given time, you can put them on the line, but the idea is to get them answered, either in scope or out of scope, as soon as possible. A goal statement: now that we have a problem statement, we need a goal statement. I like to use the acronym SMART. This is not a Lean Six Sigma tool, but it is applicable for goal statements. Milestones and deliverables: when am I going to do what? That's what this comes down to. A Six Sigma project should encompass enough issues to last three to four months, so breaking the DMAIC process down so that you stay on track is very helpful. Resources: I mentioned the resources that are required for a project to be successful. You certainly need qualified people; people are going to be the key to success for any given Lean Six Sigma project. But there are a lot of other items that you may want to take into consideration when doing your project. Identifying any type of resource constraint early on will help your project be successful. Here are just some points for developing project metrics. Six Sigma comes down to continuous improvement, which comes down to making money. Using those resources is going to require some sort of cost-benefit analysis. Again, why should I do this project? What type of payback will this project bring to the company? The sequence for performing a cost-benefit analysis is very important to do in the Define phase; I'll talk more about it in the Control phase once the project is completed. But balancing those two things sets expectations properly for upper management to understand what that project will accomplish, or what that project should accomplish.

11. Selecting Lean Six Sigma Projects Part 3: Like I said, Lean Six Sigma projects are all about satisfying the customer and making money, and there must be a consistent way of measuring the money being made. So I'll go over four different formulas or methodologies for capturing the financial benefits. The first one is return on assets, also referred to as ROA, and you can see the formula there, as well as return on investment, which is referred to as ROI. The next formula is NPV.
This is a little bit more complicated from a mathematical standpoint, but it is more applicable and should be used. And then the payback period: the textbook payback period for any Lean Six Sigma project should be one year after the improvements. There is always risk associated with Lean Six Sigma projects. There is a commitment to effort that we are putting in with the project charter, but there is a risk that things may change; the project may fail. There is the cultural side of risk: things don't always go the way that they should. Here are some specific examples of both business risks and insurable risks. Once we identify the risk, we have to quantify it; we have to put some sort of number scale to it. What we can do is take the probability of occurrence and then the consequence of the risk. The probability of occurrence is the likelihood, or how often it will happen, and the consequence is the severity: how severe is it, and what is the impact to the people and the business of that risk? In this session, we talked about business cases and project charters at the very beginning of a project; they help to define the why and the how. We talked about project metrics: how are we going to measure that we actually made improvements? And we talked about the financial and risk factors associated with the project.

12. The Lean Enterprise Part 1: We've spent some time talking about Six Sigma and the Six Sigma methodologies. Now let's spend some time on lean. Let's talk about the history of lean, and then let's talk about how we combine both Lean and Six Sigma in order to make a more effective problem-solving methodology. The basic premise of lean is eliminating waste, so we'll define the seven elements of waste as well as give an overview of ways that we can eliminate those wastes. Lean is all about flow. Lean always keeps the customer's perspective in mind, the VOC or voice of the customer. The things that stop flow are waste, so we are trying to eliminate waste, whereas in Six Sigma we try to reduce variation. The key factors and tools that we use within lean are listed there on your slide. The lean enterprise looks at the entire system, so that is everything from suppliers to the customer. Customer demand is everything: getting the customer what they want, when they want it, how they want it, and then eliminating the waste in the process used to get it. It is a systematic way of identifying and eliminating waste. It is not just going out there and firefighting; this is a structured process. And once you use that structured process, you can see some of the benefits that you'll experience. The five areas that drive lean: cost, of course — the business is making an effort to reduce waste, so there should be some sort of cost benefit there; higher quality; faster delivery, but not at the expense of overproduction, so again, getting the customer what they want, when they want it, how they want it; always keeping safety in mind; and one of the fundamental principles in lean is the people, so morale is a cornerstone of this.
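Returning briefly to the financial measures from Selecting Lean Six Sigma Projects Part 3: ROI, NPV, the simple payback period, and the probability-times-consequence risk score can all be sketched in a few lines. The investment, savings, discount rate, and risk ratings below are invented for illustration only; this is not course material.

    # Sketch: project financials and a simple risk score, with hypothetical numbers.
    investment = 50_000.0                              # up-front project cost
    annual_savings = [30_000.0, 30_000.0, 30_000.0]    # savings per year after improvement
    rate = 0.10                                        # assumed discount rate

    roi = (sum(annual_savings) - investment) / investment
    npv = -investment + sum(cf / (1 + rate) ** t
                            for t, cf in enumerate(annual_savings, start=1))
    payback_years = investment / annual_savings[0]     # simple payback, even savings assumed
    print(f"ROI = {roi:.0%}, NPV = {npv:,.0f}, payback = {payback_years:.1f} years")

    # Risk quantification: probability of occurrence x consequence (1-10 scales here).
    probability, consequence = 4, 7                    # hypothetical ratings
    print("risk score =", probability * consequence)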
This obviously applies everywhere. Lead goes back very far back in time. But here's just a sample of some of the founders of some lean concepts and users of these lean concepts. Frederick Taylor Frederick Taylor's now for Division of Labor. He, uh, he proved this theory in a pin making facility that's he split up the entire process into smaller processes across several people and made an exponential amount of pins. Henry Ford, known mainly for mass production, the moving assembly line actually came from an idea that he got from a meat processing facility where it was a moving disassembly. Security Toyota. Often a suit associated with judoka, he invented the automatic loom. Sojod. Elka is automatic automation with a human touch. What that loom did was automatically stop as soon as one of the threads broke. Therefore, one employee could watch several machines instead of being 12 months. Keiichiro Toyota taking that loom and fabric business into the Toyota Motor Company that we know of today I z Toyota Mechanical Engineering up again. One of the one of the greats, Toyota Motor Company inventors Eugene O. No often associated with Toyota production system. Some people think that he invented Lean. Because of this. He packaged the lean concepts into the book called The Two Production System. Shigeo Shingo. There have actually been a an award associated to world class lean organizations called the Shingo Award, named after him, James Womack and Daniel Jones, often associated with books. The Machine That Changed the World, which is talking about the automotive industry as well as lean thinking. A non Sharma, a student in the Jeep Jixian You jitsu group in Japan, learned directly from the Toyota uh, pioneers brought that to America and recognized a tremendous amount of success. Michael George authored several books within the lean six Sigma discipline, then Sugar Jitsu as a company within Japan helps to bring lean into the 21st century 14. The Lean Enterprise Part 3: Let's now put Lean and six Sigma together, so both are focused on improvements. Six Sigma is focused on reducing variation, while Lean is trying to eliminate waste. How do we justify those? Ah, those improvements or how do we measure those improvements? Six. Sigma again, 3.4 defects per 1,000,000 opportunities Using the statistical number and with lean, we measured velocity or speed to the customer Main savings in Six Sigma. We're tryingto improve our quality. So our savings is the C o. P. Q. And then lean. We reduce our operating costs by eliminating wastes. The learning curve six Sigma is a little bit more complicated in data driven than, uh, then six, then lean. So the learning curve for six Sigma is a little bit longer. Project selection. In lean, we use the value stream map, Teoh, identify waste and, uh, in projects to eliminate waste. While six segment could come from a variety of places such as company goals, management objectives, problems in general, so six Sigma projects typically run a little bit longer than then lean. Some of that is because of this driver data we have Teoh. We have to collect data in a little bit more complicated method with the Six Sigma project than a lean project, and the complexity six Sigma is a little bit more complex. Where Lien is, ah, a little bit easier to identify and fix. So should they exist together? Ah, I absolutely believe so. It's a It's a way of combining two different sets of tools or two different toolboxes in order to accomplish the same thing, which is process improvement. 15. 
15. The Lean Enterprise Part 4: The lean tools and principles can be used in many ways to improve processes. Eliminating waste and creating flow are the main goals of lean. Every organization has its own challenges. So how do Lean and Six Sigma combine? We will follow the DMAIC process: define, measure, analyze, improve, control. You can see here the combination of Six Sigma tools with lean tools for each one of those phases of the process. There are five principles of lean implementation. Those principles start off with identifying the customers; the customers will define what value is. Next, we need to map the process. We use something called a value stream map to do that; the value stream map helps us to identify and eventually eliminate waste, creating flow by eliminating waste. Responding to customer pull is responding to customer demand: as the customer wants something, we are able to respond to that with our processes. And then pursue perfection, which is often referred to as continuous improvement: never stop improving. There are seven elements of waste. Waste is referred to as muda in the Japanese language; it stops flow. These are the seven categories of waste, and I will go through each one of them and give examples. Overproduction is considered the worst waste of the seven because it produces or creates all the other wastes; it is giving the customer not what they want, when they want it. Correction: the waste of correction deals with defects. If you create a defect, it obviously is a waste; you either have to scrap it or rework it. Excess inventory is a waste as well; inventory costs money to produce and to store, and it can go obsolete, in which case we would have to scrap it. The waste of motion is the bending, turning, twisting, and climbing of human workers. Overprocessing is a waste and is often associated with the office area or computer transactions.

16. The Lean Enterprise Part 5: The waste of conveyance is the transportation of material. This can be done in different formats using forklifts, conveyors, pallet movers, even people. The waste of waiting is the easiest one to understand and notice.

17. The Lean Enterprise Part 6: 5S is a system that helps to eliminate waste. It is all about everything having a place and everything being in its place. It comes from five Japanese words that start with S; we have translated those into five English words that start with S and are pretty closely related. The idea, though, is that everything has a place and everything is in its place. The first S is sort. What we do there is remove any unnecessary items that are not needed in that immediate area. That doesn't mean throw them out; it means that if an item is not used there, bring it to where it is used, and if it is not used at all, then yes, dispose of it. Straighten, or set in order: this is where you identify the locations for all of the items. The next one is shine: clean and inspect. There is no use in having tools and equipment that are dirty and not in working order, so this is the opportunity to shine. The fourth S is standardize: standardize where it makes sense. One workstation may have a completely different function than another workstation, so they do not have to look the same. The other definition of standardize is to measure; measuring helps to keep score.
This also helps in the hardest of the S's, which is the sustain phase: having a process, having an audit process, and having daily, weekly, and monthly work schedules helps to make it a habit. There are two other S's that you may hear of: the sixth S is safety, and the seventh S is security, both physical and intellectual property security. Some other lean techniques are kaizen, which means small, incremental improvements; poka-yoke, which means mistake proofing; kanban, an inventory management tool that we will get into; just in time, JIT, having the equipment, the people, and the materials necessary when they are needed; and then jidoka, automation with a human touch. Again, we'll get into all of these in more depth. Takt time: I mentioned earlier that lean is focused on the customer, so takt time helps to define customer demand. Heijunka is a scheduling methodology for mixed models. And then value stream mapping, which I mentioned earlier, is a structured way to map the current process, build a future state (the current state and future state value stream maps), and identify and eliminate waste. So in this session on the lean enterprise, we talked about the definition of lean, the history of lean, how Lean and Six Sigma coexist within an organization, and then the seven elements of waste, with the purpose of lean being the elimination of those seven categories of waste. In the Define phase we talked about the basics of Six Sigma, the fundamentals, ways of selecting Lean Six Sigma projects, and the lean enterprise. So that is the completion of the first phase of DMAIC, the Define phase; we will now go into the Measure phase.

18. Process Definition Part 1: Welcome to the Measure phase of Lean Six Sigma. The main purpose of the Measure phase is to baseline and document the current state process; we need to know what is going on right now in the current process. We do this by understanding the different aspects of our given process. We are going to collect data as well; before we can collect data, we have to put a data collection plan together and make sure that our measurement system is sufficient, so that when we do collect data, we can trust the data and make good decisions based off of it. We'll start to analyze the data using Six Sigma statistics as well as process capability: how well does our process meet customer requirements? In this particular session, we'll be talking about tools such as the fishbone diagram, also known as the cause-and-effect diagram or Ishikawa diagram. We'll talk about process mapping, SIPOC, and a specialized process map called value stream mapping, and we'll talk about the XY diagram and failure mode and effects analysis, or FMEA. Let's first talk about a process. There are inputs to a process, often called X's or factors; we are able to manipulate those factors in order to get certain outputs of a process. The upstream process is full of suppliers that provide inputs to the process. The process creates outputs that go to downstream processes where there are customers. We can measure inputs and we can measure outputs; again, inputs are referred to as X's and outputs are referred to as Y's. We can put process controls and process measures within the process itself as well. Feedback from downstream will improve upstream. The cause-and-effect diagram is a brainstorming tool used to understand different causes for a certain effect. What we are able to do is brainstorm ideas, but a fishbone diagram will never tell you exactly what the cause of your problem is. We call it the fishbone diagram because it looks like a fish bone. There are six bones off of the main bone where the problem statement, or effect, is written. The six bones are often referred to as the six Ms: they are the six main categories that any cause of a problem will fall into. Here is an example of a fishbone diagram. It does not matter which bone a particular cause is put onto; it is a brainstorming tool, and the idea is to gather ideas. Those categories on the bones are simply triggers for your brainstorming session. Another tool that we use in Six Sigma is called the SIPOC. The SIPOC is built around the process in the middle: the process produces outputs that go to customers, going to the right, and there are inputs provided by suppliers, going to the left side of the diagram.
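As a small aside on takt time from The Lean Enterprise Part 6 above: it is normally computed as net available working time divided by customer demand over the same period. Here is a minimal sketch with assumed shift and demand figures; none of these numbers come from the course.

    # Sketch: takt time = net available time / customer demand (all values assumed).
    shift_minutes = 8 * 60        # one 8-hour shift
    break_minutes = 50            # lunch and breaks
    daily_demand = 230            # units the customer needs per day

    available_seconds = (shift_minutes - break_minutes) * 60
    takt_seconds = available_seconds / daily_demand
    print(f"takt time = {takt_seconds:.0f} seconds per unit")   # about 112 s per unit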
Tell the fish bone diagram because it looks like a fish bone. There are six bones off of the main bone where the problem statement or effects, is written of six bones are often referred to as the sixth EMS. They are the six million categories that any cause of a problem will fall into. Here is an example of fish bone diagram. It does not matter what bone a particular cause is put onto is a brainstorming tool. The idea is to gather ideas. Those categories of the bones are simply brain triggers for your brain storming session. Another tool that we use in six Sigma is called the site Bach. The site back is built around the process. In the middle process produces outputs that go to customers going to the right. There are inputs provided by suppliers going on the left side of the diagram. 19. Process Definition Part 2: The site clock is a very powerful tool used in lean six Sigma projects, and I encourage you to always use one for every project. It could be a little confusing at first, but again are very powerful when used properly. The basis of Six Sigma is all about process, understanding and identifying what those inputs and outputs to your process. Art is critical to a successful project process. Map it. There are many different ways that we can do mapping of the current state process as well as the future state process process Maps help us to see a tremendous amount of what's going on within the process. Remember, too, that there are always different resolutions of process maps from high level. To more detailed here is an example of a flow chart. You will notice that the beginning of a process and any time there's a the end of the process is represented with a circle or oval process. Boxes are process. Steps are represented by squares or rectangles. Decision boxes are represented by diamonds. There should always be two and only two arrows that come out of a decision box Yes into No . If we use those same flow, charting terminology and symbols weaken. Organize that into something called a swim laid flow chart. This allows us to see the workload within each function as well as who has the decisions, because decisions stop float. Another type of float shop is an alternate path flow chart, and you can see here that there are decisions that are made that's can go either way. Value stream mapping is a specialized process map that comes from the lean toolbox. The purpose of value stream APP is to identify and eliminate waste value. Stream maps are typically done in the defined phase of a project in order to identify problem areas of a process. The stuff that we used to develop a value stream app start off by defining what the product family is. We then draw a current state map in a future state map. That future state map is typically one year from the time that we draw it. In order to keep it in perspective. Defining that product family is all based around processes. What are the common processes that those product families have in value stream mapping the value stream must be identified. The value stream is everything that it takes to get from Rama to get the product from raw material to finished goods to the customer. That includes both value added and non value added steps. A value stream manager would be the individual that's in charge of that value stream. There are two types of value street maps. The current state value stream app in the future state value stream app. Where do we want to go? 
We use those two together in order to develop an implementation plan: where are the problems, and what specific actions can we take to eliminate those problems or eliminate the waste? There is standard nomenclature and symbology associated with value stream maps. FIFO is first in, first out. Kaizen means continuous improvement, or small incremental improvements, so we would use a kaizen burst in an area where we know that we could make improvements or need to make improvements. Kanbans are signals that help to manage inventory. Electronic flow of information is shown with a skinny lightning bolt arrow. We may put safety buffer stock in there, called standard work in process; we can also use a supermarket to do that. Customers and suppliers are represented with the source symbol. A withdrawal arrow is represented with a half circle. Inventory is always represented with a triangle. Material movement is always represented with a fat arrow. Any value stream map that you see will have this common symbology. Another tool that we use is the XY diagram. What we're doing here is listing our input variables along the left side and our output variables along the top. So what are the process inputs, and how do they affect the process outputs? This allows us to quantify the impact that each input has on a weighted output.

20. Process Definition Part 3: Another tool that we use in the Measure phase is the FMEA, failure mode and effects analysis. It is a methodical risk analysis tool that helps us to quantify risks within our process. We talked earlier about the causal-factor fishbone diagram, and there is a relationship between failure modes and the fishbone diagram. There are certain terms associated with failure mode and effects analysis: the failure mode itself, meaning what goes wrong; the effect of that, meaning how we detect it or what its symptom is; and the cause of that failure mode. Using the FMEA tool, we may come up with several causes for any given effect. Current controls are what we currently have in place to either detect or prevent that failure from occurring. We are able to measure the effect of a failure mode using a severity scale: what is the impact of that effect on the process? That scale runs from 1 to 10. Next is occurrence: how often, or with what frequency, does that failure mode occur? And then detection, again on a scale of 1 to 10: what do we currently have in place to either prevent or detect that failure mode? There are three types of FMEAs. A design FMEA is used in the design of a product, process, or service in order to prevent failure modes and minimize risk when going into manufacturing or assembly. A process FMEA is for existing products, processes, and services, and allows us to identify risk using real-time data. And the system FMEA is the high-level FMEA for the entire system itself, covering multiple steps within a process. You can see here that there are 10 steps to an FMEA. Again, it is a risk analysis tool; what we're doing is quantifying that risk using something called an RPN, a risk priority number, which you can see in steps 7 and 10. The risk priority number is calculated by multiplying severity times occurrence times detection. The highest RPN you could have is 1,000 (10 times 10 times 10), and the lowest RPN you could have is 1 (1 times 1 times 1).
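The risk priority number defined above is just the product of the three ratings, each on a 1-to-10 scale. A minimal sketch follows; the severity and occurrence values mirror the worked example in the next part (7 and 8), while the detection rating of 5 is inferred from that example's RPN of 280 rather than stated in the course, and the "after improvement" ratings are invented.

    # Sketch: FMEA risk priority number (RPN) = severity x occurrence x detection.
    def rpn(severity: int, occurrence: int, detection: int) -> int:
        for rating in (severity, occurrence, detection):
            if not 1 <= rating <= 10:
                raise ValueError("ratings must be on a 1-10 scale")
        return severity * occurrence * detection

    print(rpn(7, 8, 5))    # 280 - the first cause in the worked example that follows
    print(rpn(7, 2, 3))    # 42 - hypothetical ratings after an improvement action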
21. Process Definition Part 4: Here is a good example of a filled-out FMEA, a failure mode and effects analysis. At the top of the FMEA you can see the header information: what are we analyzing, what process are we analyzing, who is doing it, when are we doing it — all those kinds of header-information items. The next layer down has the titles of all of the columns, and then we get into the actual information associated with this process. You can see that the severity is labeled as a seven, and two different causes have been identified here for the same potential effect. When we have multiple causes for the same potential effect, the severity remains the same; severity doesn't change depending on the cause. In this case, occurrence changes for those different causes: the first cause occurs more often than the second cause, with an eight and a five, respectively. We then look at the prevention and detection controls that we currently have in place. Detection is ranked on a scale of 1 to 10, as are occurrence and severity. We multiply those three numbers together and we get an RPN, a risk priority number. You can see that the first cause of that failure mode has an RPN of 280, while the second cause has an RPN of 175. We then take that quantified risk, prioritize it, and ask what actions we want to implement in order to reduce that RPN. We go back through and re-quantify our severity, occurrence, and detection, and you can see the dramatic drop in RPN from before the improvements to after the improvements. Here is a scale of rankings for severity, occurrence, and detection. This is simply a guideline or suggestion, starting at one and going to 10. As we go up in severity, you can see things becoming more impactful. Occurrence is also set so that as the rating gets higher, the failure happens more often. Remember, with detectability, the better the detectability, the lower the number; so going to the second half of the scale, you can see that a detectability of 10 means the control doesn't work. You must create your own severity, occurrence, and detection scales for your process. In this session, we talked about fishbone diagrams as a brainstorming tool for the causes of a certain problem. We talked about defining the existing process using different types of process mapping, SIPOC, and value stream maps; XY diagrams with inputs and outputs; and then the risk analysis tool of failure mode and effects analysis, calculating the risk priority numbers.

22. Six Sigma Statistics Part 1: Let's get into the statistics of Lean Six Sigma. In this session we will talk about basic statistics; we will not get too in-depth. Descriptive statistics help us to identify the data that we have and give us ways of explaining it. We will compare that to a normal distribution and talk about graphs. Data types: there are two categories, or umbrellas, of data types, quantitative and qualitative. Quantitative sounds just like quantity, and there is some number associated with it. Within quantitative data, there are two types: discrete and continuous. The qualitative side sounds more like a quality: there is some sort of attribute; it is not numeric data. There are three types of qualitative data: categorical, ordinal, and nominal. Nominal data, you can see, is names; they are independent categories. Ordinal data has some sort of rank associated with it: it could be tall, medium, short; it could be gold, silver, bronze. Interval data has some sort of relative value; there is a magnitude being measured. And ratio data has a proportion; there is a ratio scale. Discrete data is countable; an example would be the number of defects or the number of cars in a parking lot.
Continuous data means that there are possibilities in between two observations; it is a continuous scale. It is also referred to as variable data. Attribute data is on the qualitative side. Binary data is go/no-go, red/green, pass/fail; there are only two possible values, which is why it is referred to as binary. Nominal data is again names: they are categories independent of each other, but more than two, and you can mix the order up without affecting the integrity of the data. Ordinal data you cannot mix up; there is an order associated with it. Some basic statistics: as you can see here, the first phase of data is raw data, which is just your numbers. The second phase is descriptive statistics: we want to transform that data mathematically so that it explains something to us, so that it talks to us. There are two theories within statistics. One is deductive, where we have a known universe: we know what the population consists of, and therefore, when we take a sample, we know what that sample is a portion of. That is in contrast to inferential statistics, where we have an unknown: we have no knowledge, or very little knowledge, about the population. So when we take a sample from that population, we have to do everything that we can to ensure that it is a good representation of that population. Then the "how many?" question always comes up. You must be proactive in doing this, and not reactive or ignorant of the variation that this could bring.

23. Six Sigma Statistics Part 2: If we start with raw data, which is just the data set of numbers that you collect, we then start to analyze the data using descriptive statistics. These are specific mathematical formulas that are used to help describe the shape and the location: what does our data look like, and what is our data doing? You can see here some of the explanations of those descriptive statistics; we will go through each of them in depth. Measures of central tendency: where is the middle of my data? There are three measurements of central tendency: mean, median, and mode. The mean is often referred to as the average. Another descriptive statistic of our data is the median; the median is the middle number of our data set when the data are put in order, so 50% of our observations are below the median and 50% of our observations are above. If we have an even number of data points, we take the average of the two middle numbers. And then the mode is the third measurement of central tendency; the mode is the most frequently occurring number in our data set. We will run descriptive statistics on variable data only, not attribute data. Therefore we will end up with a mean and we will end up with a median, but we may not end up with a mode, or we may end up with multiple modes. If we have two modes, it is called a bimodal distribution; if we have more than two modes, it is called a multimodal distribution. There are advantages and disadvantages to each of these measurements. One thing I'd like to point out about the mean is that it is sensitive to outliers, meaning that if you have one data point that is either really high or really low compared to the rest of your data set, it will pull the mean in that direction. The median is not as sensitive, in the sense that an outlier is only one data point, so it will only shift the median by one or one-half of an observation in either direction. The mode is not affected by outliers at all.
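The measures of central tendency just described are available directly in Python's standard library. Here is a small sketch on an invented data set (not from the course), showing how a single outlier pulls the mean while barely moving the median.

    # Sketch: mean, median, and mode on a small made-up data set.
    import statistics

    data = [12, 14, 14, 15, 16, 17, 18]
    print(statistics.mean(data))      # about 15.14
    print(statistics.median(data))    # 15
    print(statistics.mode(data))      # 14, the most frequently occurring value

    # A single outlier drags the mean toward it but shifts the median only slightly.
    with_outlier = data + [60]
    print(statistics.mean(with_outlier))     # 20.75
    print(statistics.median(with_outlier))   # 15.5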
Measurements of dispersion, or spread: there are multiple measurements of dispersion, and range is one of them. Range is the easy one; it is the lowest number subtracted from the highest number. Another one that I want to point out is standard deviation. You can see here at the bottom of the slide the mean, standard deviation, and variance, along with the symbols that are used. The symbols vary between populations and samples: for the mean, the population value is referred to as mu, and for a sample it is x-bar; the standard deviation is sigma, and for a sample it is s; and the variance is the square of the standard deviation, so the standard deviation is the square root of the variance. The reason that we use and report both standard deviation and variance is that we can do mathematical operations with variances that we cannot do with standard deviations. Again, the range is the smallest number subtracted from the largest number. Quartile deviation is half the difference between the first and third quartiles; what that does is allow us to normalize the data more. Standard deviation is a little bit more complicated, and we have to make sure that our sample size is large enough. Mean deviation uses the arithmetic mean, and then the variance is the square of the standard deviation.

24. Six Sigma Statistics Part 3: Next is the standard distribution curve, or normal distribution curve, also referred to as the Z distribution. There are some defined characteristics associated with this distribution's shape. The first one is that the tails go out to infinity in each direction; the area under each tail becomes minuscule after a certain point, but they do continue on. The curve is symmetrical, and it is symmetrical around the mean, median, and mode, which are all the same value. The area under the curve has a value of one, and that represents 100% of the population. The peak of the curve represents the center of the process, again the mean, median, and mode. When we divide the normal distribution into three standard deviations on either side, that represents 99.73% of the population. We can add upper and lower specification limits to the standard normal curve to see where our distribution of data lies relative to the customer's acceptable tolerance. Here is a picture of the standard normal curve with upper and lower specification limits added to it. You can see that we've also included three standard deviations above the mean and three standard deviations below the mean; those three standard deviations represent the voice of the process, using our spread, as the standard deviation measures spread, and the specification limits represent the voice of the customer.

25. Six Sigma Statistics Part 4: Here is the normal distribution curve. It is a bell-shaped curve, and it is symmetrical. The standard deviations are displayed along the bottom, with zero being the middle of our data; the middle of our data is the mean, median, and mode. If we go up one standard deviation to the right and down one standard deviation to the left, that represents 68% of the population, meaning that 68% of the area under the curve is located between minus one standard deviation and plus one standard deviation. If we go plus or minus two, that encompasses 95% of the population. If we go plus or minus three, that encompasses 99.73% of the population. This class is called Lean Six Sigma; sigma is the designation for the standard deviation of a population. Therefore, one of the definitions of Six Sigma is plus and minus three standard deviations from the mean on the standard normal curve. You can see here that that encompasses 99.73% of the population.
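The spread measures and the 68/95/99.73 figures from the last two parts can be checked numerically. This is a minimal sketch on the same invented data set used above; scipy is assumed to be available, and nothing here comes from the course slides.

    # Sketch: range, sample variance and standard deviation, and the normal-curve
    # areas within +/-1, +/-2, and +/-3 standard deviations of the mean.
    import statistics
    from scipy.stats import norm

    data = [12, 14, 14, 15, 16, 17, 18]
    print("range =", max(data) - min(data))                   # 6
    print("sample variance =", statistics.variance(data))     # s squared
    print("sample std dev =", statistics.stdev(data))         # s, the square root of the variance

    for k in (1, 2, 3):
        area = norm.cdf(k) - norm.cdf(-k)
        print(f"area within +/-{k} sigma = {area:.4f}")
    # 0.6827, 0.9545, 0.9973 - the 68%, 95%, and 99.73% figures quoted in the lesson.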
26. Six Sigma Statistics Part 5: Graphical analysis is the third phase of data, after descriptive statistics. There are many different graphs that we can use to draw pictures of our data. Each graph communicates something unique, but the type of data that we have will determine what type of graph we can use. The box and whisker plot is a very powerful tool for comparing multiple things. The anatomy of a box and whisker plot is based on the percentiles of the data that you have. The bottom whisker represents the first 25% of your data points, so a quarter of your observations are represented by the length of that whisker. The next 25% of your data is represented by the bottom half of the box. The median is the line that splits the box. Be very careful with that, because a lot of students confuse the mean with the median; it is the median, because again the median has 50% of the data points below it and 50% of the data points above it. The percentiles continue with the upper half of the box being the next 25% and the upper whisker being the top 25% of observations. Outliers can also be easily displayed on a box and whisker plot. Here is an example of a box and whisker plot. You can see that the variation in samples one and two is much greater than the variation in sample three. You can also see that the variation of the first 25% of data points in sample one, which is the bottom whisker, is much less than the first 25% of data points in the second sample, also represented by its bottom whisker. But while the range of sample one is nearly the same as the range of sample two, the location of each distribution is very different, and that is demonstrated very well by the box and whisker plot. Histograms are another way that we can show the distribution of our data. The bell-shaped, or normal, distribution is in the upper left. A bimodal, or double-peaked, distribution is next to it on the right; it shows two peaks. Skewed data we will talk about in more depth shortly. With truncated data there has to be some sort of reason why the data points stop at a certain point, and the ragged plateau is a more random distribution. 27. Six Sigma Statistics Part 6: Here is a computer-generated histogram, and you can tell that it is a histogram, besides the fact that the title says so, because of two things: the frequency is always on the Y axis, and it is a bar chart where the bars touch each other. That is because with a histogram we are using continuous data. If you draw a curve that fits this particular data, you can see that it is generally bell shaped, approximating the standard normal curve. We measure symmetry using skewness. If we take the standard normal curve and pull it to the right, we are going to have right-skewed data, and it will have a positive skewness value; if we pull it to the left, we will end up with a negative skewness value. The thing to remember with skewness is that the skew is toward the tail. Kurtosis helps to measure shape, or peakedness: how peaked is our data? Our standard normal curve is considered mesokurtic and has a kurtosis of zero.
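Skewness and kurtosis are easy to compute rather than eyeball; here is a short sketch, assuming SciPy is available, using an invented right-skewed sample.

from scipy.stats import skew, kurtosis

# Hypothetical cycle-time sample with a long right-hand tail.
data = [4.1, 4.3, 4.4, 4.5, 4.6, 4.8, 5.0, 5.3, 6.2, 9.5]

print("skewness       :", round(skew(data), 2))      # positive: tail to the right
print("excess kurtosis:", round(kurtosis(data), 2))  # SciPy reports excess kurtosis by default,
                                                     # so the normal curve scores zero (mesokurtic)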
If I treat the standard normal curve like a rope and pull it up in the positive direction, I am going to end up with a positive kurtosis. It is going to be more peaked, and it is referred to as leptokurtic. If I again treat that standard normal curve like a rope and push down on it in the negative direction, I am going to end up with a flatter curve, called platykurtic. 28. Six Sigma Statistics Part 7: As I mentioned, a positively skewed distribution is right skewed, in that the tail is to the right, and a negatively skewed distribution is left skewed, in that the tail is to the left. You can see here that if the skewness has a value beyond negative one or positive one, we consider the distribution highly skewed. If it is between negative one and negative one half, or between positive one half and positive one, it is moderately skewed. If it is closer to zero, it is considered approximately symmetric. With kurtosis, the value hovers around three, or around zero if we use excess kurtosis: an excess kurtosis of zero is considered mesokurtic, platykurtic is the flatter curve, and leptokurtic is the more peaked curve. Here is another representation of skewness. If we start in the middle with the standard normal curve and treat it like a rope, pulling on the right side of that rope, we are pulling in the positive direction and creating a tail to the right. Typically the mode will be at the highest point of the curve, followed by the median and then the mean. If we pull on the left side of the rope, we pull in the negative direction and end up with left-skewed data, and now the mode, median and mean go the opposite way. One way to remember this is that they fall in alphabetical order or reverse alphabetical order: when the data is left skewed, you have the mean, then the median, then the mode. If you remember, in the descriptive statistics portion we talked about the mean being more sensitive to outliers than the median or the mode; this is a good representation of that. Stem and leaf plots are another way of organizing our data into a picture. The stem portion is the largest single place value, in this case the tens position, and the leaf portion is the subsequent positions. For this data set you can see that we have three numbers that start with one, which are 12, 14 and 18, represented with the stem being one and the leaves being two, four and eight. The same applies with the stem being two: in the twenties there are three observations, 23, 23 and 25. Scatter diagrams are a good way to represent our data when we have x's and y's. What we are trying to do is show some sort of relationship, which is measured by correlation. If we have a strong positive correlation, we are going to have a very tightly grouped, upward-sloping line of data points. With a scatter diagram we are allowed to have multiple y values for any given x value; therefore, we do not connect the dots in a scatter diagram. Here we can see a strong negative correlation with a downward-sloping trend. And when there is no trend at all, there really is no slope at all: we have no correlation, and as those data points spread out, our correlation value weakens.
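The correlation value mentioned here is the Pearson correlation coefficient, and it can be computed directly; a minimal sketch with made-up x/y pairs (statistics.correlation requires Python 3.10 or later).

from statistics import correlation

# Hypothetical x/y pairs, e.g. oven temperature versus measured hardness.
x = [150, 160, 170, 180, 190, 200]
y = [31.0, 32.5, 33.9, 35.2, 36.8, 38.1]

r = correlation(x, y)
print(f"Pearson r = {r:.3f}")
# r near +1: strong positive (upward-sloping) relationship;
# r near -1: strong negative; r near 0: no linear relationship.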
29. Six Sigma Statistics Part 8: Run charts, or trend charts, are a good way to show how a process is behaving over time. A run chart will always have time as the X axis. We will talk more about run charts in the control phase, but just going through the trends here helps us to understand whether there is something abnormal going on with our current state. Here you can see that seven or more points of the same value breaks one of the rules of the run chart. Clustering: you can see here how there are clusters of data points within each group of time, which could indicate some sort of lot-to-lot or supplier issue. A mixture is a systematic pattern in the data going up and down and up and down; it is not statistically likely that this would happen on its own. An oscillation chart is represented here, where the data is fluctuating rapidly, which indicates that the process is not stable. A trend shows seven or more points going either upward or downward; one of the common areas where I have seen this is tool wear. And then a shift is where the process is doing one thing, then moves onto one side of the center line and stays there. It is statistically unlikely that you will get eight or more points on one side of the center line. In summary, in this session we covered descriptive statistics, which is the second phase of data and allows us to define what our data looks like, using specific statistics that describe central tendency, dispersion (variation or spread), skewness, which measures symmetry, and kurtosis, which measures shape. We can compare our particular data to the normal distribution using descriptive statistics. We also covered the basics of the normal distribution, where one standard deviation represents 68% of the population, and so on. And then we covered the third stage of data, graphical analysis: having those pictures that help us to describe, evaluate and analyze our data in different ways. 30. Measurement System Analysis Part 1: One critical aspect of the measure phase that must be done before collecting data is measurement system analysis. In this session we will be talking about precision versus accuracy and the different elements of each, and we will talk about the different ways that we can measure them for both variable and attribute data. Starting off with accuracy: accuracy is closeness to the target. It is the true value; it is nominal. Precision is consistency: how often can I get the same answer under the same conditions? Using a simple target, you can see the picture here that shows a good, consistent cluster that is not near the target: precise but not accurate. In this next picture, you can see that I am accurate but not precise, meaning that if I average all the distances from the bull's eye, I will end up at the bull's eye. What we are looking for is something that looks more like this: both accurate and precise. Calibration deals with accuracy, bringing the gauge back to the true value; measurement system analysis deals with precision, getting consistent results. 31. Measurement System Analysis Part 2: There are three elements to accuracy: bias, linearity and stability. Bias is the difference between the observed value and the true value; a lot of times it is referred to as leaning, because bias is typically on one side of the accuracy line. Linearity is the accuracy of the measurement system across the measurement range. For example, if a scale that is intended for people has a measurement range of 100 to 300 pounds, linearity would be the accuracy across that 100 to 300 pounds.
If you put a dog or a child that weighs less than 100 pounds on that scale, the accuracy outside that intended range is not guaranteed. Stability is the accuracy of the measurement system over time: how well does that measurement system perform over time? MSA, measurement system analysis, is a controlled experiment, a controlled method of making sure that the measurement system itself collects good data. The common MSA is referred to as gauge R&R: gauge repeatability and reproducibility. We will discuss those two terms in a little bit. There are two types of variation within a process. There is the process variation, or part-to-part variation, and there is also the measurement system variation. The measurement system analysis is intended to eliminate, or at least minimize, that measurement system variation. You want to make sure that the data you collect reflects part-to-part variation, the process variation, and not variation within your measurement system. For example, if I step on a scale and it reads one amount, and then I step off, step back on, and it reads a very different amount, the measurement system, the scale itself, is not giving me good data; therefore I will not be making good decisions. There is accuracy within the measurement system, again bias, linearity and stability, and then there is measurement system precision, or consistency: repeatability and reproducibility. You can see here another definition of bias as well as linearity and stability. Repeatability and reproducibility are the precision aspects of a measurement system analysis. Repeatability is getting the same results under the same conditions with no changes: same person, same tool, same environment. Reproducibility is where one of those things changes. We change one thing; this is called OFAT, or one factor at a time. In order to really determine what is causing problems, we only manipulate one factor, changing one factor at a time. One easy way to remember the difference between repeatability and reproducibility is that it takes two to reproduce; therefore, you need to change something in order to have reproducible results. Again, repeatability is getting the same results under the same conditions. You can see here that column A is operator A: operator A is going to measure six parts, and then operator A will measure those six parts again. What that does is prove repeatability, the same operator measuring the same parts under the same conditions. As we go from column A to column B to column C, we are changing operators; therefore, we are testing our reproducibility. We are changing one factor, meaning the three operators; we are not changing the parts. Parts one through six are the same in each of those measurements.
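To make the two ideas concrete, here is an illustrative Python sketch against a made-up crossed layout like the one just described (three operators, the same parts, two trials each). The simple range-based indicators below are not the formal gauge R&R arithmetic from a statistics package; they are just a quick look at the two sources of measurement-system variation.

from statistics import mean

# Hypothetical crossed study: measurements[operator][part] = [trial 1, trial 2]
measurements = {
    "A": {1: [10.1, 10.2], 2: [9.8, 9.9], 3: [10.5, 10.4]},
    "B": {1: [10.3, 10.1], 2: [9.9, 10.0], 3: [10.6, 10.6]},
    "C": {1: [10.0, 10.2], 2: [9.7, 9.9], 3: [10.4, 10.5]},
}

# Repeatability: same operator, same part, same conditions -- how far apart
# are the repeat readings? (average range across trials)
repeat_ranges = [max(trials) - min(trials)
                 for parts in measurements.values()
                 for trials in parts.values()]
print("average repeat range:", round(mean(repeat_ranges), 3))

# Reproducibility: change one factor (the operator) -- how far apart are the
# operators' overall averages on the very same parts?
operator_means = {op: mean(x for trials in parts.values() for x in trials)
                  for op, parts in measurements.items()}
print("operator averages:", {op: round(m, 3) for op, m in operator_means.items()})
print("operator-to-operator spread:",
      round(max(operator_means.values()) - min(operator_means.values()), 3))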
32. Measurement System Analysis Part 3: Measurement resolution is the third R of precision. Resolution deals with using the right tool for the right job. Typically the ten-bucket rule is used for resolution, meaning your measurement tool must go one decimal place further than what you are trying to measure so that you can round properly. There are different types of measurement error associated with all of this variability: process variability versus measurement system variability. There are standard acceptable levels for the amount of variation within your measurement system versus the total variation within the study. Again, you can see accuracy versus precision on the left. We use crossed MSAs, as opposed to nested MSAs, for nondestructive testing; they consist of multiple appraisers and multiple trials on the same parts. The idea of starting with appraiser one and measuring all six parts in random order helps to eliminate bias. If we follow that same process and mix the order up for appraiser two and then appraiser three, that eliminates bias as well. We then go back and have appraiser one do the exact same thing again, then two and three; that helps with the repeatability aspect of the MSA. Here is the data. You can clearly see that operator one did all six measurements, not in one-through-six order, then operator two did all six measurements in random order, then three, and then we went back to operator one and repeated the process. Here is the output from Minitab for the gauge R&R. We will go a little more in depth into the charts for both the range and the X-bar, but you can clearly see on the bottom left that operator one for part one is, as we would say, quite out of control. Also, on the right, in the middle, there is a box and whisker plot comparing operator one across the three different trials; we talk about the anatomy of a box and whisker plot elsewhere in this course. Here are the numeric outputs. One thing I would like to point out is the source column: which source is adding variation to this process? Is it the repeatability piece, where operators agree with themselves; the reproducibility piece, where operators agree with each other; or is it part-to-part variation? What we really want is part-to-part variation to make up the bulk of the variation, or contribution to the variation, so that we get data we can rely on. Here are the numbers again with the percent contribution and slightly cleaner charts, and here is the summary. Again, what we are looking for is for the measurement system to contribute the minimum amount of variation to our process. Nondestructive testing is where operator three can measure the exact same part that operators one and two did. Destructive testing is where, once the sample is taken or the observation is made, that particular item can no longer be measured again. Now, if we talk about attribute data, we can also do MSAs on attribute data. Attribute data requires more samples because there are only a limited number of possible outcomes. One of the good things about attribute MSAs is that we know what the real answer is, the true value; therefore we can compare our appraisers to that true value. That is something we cannot do in a variable MSA. 33. Measurement System Analysis Part 4: Here is an example of an attribute MSA. We start by collecting data from one operator in three different iterations; in this case it is Tom. You can see here, in row three, that Tom does not agree with himself: on sheet one he says pass, on sheet two he says pass, and then sheet three says fail. So we are testing repeatability in this case. We then do the same thing for two more operators, Joe and Paul. Paul's first row, fail-pass-fail, again proves that he does not agree with himself; repeatability is an issue. We then tally that for each operator, and you can see that the second column says attribute: those are the true, or actual, values for those particular items. You can see in row three that the attribute is a fail, but Tom says pass, pass, fail, so the accuracy to the standard is not there: he is passing something that actually should be a fail, and he cannot even agree with himself. We do the same thing with Joe and Paul. You can see at the bottom that Paul agrees with himself 53.3% of the time, which is the percent appraiser score, and that the percent versus the attribute, which is how often he is accurate and agrees with the standard, is about 26% of the time. We then compile the results for all three operators so that we can test both repeatability and reproducibility as well as accuracy. In this case accuracy can be assessed with an attribute MSA, which cannot be done with a variable MSA.
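The percent-appraiser and percent-versus-standard figures above are simple agreement ratios, and a small sketch makes the arithmetic explicit. The pass/fail calls and the standard below are invented for illustration; they are not the Tom, Joe and Paul data from the slides.

# Each item is inspected three times by the same appraiser; "standard" holds the true values.
standard = ["fail", "pass", "pass", "fail", "pass"]
trials = [
    ["pass", "pass", "fail", "fail", "pass"],  # trial 1
    ["pass", "pass", "pass", "fail", "pass"],  # trial 2
    ["fail", "pass", "pass", "fail", "pass"],  # trial 3
]

per_item_calls = list(zip(*trials))  # the three calls made on each item

# Percent appraiser: how often the appraiser agrees with themselves across all trials.
self_agree = sum(len(set(calls)) == 1 for calls in per_item_calls) / len(per_item_calls)

# Percent versus standard: consistent with themselves AND matching the true value.
std_agree = sum(len(set(calls)) == 1 and calls[0] == truth
                for calls, truth in zip(per_item_calls, standard)) / len(per_item_calls)

print(f"percent appraiser      : {self_agree:.1%}")
print(f"percent versus standard: {std_agree:.1%}")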
Here are some acceptable ranges for both continuous data and attribute data MSAs. In this session we covered precision and accuracy, the difference between consistency and closeness to the target. We talked about the three aspects of accuracy: bias, linearity and stability. We talked about using a gauge R&R, a repeatability and reproducibility study; repeatability and reproducibility are the two elements of precision. And we talked about the difference between a variable MSA and an attribute MSA. 34. Process Capability Part 1: In the final session of the measure phase we are going to talk about process capability. In general, process capability is how my process performs against customer requirements. We will talk about capability analysis, explain the concept of stability, which is the process's performance over time, and talk about attribute and discrete capability as well as different monitoring techniques. Process capability uses specification limits defined by the customer: there is an upper specification limit and a lower specification limit. We can measure a process using sigma levels. Sigma levels are the standard deviations in relation to the standard normal curve: how many standard deviations are there within our process? Sigma is a universal measure of process performance. We can do this with variable data or attribute data; we do it with several different types of metrics, and sigma is the one universal measure. Process capability is defined in two ways. Cp is the first measurement of process capability. What it measures is: am I able to do it? We take the upper specification limit minus the lower specification limit and divide that by six standard deviations. What that means is the voice of the customer over the voice of the process: total tolerance over total spread. It asks whether the process can meet the customer requirements. You can see in the chart here the different values that correspond with a good Cp value versus a bad one; the higher the Cp value, the better the process is in relation to the customer requirement. Cpk is the second measurement of process capability. Cpk uses the mean in the formula, the sample mean or x-bar, which is a measure of central tendency. So where Cp measures "am I able to do it?", Cpk measures "am I actually doing it?" It takes the location of the process into account as well as the spread. With Cpk we have to calculate the Cp upper and the Cp lower separately in order to find out which one is the smaller value. You can also see in the table the different values for good and bad Cpk values. There are some rules associated with process capability: Cpk will always be less than or equal to Cp, and if Cpk equals Cp, we know that the process is perfectly centered.
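Here is a minimal sketch of the Cp and Cpk formulas just described; the mean, standard deviation and specification limits below are invented numbers, not values from the course slides.

def cp_cpk(mean, sigma, lsl, usl):
    """Short-term capability: Cp asks 'can it fit?', Cpk asks 'is it fitting?'."""
    cp = (usl - lsl) / (6 * sigma)       # spec width over total process spread
    cpu = (usl - mean) / (3 * sigma)     # room above the mean
    cpl = (mean - lsl) / (3 * sigma)     # room below the mean
    return cp, min(cpu, cpl)             # Cpk is the smaller of the two

cp, cpk = cp_cpk(mean=10.2, sigma=0.4, lsl=9.0, usl=11.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # Cpk <= Cp; they are equal only when centered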
I mentioned sigma levels earlier: sigma levels correspond with Cpk values. The sigma level uses the specification limits from the customer, takes the center of the data using the mean, or x-bar, and then divides by the spread, which is the standard deviation. You can see an example here. Long-term process capability uses indices referred to as Pp and Ppk. They are calculated the same way as Cp and Cpk; they just use the long-term standard deviation rather than the short-term standard deviation. You can see here a comparison of Cp and Cpk against Pp and Ppk: Cp and Cpk are short-term process capability, and Pp and Ppk are long-term process capability. 35. Process Capability Part 2: Here is a computer output of a process capability study. You can see on the left the lower specification limit and upper specification limit values; those are represented on the graph with the red lines, LSL and USL. You will also see on the left the sample mean, sample size and standard deviations. Remember that the sample mean is a measurement of the middle of my data, or central tendency, and the standard deviation is a measurement of spread, or how spread out my data is. You can see on the right of the graph the Cp value of 1.34, which is an acceptable Cp value. You will also see Cp lower, Cp upper and Cpk. The Cpk is the minimum of the Cp lower and Cp upper; that Cpk value is 1.3. This output also shows the within, or short-term, process capability, Cp and Cpk, versus the long-term Pp and Ppk. 36. Process Capability Part 3: You may say the Cp and Cpk material is all well and good, but what do I do once I actually collect the data and realize that my process is not capable? Well, there are several courses of action. Also, where else can I use this process capability analysis? The next thing we want to ask is: what are the characteristics that we need to measure within the process capability study, and what should they look like? They should be key factors in the quality of the product or process, they should be adjustable values, and we want to measure them within a controlled atmosphere. If there are outside factors influencing our data, then our data will not be telling us what it needs to. I mentioned earlier that the customer defines the specification limits, but who else could do that? We could go to industry standards, or to the engineering department that converts the voice of the customer into customer requirements. Having specification limits clearly defined and agreed to is a key point early on in determining process capability. You can see here that anything outside of those specification limits is going to be a rejection, and anything within those specification limits is accepted. That is why these limits need to be clearly established and agreed to. When we convert those standard deviations into Z values, what we get is a common scale; that is the reason we do the Z transformation. How do I convert my data, using mu and sigma, onto the Z distribution, which has a mean of zero and a standard deviation of one? You can see the equation here for the Z transformation: z = (x − μ) / σ. Once we do that Z transformation, we can apply the rules of the Z distribution, or standard normal curve, using the Z table, also called the standard normal table.
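Statistical software returns the same areas as the printed Z table; here is a small sketch assuming SciPy, with an invented observation and invented process parameters.

from scipy.stats import norm

x, mu, sigma = 11.0, 10.2, 0.4   # hypothetical observation, process mean and standard deviation
z = (x - mu) / sigma             # the Z transformation described above
print(f"z = {z:.2f}")
print(f"area below z = {norm.cdf(z):.4f}")  # about 0.9772 here, the same value a Z table lookup gives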
The way the Z table works is that the Z values run around the outside edge: the units and first decimal place of the Z value run down the left side, and the second decimal place runs across the top. If I am looking for a certain Z value, I simply go down until I reach the first decimal place that I need and then go across to the second decimal place that I need, matrix style. Everything inside the table is the probability, or area under the curve. Many Z tables have a drawing like this that describes the area under the curve. The equation may look a little overwhelming, but all we are talking about is the percentage of shaded area under the curve. You can see here that if we go to a Z value of positive 1.5, starting from the far left side, the area covered under the curve is about 93.3%, or 0.933. Another example: because the standard normal curve is symmetrical, and we know that the total area under the curve is one, we can work backwards from these percentages. You can see here that if I look up a Z value but want to know what is left in the upper tail, I can simply subtract the table value from one; in this example that leaves about 10%. A third way that we can use the Z table and Z distribution is to determine the area from one Z value to another Z value. In this particular example, I am trying to find the area under the curve between negative 1.25 and positive 0.37. Using the Z table, I can determine the area up to positive 0.37 and the area up to negative 1.25. If I subtract the area up to negative 1.25 from the area up to positive 0.37, I end up with the area under the curve between those two Z values, which is 53.9%. When we talk about nonconformance, the yield is whatever falls inside my upper and lower specification limits, and my target value is obviously going to be in the middle of that specification range. This is one way that I can predict what my process yield, or probability of defects, is going to be, using the Z table. Here are the equations to apply that, and the function within Excel that you can use to do this automatically. The 1.5 sigma shift is a convention applied in Six Sigma to account for the change in variation over time: when we talk about long-term process capability, there is assumed to be a 1.5 sigma shift that occurs over time. Here are the short-term versus long-term Z scores and the specification limits that go with them. Remember that your data must be normal in order to calculate a proper Cp, Cpk or Ppk. In a normality test, you can see your data plotted against the normal line; in a later session we will get into what the p-value means and how we can use it. In Excel, this is the equation, or function, that you can use to include that 1.5 sigma shift. Remember, for the short term we do not include the shift, because it is a convention about the long-term behavior of the process.
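As a sketch of the yield prediction described above: assuming the data are normally distributed, the fraction falling outside the specification limits comes straight from the normal curve. The process values below are invented for illustration.

from scipy.stats import norm

def fraction_nonconforming(mean, sigma, lsl, usl):
    """Predicted fraction of output outside the spec limits (normal data assumed)."""
    below_lsl = norm.cdf((lsl - mean) / sigma)
    above_usl = 1 - norm.cdf((usl - mean) / sigma)
    return below_lsl + above_usl

p_defect = fraction_nonconforming(mean=10.2, sigma=0.4, lsl=9.0, usl=11.0)
print(f"predicted defect rate: {p_defect:.2%}")  # roughly 2.4% for these numbers; yield is 1 - p_defect

# The 1.5-sigma-shift convention simply relates short- and long-term Z scores:
# Z_long_term = Z_short_term - 1.5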
We have been talking this whole time about variable data and how we use variable data for process capability. We can do the same thing with attribute, or discrete, data. We use the control charts, or run charts, that are associated with attribute data. One way to easily recognize attribute-data control charts is that they are all named with letters, p, np, c and u, rather than statistical terms like X-bar, R and s. One of the distributions that we can use for attribute capability is the binomial. You can see here how we set up the binomial capability analysis on the left of the slide using Minitab; the output of that Minitab function is on the right. We are using a p chart to do this. The other way that we can do this, for certain types of data, is the Poisson capability analysis, based on the Poisson distribution. This is where we use the u chart, and this is how we get it in Minitab. What comes out of Minitab looks very similar to the binomial output, but again, it is all based on the type of data that you have. Stability is a major concept within Six Sigma because it describes how the process performs over time. We want to make sure that the process is predictable over time and not out of control. There are two types of variation that will cause a process to perform out of control: common causes and special causes. Common causes are the natural flux of variation that always exists. Special causes are something assignable, something special and out of the ordinary. 37. Process Capability Part 4: Within a process there are two types of variation, common cause variation and special cause variation. Common cause variation is that inherent, natural variation; we use the standard deviation to measure that variation, or spread, of our data. If our data is normally distributed, we can go back to that standard normal curve, where plus and minus three standard deviations from the mean represent 99.73% of the process, as pictured here at the bottom of the slide. We can measure and track stability using control charts. Here is an example of an X-bar and R chart. They are two separate charts: the top chart is the X-bar, or sample mean, and the bottom chart is the R, or sample range. In this particular case our subgroup size is small, less than five. You can see the upper control limit and lower control limit for each chart; they are based on standard deviations. You can also see no specific or identifiable pattern in the data. This represents common cause variation, that natural bouncing around within plus and minus three standard deviations. Here is an example of a process capability sixpack. Starting in the upper left is the X-bar chart that we just referred to, with the R chart just below it. The next graph represents the subgroups. On the upper right you will see the capability histogram; remember, a histogram is for continuous data, so the bars touch. It also shows the upper and lower specifications. Be very careful not to confuse specification limits, which are defined by the customer, with control limits, which are defined by the process using the standard deviation. The middle chart on the right is the normality plot, making sure that our data fits the normal curve, and next is the capability plot using Cp and Cpk as well as Pp and Ppk. In this session we talked about process capability. We measure process capability using Cp and Cpk for the short term and Pp and Ppk for the long term, and we can use both variable data and attribute data for capability. We also talked about the different monitoring techniques that we use to make sure a process maintains its capability, or stays within those customer specification ranges. 38. Process Capability Part 5: This is the end of the measure phase. Remember, the measure phase is all about measuring the current-state process. Some of the things that we use to measure the current-state process are statistics, measurement system analysis and process capability. 39. Lean Controls - 1: In this session we will be talking about lean tools that we can use in the control phase to help sustain the gains.
Some of those lean tools are 5S, kanban and poka-yoke. A 5S program is critical to a visual factory. It is an opportunity for employees to take ownership of their work areas, and it is just as applicable to the office as it is to the shop floor. A 5S program is relatively easy to set up but very difficult to maintain; that fifth S, sustain, is the hardest S. A kanban system is a signal, and it helps to monitor inventory and production. As we walk through the different steps of a kanban system, you see that parts are used, a withdrawal kanban is taken, and a WIP, or work-in-process, kanban card goes upstream to create a pull system. This kanban system can run through the entire supply chain, all the way back to raw materials and outside suppliers. 40. Lean Controls - 2: Poka-yoke is Japanese for mistake proofing. This is a simple and inexpensive way of making the process more robust. An operator cannot necessarily be blamed for an error that the process itself allows. The best poka-yoke devices are ones that become part of the process and are not extra steps; they are simple and they are inexpensive. Errors occur in many ways, and that does not necessarily mean the operator is to blame. Therefore, what you are trying to do is identify the causes of the errors. A simple tool that we mentioned back in the measure phase is the cause and effect diagram for brainstorming these causes. Once they are identified, you can put mistake-proofing elements into the process so that the operator does not even have the opportunity to make that mistake. 41. Lean Controls - 3: Here are some examples of mistake proofing, or poka-yoking, processes. It should start in the design phase as much as possible, it should extend to tools and fixtures, and there should be some sort of work procedure. Any type of signaling mechanism is also a poka-yoke. There are several design improvements that we can make to mistake-proof our products and processes. The three key elements of any good poka-yoke are that it is simple, it is inexpensive and, ideally, it is part of the process, meaning the operator or the process does not have to consider an extra step; it is just part of how things are done. 42. Lean Controls - 4: Remember, the purpose of lean is to eliminate waste and create flow. Visual management is a major component of this in the lean philosophy. Visual management is not just visual, though; it takes other senses into consideration. One example of visual management is andon lights: colored lights that signal and communicate something. Jidoka is automation with a human touch: we are combining people and their skills with machines' abilities and technologies. Visual standards provide examples, through pictures or samples, of "this is right, and this is wrong." Visual management boards ask: how can I manage a process, an area, a cell using a standard board? Area information boards ask: what am I trying to communicate for that particular area? Kanban systems use kanban cards, and kanbans are signals, signals to go do something, so when you put that whole system in place, kanbans are part of that visual management process. Transparent machine covers and guards tie directly into the safety aspect of visual management. 5S is also a major tool used in creating visual systems. 43. Lean Controls - 5: Standardized work is a tool that is good for encompassing all of the lean controls that we have been talking about.
It is a way to ensure and maintain quality, productivity and safety. There are three elements of standardized work: takt time, which is about getting the customer what they want when they want it, based on customer demand and available time; work sequence, meaning the sequence of steps that need to be done in order for the work to occur; and SWIP, or standard work in process: how do I keep the line moving and flowing? In this session we talked about the different tools in the lean toolbox that we can use in the control phase. Some of them are 5S, kanban, poka-yoke, visual management and standardized work. 44. Statistical Process Control (SPC) - 1: Now let's talk about a Six Sigma tool out of the toolbox that will help with the control phase of our Six Sigma projects. It is called statistical process control. We will talk about how you collect data for it, the different types of control charts, and the anatomy of those control charts. The purpose of statistical process control is to monitor the process. There are two types of variation within a process, common cause and special cause, and we want to eliminate that assignable, or special cause, variation within the process. Statistical process control uses run charts or control charts, often referred to as Shewhart control charts. They allow us to see the average level of the quality characteristic, the basic variability of the quality characteristic, and the consistency of performance. The reason that we use statistical process control is to save money. A lot of times we can eliminate or reduce inspection, which is an appraisal cost of quality. If we have good statistical process control, we can use control charts to predict issues that are coming: we can see them before they actually occur and prevent them. We can reduce defects using statistical process control, and we can also monitor, or keep score of, our continuous improvement. The first thing that we need to do is understand what variables, or what measurements, we want to use. One of the major things that I see done wrong in industry with statistical process control is that people want to measure everything. It is not important to measure everything; you have to identify the critical characteristics that need to be measured, the inputs and outputs of a process. There are some things you should consider when selecting a control chart variable. We can use statistical process control for both variable data and attribute data: with variable data we are going to plot statistics, and with attribute data we will count defects or defectives. There is a multitude of control charts that we can choose from. The first thing we have to decide is what type of data we have. If we have variable data, then we have to decide the size of our subgroup, meaning: as I put a point on the control chart, what does that point represent? If that point represents one data point, or one observation, then I am going to use something called an I-MR chart, or individuals and moving range chart, which actually consists of two charts: the individuals chart and the moving range chart. If my subgroup size, the dot on the control chart, represents more than one observation but fewer than eight, I will use the X-bar and R charts; the X-bar chart plots the average of that subgroup and the R chart plots the range of that subgroup.
If my subgroup is greater than eight, I can use an X-bar and S chart; the X-bar chart again plots the average of the subgroup, and the S chart plots the standard deviation of that subgroup. The reason that we can use the standard deviation here is that the subgroup size is larger; when the subgroup size is smaller, the standard deviation is a less robust statistic for measuring spread, so we measure spread using the range instead. With attribute data we do not have those statistics to plot, so we have to make a decision: are we counting defects or defectives? Defects are discrete count data; defectives are binomial, or binary, data, meaning an item is either defective or it is not. If our subgroup size remains the same for defectives, meaning I am always going to measure ten units and, out of those ten units, find out how many are defective, then I am going to use an np chart. If my subgroup size varies, for example I am going to measure 5% of each day's production and that production number varies, then my sample size is going to vary and I am going to use a p chart. If I am measuring defects and my subgroup size stays the same, the same rule applies and I use a c chart; if my subgroup size varies, I use a u chart. Another form of statistical process control monitoring is the CUSUM chart. CUSUM charts are a little different from the Shewhart charts, and I will show you an example on the next slide, but you can see the characteristics here. Here is the CUSUM chart with the V-mask displayed. You can see that the process is out of control, as a data point is displayed above the upper arm of the V-mask. One last control chart used in statistical process control is the EWMA, or exponentially weighted moving average. The benefit of using the EWMA statistic is that I will be able to predict a little better than with Shewhart control charts, which are more reactive. 45. Statistical Process Control (SPC) - 2: Here is an EWMA chart. The red X's on the chart are the raw data, or observations. The black squares connected by the green line are the EWMA statistics over time; you can see how the EWMA takes the nervousness out of the actual observations. You will also see that the process is in control, because it does not fit any pattern just yet and there are no points outside of the control limits. You will, however, notice the last five observations forming an upward-trending line, and the operators should take note of a potential issue. With statistical process control, collecting data is the main aspect: if you do not have correct, accurate data, then your SPC process will be broken. You have to make sure that you have the correct control chart and that you are tracking the right thing, and then understand the different aspects and elements of control charts. This is the end of the control phase. The control phase is where we make sure that we maintain the gains that we have worked so hard to get. We do that using control plans, and the main elements of those control plans are reaction plans. The different tools that we can use to keep the process in control and monitor it come both out of the lean toolbox and out of the Six Sigma toolbox, with statistical process control.
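The EWMA statistic plotted on the chart described in this session is a simple recursive smoothing of the raw observations. Here is a minimal sketch with an invented series of readings and a typical smoothing weight of 0.2.

def ewma_series(observations, lam=0.2):
    """Exponentially weighted moving average: z_t = lam * x_t + (1 - lam) * z_(t-1)."""
    z = observations[0]   # a common starting choice is the first reading or the process target
    smoothed = []
    for x in observations:
        z = lam * x + (1 - lam) * z
        smoothed.append(z)
    return smoothed

readings = [10.1, 10.3, 9.8, 10.0, 10.4, 10.6, 10.7, 10.9, 11.0]
print([round(v, 2) for v in ewma_series(readings)])
# The smoothed series ignores most of the point-to-point noise but drifts upward
# with the sustained trend in the last few readings, which is what the chart flags.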
46. Control Plan - 1: Welcome to the control phase of Lean Six Sigma. This is the final phase of the DMAIC process. This is where we do the final implementation of the improvement plans, put control plans together and make sure that we maintain the gains we have created throughout the project. In this phase we will be talking about Six Sigma control plans and the structured process that we use to create them, some types of controls that we can use coming out of the lean toolbox, and using statistics to control the process, otherwise known as SPC. In this session we are going to talk about cost benefit analysis, the elements of the control plan, and what to do when a process is out of control, referred to as the reaction plan. In the define phase we put our projected financial savings together; now that we are in the control phase, we are starting to see some of those financial benefits. One thing that we can use is called a cost benefit analysis. What this does is allow our champion and other stakeholders to decide whether to go forward with full implementation of the improvement plan or not: is the project going to pay off with the projected benefits or not, and how should we move forward? As for the elements of a control plan, there are three different types of control plans that are identified: the prototype control plan, which is used in the early stages of an improvement process; the production control plan, which is the ongoing version; and the pre-launch control plan, which is used as we move toward full production. 47. Control Plan - 2: So often people get excited when they finish the improve phase and say, "I'm so happy that we were able to improve the process," and they forget about putting tools in place to actually control the process. The control plan is one of the major aspects of a Lean Six Sigma project. The control plan is owned by the process owner; it is not owned by the project leader. You must always involve the project team as well as the process team in the control plan. Any work instructions or procedures that are affected by the improvements need to be updated or created, and training must be provided to any personnel who are affected by these improvements. We want to make sure that the control plan training is effective, so that it does not become a lost document and a lost phase. Attaining agreement between team members and process owners is an ongoing thing, and there is a trust factor associated with that as well. There are several inputs associated with the process that we have covered in the project; all of these need to be captured in the control plan as completely as possible. The steps to creating a control plan start with the control number: if the control plan is a controlled document, it will have a number associated with it and go into the database. Team members are critical because they have been part of investigating the process, measuring the current state and making improvements; make sure that the control plan has the right people associated with it. Record the original date when the control plan was created and any revisions that have been made to it. The control plan should last the life of the process.
It should always have some sort of key input variable, because that is the knob we are going to turn, and some sort of key output variable, because that is what will be affected by the input variable, along with any special characteristics associated with it and the specifications, which are defined by the customer. And then, how are you specifically measuring that metric? Start with the gauge capability, which is the gauge R&R: make sure that accuracy and precision are covered for whatever measurement tool you are using. Ensure that the sample size is appropriate and that the sampling technique is appropriate. Record the initial capability of the process so that it can be used going forward. Who is responsible for doing the measurements, and how are we going to control that? The last piece is referred to as the reaction plan, also called the out-of-control action plan: what happens if our process ends up out of control? There are documents associated with the control plan and the reaction plan; make sure that the control plan also covers emergency situations. The out-of-control action plan is very specific to the operator who identifies the out-of-control situation. Here are some aspects of the reaction plan: keeping it simple and easy to follow is one of the main things, and formalizing it helps it to be followed more rigorously. 48. Control Plan - 3: Here are some guidelines for documenting a reaction plan. The main points that I want to drive home are to keep it simple, but keep it clear: the out-of-control action plan is exactly that, the actions I need to take when the process is out of control. The other main points are that the documentation should be near the process; this is not something that should be stored away in a binder, in a file cabinet, or some place that is not accessible. The reaction plan and control plan should be measuring key points within the process, so they are not supposed to be some sort of obtrusive process that takes away from the actual production of whatever you are trying to do. The last item here is the SWOT analysis, which I have not mentioned yet; it stands for strengths, weaknesses, opportunities and threats. So in this session we talked about cost benefit analysis, basically asking whether our project is delivering what we planned. We talked about the different elements of a control plan; remember, the purpose of a control plan is to monitor the process to make sure that we maintain the improvements we put in place. A major element of any control plan is the reaction plan, or out-of-control action plan: what do I do when my process is out of control, or going out of control?