Transcripts
1. Easy Statistics: Non-Linear Regression Short Promotional:
2. What is Easy Statistics: Non-Linear Regression?: Welcome to Easy Statistics: Non-Linear Regression. What is Easy Statistics? Easy Statistics is designed to provide you with a compact and easy-to-understand course that focuses on the basic principles of statistical methodology. No prior knowledge is required when following an Easy Statistics course; you do not need to have a background in statistics to follow this course. In fact, the less you know, the more you gain. Importantly, there are almost no equations in this course. The emphasis of this course is on understanding statistical content intuitively, and in general my focus will be on the application and interpretation of statistics. I want to teach participants how to use and how to interpret statistical results without needing to understand the underlying mathematical mechanics.
3. What is Non-Linear Regression?: What is non-linear regression? Non-linear regression is an important regression technique that is used much more frequently than you might think. However, non-linear regression is not about fitting curves to data; linear regression can also do that. Non-linear regression is about non-linear parameters. It means that regression coefficients are not linearly related to changes in Y. In simple terms, a one-unit change in X can lead to a one-unit increase in Y, or it could lead to a 1.5-unit increase in Y; the change is not constant. The most popular non-linear regression techniques are logit and probit regression. We'll focus on these, but there are also more complex variations. Logit and probit models are often used when the dependent variable is not continuous, although this doesn't have to be the case to use non-linear regression. In practice, many regression models that contain non-continuous dependent variables are estimated with non-linear regression models. Non-linear regression models are used in many sciences, including economics, sociology, psychology, politics and medicine. It is often used to analyse choice: why do people vote for a particular party, or which mode of transport do people take?
4. What are the main learning outcomes?: What are the main learning outcomes? To learn and understand the statistical intuition behind non-linear regression without needing to know about complicated equations; to learn how to apply logit and probit regression models to data; to be able to comfortably interpret and analyse output from such models; and finally, to learn some extra tips and tricks that will help you when dealing with non-linear regression models.
5. Who is this course for?: Who is this course for? This course is for academics and students of any level. It doesn't matter whether you're studying at school or university: if you need an easy introduction to non-linear regression, this course is for you. But this course is also for practitioners, such as business users who deal with quantitative analysis at their workplace. This course is also for those working in government, especially those who are involved in policy analysis. Finally, this course is aimed at anyone who has an interest in, or needs to engage with, non-linear regression.
6. Prerequisites: What prerequisites are needed? No maths or statistics knowledge is required to follow or get the most out of this course. Some Stata knowledge may come in handy for the practical application part of this course, but this is not required. Stata is a statistical software program that allows users to estimate many different types of regression models; I will use Stata to demonstrate regression examples. What helps is a keen interest in understanding how measurements might be related to each other. Regression is all about measuring quantitative variables against each other: if you want to know how Y is related to X, then this is the correct course for you.
7. Using Stata: Using Stata. In this course, I'll be using Stata to demonstrate examples. Stata is a purchasable statistical software package, and you can find out more information at www.stata.com. There are many courses on how to use Stata, should you be interested. In this course I will not teach Stata, but focus on the interpretation of the output. However, if you are interested in Stata and in replicating the examples, I have attached the relevant code files to this course. All training data used in this course can also be downloaded.
8. What is Non-Linear Regression analysis?: What is non-linear regression analysis? Just like linear regression analysis, non-linear regression analysis is a statistical technique that examines the relationship between one dependent variable, Y, and one or more independent variables, X. Alternative terms used for the dependent variable are outcome, response or endogenous variable. Alternative terms used for independent variables are predictor, explanatory or exogenous variables. Like linear regression models, non-linear regression models are often written in the form Y = X1 + X2 + X3, etc. The last term will be an error term, often denoted by e, which captures everything that is missing. We'll avoid writing too many equations in this course, so we leave this expression like this. Variables can take many forms in non-linear regression analysis. They can be continuous; in other words, they can be measured anywhere on a number line, to many decimal points. Data can be in an integer format, such as 1, 2 or 3. Data can also be in a binary format, such as 0 or 1. Sometimes data are ordinal: ordinal data is categorical data that is ranked, such as Likert scales. Finally, data can also be nominal: this is categorical data that is unordered, for example different modes of transport. The key difference with linear regression is that for non-linear regression models, the dependent variable is often not continuous. Non-linear regression is primarily used when the dependent variable Y is measured as an integer, binary, ordinal or even nominal variable. This obviously applies to a lot of variables in real life, and this is one of the reasons why non-linear regression methods are so common.
9. How does Non-Linear Regression work?: How does non-linear regression work? Non-linear regression assumes that the parameters relate to the dependent variable in a non-linear way. The parameters, or coefficients, are what regression analysis estimates; for example, Y = 1 × X. In a linear world, this means that for every one-unit change in X, Y would increase by one unit. However, in a non-linear world, we can't be sure what the change in Y is: the change in Y depends on the specific value of X. It could be more than one, or it could be less than one. The exact value will depend on the type of non-linear transformation used. This, unfortunately, makes interpreting non-linear regression models much harder. The raw coefficients often have no reasonable interpretation. That is why it is important to understand how the coefficients from non-linear regression models can be re-transformed into something useful. Often this is done using marginal effects computation.
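To make the "change depends on where you are" idea concrete, here is a short sketch in Python (rather than the Stata used in the course) with a hypothetical coefficient of 1 on X. It shows that the same one-unit increase in X moves the predicted probability by very different amounts at different starting values of X:

```python
import math

def logistic(z):
    """Logistic CDF: maps any real number into the (0, 1) interval."""
    return 1.0 / (1.0 + math.exp(-z))

beta = 1.0  # hypothetical coefficient on X, for illustration only

# In a linear model, a one-unit change in X always shifts Y by beta.
# In a logit model, the same one-unit change shifts the probability
# by a different amount depending on where on the curve we start.
for x in (-4.0, 0.0, 4.0):
    change = logistic(beta * (x + 1)) - logistic(beta * x)
    print(f"at X={x:+.0f}, a one-unit increase in X changes P(Y=1) by {change:.3f}")
```

The change is small in both tails and largest near the middle of the curve, which is exactly why a single coefficient cannot summarise the effect.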
10. Why is Non-Linear Regression analysis useful?: Why is non-linear regression analysis useful? Like linear regression, non-linear regression is used to answer questions that require quantitative evidence. Like linear regression, it allows us to examine the effect of an explanatory variable on a dependent variable, controlling for other factors. It is used for hypothesis testing and for predictions, very much like linear regression. However, non-linear regression has a significant advantage with certain data types. Specifically, it helps us avoid out-of-bounds predictions. For example, if a dependent variable is measured as a binary variable, in other words 0 or 1, linear regression can predict probabilities of greater than one or less than zero. But how can we have less than a 0% chance of doing something? Alternatively, dependent variables like time require positive predictions only. If someone is given a drug, how much longer will they live? Well, at minimum, it must be zero or more, right? So predictions from such models should not be below zero. Non-linear transformations ensure that we don't predict nonsense from our regression models.
11. Types of Non-Linear Regression models: What types of non-linear regression models exist? Quite a lot, actually. Whilst linear regression models such as ordinary least squares remain the most commonly used regression methods, it turns out that many popular regression methods are actually non-linear. The most famous examples of non-linear regressions are probably logit and probit regression models. These are regression models for binary dependent variables: the dependent variable is often measured as 0 or 1. Common examples include voting decisions, being unemployed, educational attainment, choosing to do something, etc. Logit and probit models use non-linear transformations to ensure that model predictions stay within the zero and one boundary. Both models are very similar, but use slightly different non-linear transformations. To analyse dependent variables that have ordered categories, such as Likert scales, we often use ordered logit and ordered probit models. These are very similar to logit and probit models and use similar non-linear transformations. The additional trick that these models use is to include cut points in their modelling, which estimate where decisions are cut so that predictions into different categories can be made. Another class of non-linear models are multinomial logit models. These are often used when a dependent variable consists of non-ordered, or nominal, categories. A famous example includes what mode of transport people take: the bus, the car or the train. Note that multinomial probit models do exist, but they're not frequently used. However, non-linear models do not only work on categorical choice models. Some data types require that predictions are bounded between zero and positive infinity; in other words, models should not predict negative values. Examples include count regression models and time regression models. Both require transformations so that the predictions from these models are not negative.
The Poisson and negative binomial regression models are common examples for count data, whilst the Cox proportional hazards model is a common example when time is a dependent variable in the regression.
12. Maximum Likelihood: Maximum likelihood. Whilst ordinary least squares is estimated by solving the least squares equations, most non-linear models are estimated using maximum likelihood. Maximum likelihood is a numerical method that estimates the values of the parameters that have the greatest likelihood of generating the observed sample of data. Maximum likelihood is often estimated iteratively, which means the computer performs many calculations to narrow down the best possible parameters. I'm not going to explain this technique in a lot of detail, but here are some basic tips that should be observed when dealing with maximum likelihood estimation. Maximum likelihood should be used when samples are larger than 100 observations; 500 or more observations is best. More parameters require more observations: a rule of thumb is that at least 10 additional observations per extra parameter seems reasonable. However, this does not imply that the minimum of 100 observations is not needed. Maximum likelihood estimation is more prone to collinearity problems, so much more data is needed if explanatory variables are highly collinear with each other. Similarly, little variation in the dependent variable, in other words too many outcomes at either one or zero, can also lead to poor estimation. And finally, some regression models with complex maximum likelihood functions require more data: probit and logit models are not among these complex models, but models like multinomial logit models are very complex.
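The core idea of maximum likelihood can be shown in miniature. This is a toy Python sketch (invented 0/1 data, not the course's Stata examples): it estimates the single parameter of a Bernoulli variable by trying candidate values and keeping the one with the highest log-likelihood, which is the same principle iterative routines use for logit and probit models:

```python
import math

# Hypothetical 0/1 sample; real maximum likelihood routines search
# iteratively, but a simple grid search shows the same principle.
data = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]

def log_likelihood(p, ys):
    """Sum of log Bernoulli probabilities for candidate parameter p."""
    return sum(math.log(p) if y == 1 else math.log(1 - p) for y in ys)

candidates = [i / 100 for i in range(1, 100)]
best_p = max(candidates, key=lambda p: log_likelihood(p, data))
print(best_p)  # the sample mean, 0.7, maximises the likelihood
```

The best candidate is the sample proportion of ones, which is exactly what the theory predicts for this simple model.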
13. Linear Probability Model: Linear probability model. Let's explore why non-linear regression might come in handy by examining the linear probability model. The linear probability model is a standard ordinary least squares regression applied to a model where the dependent variable Y is binary. But before we continue, please note the following. The linear probability model is often used to demonstrate why it is a bad idea to run linear regression through categorical data. However, often the results from the linear probability model will be very similar to the final marginal effects from a logit or probit model. I will demonstrate this later, but for now be warned that whilst we often say that the linear probability model is wrong, the truth is probably more complex: it can be surprisingly useful when used with the right amount of knowledge. Also, be aware that if you ever do decide to use the linear probability model, you need to use robust standard errors, as the linear probability model causes heteroskedasticity. Imagine for a moment that we have a very simple data set that contains only two variables, Y and X. We're interested in the relationship between Y and X. Imagine that Y is measured as a binary variable, either 0 or 1, and X is measured as a continuous variable. Before we go further, let's see how this would look on a graph. It would look something like this: each continuous X observation is associated with either a 0 or a 1 Y observation. A scatter plot of such data is probably not the best way to visualize this kind of data, but bear with me. Because the sample size is not enormous, we can just about make out that observations with higher values of X are more likely to have a value of Y that equals one, whilst observations with lower values of X appear more likely to have a Y value of zero.
This tells us that there seems to be a positive relationship between X and Y: increases in X lead to a higher chance of Y being one. So far, so good. But of course, doing this visually has its limits. We don't know what the exact relationship between Y and X is. We could plot the relationship between Y and X using a non-parametric fit, like so. This method clearly tells us there is a positive relationship between Y and X. Initially the relationship is non-existent; then, at a certain value of X, the relationship becomes positive; after a certain higher value of X, the relationship flattens off again and becomes non-existent. Great. However, we've already discussed the problems with non-parametrics in a previous course. We want to be able to parameterise the relationship between Y and X so that we can compare it to other data or give this information to somebody else. How can we do that? One way is to use ordinary least squares and run a simple linear regression through our plot. That would result in something that looks like this. The linear fit clearly establishes a positive relationship between Y and X. The estimated slope coefficient of this regression is approximately 0.23. In other words, for every one-unit increase in X, the probability of Y being one increases by 23 percentage points. Great. Next, let's plot the estimated predicted values of Y from our simple regression model. Ah, there seems to be a problem with our model: the predictions from our linear regression model result in three observations having a predicted Y value above one, and one observation having a predicted Y value below zero. This is the problem of the linear probability model. Its linear nature, by definition, predicts values outside our bounds. That doesn't make sense; such results are nonsensical. It is not possible to have a probability of voting for a party of 120%.
Unfortunately, no matter what the relationship between Y and X is, any linear relationship will at some point predict Y values that go out of bounds. In this example here, I've drawn a slightly shallower regression slope through the same data, but you can still see that at some point it will go out of bounds. There is no escaping this problem with linear regression: something will always be a little bit wrong. Clearly, we need a better kind of model.
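The out-of-bounds problem is easy to reproduce. Here is a minimal Python sketch (made-up numbers, not the course's data): it fits ordinary least squares to a binary Y using the closed-form slope formula, then predicts at a large X value, and the fitted line happily exceeds 1:

```python
# Hypothetical toy data: a binary outcome against a continuous X.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 1, 0, 1, 1, 1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form OLS estimates: slope = cov(x, y) / var(x).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

prediction = intercept + slope * 10  # predict at X = 10
print(round(prediction, 3))  # a "probability" well above 1
```

Any straight line through 0/1 data will behave this way once X moves far enough in either direction.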
14. The Logit and Probit Transformation: The logit and probit transformation. The answer is to use a non-linear model. Specifically, in this case, we need to use some kind of transformation that makes the linear relationship between Y and X non-linear. The two most commonly used transformations for our previous problem are the logit and probit transformations. Both transformations ensure that the relationship between Y and X remains bounded within zero and one; in other words, there can be no out-of-bounds predictions from these regression models. The mathematics behind these transformations can look a little bit complex, so let's explore both transformations visually. Here is the estimated relationship between Y and X from a logit fit and a probit fit. You can see that both are very similar in how they relate Y and X together. In general, both have a very similar shape and offer the same kind of predictions. There's often very little reason to prefer one over the other, and both are frequently used in applied work. Both models predict Y values that are now bounded between zero and one. Take a look: the predicted values of Y from both the logit and probit regressions stay within the zero and one bounds of Y. Fantastic. It looks like we've solved our problem. The linear probability model is out, and non-linear models are in.
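For those curious what the two transformations actually compute, here is a small Python sketch (my own illustration, not the course's material). The logit uses the logistic CDF and the probit uses the standard normal CDF, written here with the error function from the maths library; both squash any real-valued "linear index" into the (0, 1) interval:

```python
import math

def logit_p(z):
    """Logistic CDF: the logit transformation."""
    return 1.0 / (1.0 + math.exp(-z))

def probit_p(z):
    """Standard normal CDF: the probit transformation."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

for z in (-5, -1, 0, 1, 5):
    print(f"z={z:+2d}  logit={logit_p(z):.4f}  probit={probit_p(z):.4f}")
# However extreme z gets, both predictions stay between 0 and 1.
```

Notice how similar the two columns of numbers are, which is why the choice between logit and probit rarely matters much in practice.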
15. Latent Variables: Latent variables. Non-linear models are generally more difficult to interpret than linear models. Let me explain why. Many non-linear models, like logit and probit models, assume that there is a linear process underlying the dependent variable. What does that mean? Well, imagine your decision to eat: to eat or not to eat. How do you decide? Logit and probit models assume that underneath your decision to eat or not to eat is a continuous and infinite hunger scale. If you're not hungry, you don't eat. If you are a little bit hungry, you don't eat. If you're a little bit more hungry, you still don't. But at some point your hunger becomes too much and you decide to eat. This is how logit and probit models work: they assume that every choice decision is the realization of people passing some invisible cut point on a hidden continuous process. We call such a process a latent process. We often denote such a process with a variable called Y*. In our equations, Y* will be a function of many factors. For example, if Y* is hunger, it might be a function of exercise. If exercise is measured as X, then the relationship between exercise and hunger might have a positive coefficient of one. However, Y* is always hidden from us: we don't see it, and we can never observe this process. To make things more difficult, this is what logit and probit coefficients relate to. They recover coefficients that relate to Y*. This means that probit and logit coefficients have no natural interpretation. They simply don't make sense: a one-unit increase in X will lead to a one-unit increase in unseen hunger? That doesn't make sense. So what do we observe? We observe the realization of Y*, often called
Y. In other words, did somebody eat or not? To figure out how X is related to the realization of choice, we need to transform the coefficients from non-linear models such as logit and probit regression into something useful, and this is often done using marginal effects.
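The latent-variable story can be simulated in a few lines of Python (hypothetical numbers, purely for intuition): an unseen continuous score Y* is generated linearly from X, but all we ever get to observe is the binary realisation Y = 1 when Y* crosses the cut point:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

cut_point = 0.0
observations = []
for _ in range(5):
    x = random.uniform(-2, 2)
    y_star = 1.0 * x + random.gauss(0, 1)  # latent "hunger": hidden from us
    y = 1 if y_star > cut_point else 0     # the only thing we observe
    observations.append((round(x, 2), y))

print(observations)  # only (x, y) pairs survive; y_star is discarded
```

The coefficients a logit or probit regression recovers describe the hidden y_star line, which is why they need transforming before they say anything about the observed 0/1 outcome.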
16. What are Marginal Effects?: What are marginal effects? Marginal effects are slope coefficients; sometimes they're also called partial effects. In linear regression, the estimated coefficients are marginal effects. That is because they have a constant slope that doesn't change: every one-unit increase in X will lead to a beta change in Y. However, in non-linear regression such as probit and logit regression, slopes constantly vary. There is no single marginal effect. This is why we must compute marginal effects at particular points. Two types of computation are most popular: effects computed at the mean of X, and the average effect of all effects computed along every point of X. These are the most common marginal effects in practice, but users can also choose any other point that makes sense to them. Let me demonstrate this visually. Here we are back with one of our non-linear fits of Y against X. In this case, the fit is a probit fit, and each data point has a predicted value of Y along this fit. We observe that as X increases, so does the probability of Y being one. We also note that the relationship between X and Y is not linear. To understand the effect of X on Y, we compute marginal effects: the slopes at respective points of X. As you can see, the slope changes constantly. At low values of X, the relationship between Y and X is almost flat; at average values of X, the relationship is strongly positive; at high values of X, the relationship is flat again. So we need to choose some value of X at which to compute our marginal effects. The mean of X is usually a good value, and in this particular case the slope coefficient there is approximately 0.30. This means that the effect of X on Y is as follows: a one-unit change in X causes a 30 percentage point increase in the probability of Y being one. Just remember that this relationship does not hold across all values of X: at higher values of X,
further increases in X lead to much smaller increases in the probability of Y being one.
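The two popular computations can be sketched in Python (hypothetical coefficient and sample, not the course's Stata output). For a logit model, the slope at any point is beta × p × (1 − p), where p is the predicted probability at that point, so both quantities are easy to compute by hand:

```python
import math

beta = 1.0                          # hypothetical logit coefficient
xs = [-3.0, -1.5, 0.0, 1.5, 3.0]    # made-up sample of X values

def p(x):
    """Predicted probability at X = x."""
    return 1.0 / (1.0 + math.exp(-beta * x))

def slope(x):
    """Logit marginal effect at X = x: beta * p * (1 - p)."""
    return beta * p(x) * (1 - p(x))

# 1) Marginal effect at the mean of X:
mean_x = sum(xs) / len(xs)
print(f"effect at the mean: {slope(mean_x):.3f}")

# 2) Average marginal effect: the slope at every observation, averaged.
ame = sum(slope(x) for x in xs) / len(xs)
print(f"average marginal effect: {ame:.3f}")
```

The two numbers differ because the effect at the mean sits at the steepest part of the curve, while the average marginal effect also includes the flat tails.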
17. Dummy Explanatory Variables: Dummy explanatory variables. So far, we've established that the coefficients coming out of a non-linear model require a bit of extra work to make sense of. However, we've only looked at a single continuous variable. To be precise, we looked at a model along the lines of Y = beta × X + an error term, where X is a variable that is measured continuously. What about if we include an additional dummy variable in the model? In other words, we want to estimate a model along the lines of Y = beta × X + beta × dummy variable + an error term. Dummy variables are binary variables that often take the numbers 0 or 1, a bit like our dependent variable Y. In linear regression, coefficients on dummy variables are sometimes called intercept-shift coefficients, because they change the intercept; in other words, they move the entire relationship between X and Y upwards or downwards. However, in non-linear models, their effect is not constant. They still shift the non-linear relationship between Y and X up or down, but the size of the shift is not constant. Let me show you this graphically. In this example, we continue to fit a non-linear fit on our observed data. Y is measured as a binary variable and X is measured continuously. However, the actual model underneath is from a regression model that also includes a dummy variable. Dummy variables act as an intercept shift: observations with a dummy value of one, say these represent men, have a higher probability of observing a Y value of one for any given value of X. However, as can be clearly seen here, the size of this effect varies depending on where at X we are. At low values of X, the effect of the dummy variable is almost negligible; at middle values of X, the difference between the two curves is high; and finally, at high values of X, the effect of the dummy variable decreases again.
This all makes sense because we continue to bound the relationship between Y and X between zero and one via a non-linear, in this case logistic, transformation. Therefore, any stepwise effect from a dummy variable must also be non-linear, to continue to ensure that we don't go out of bounds with our predictions.
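A few lines of Python make the varying shift visible (made-up coefficients, purely illustrative): the dummy always adds the same 0.8 to the linear index, yet the resulting gap in probability depends on where on the curve X puts us:

```python
import math

beta_x, beta_dummy = 1.0, 0.8  # hypothetical coefficients

def p(x, dummy):
    """Predicted probability from a logit with one X and one dummy."""
    return 1.0 / (1.0 + math.exp(-(beta_x * x + beta_dummy * dummy)))

for x in (-4.0, 0.0, 4.0):
    gap = p(x, 1) - p(x, 0)
    print(f"at X={x:+.0f}, the dummy shifts P(Y=1) by {gap:.3f}")
# The gap is tiny at the extremes and largest near the middle of the curve.
```

This is the numerical counterpart of the two curves in the graph: a constant shift on the latent scale, a varying shift on the probability scale.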
18. Multiple Non-Linear Regression: Multiple non-linear regression. Finally, what about when we have a regression model with multiple continuous control variables? How does that work? Let's take our previous model with a dummy variable and simply add in another continuous explanatory variable; let's call it X2. This gives us a model along the lines of Y = beta × X1 + beta × X2 + beta × dummy variable. The key thing to understand about multiple non-linear regression is that the effect of each beta will vary not just according to what value of its own X we are at, but also according to what values of the other Xs we are at. In other words, the effect of each beta will depend on the value of every X, not just the variable in question. In practice, we often measure the slope of each coefficient at the mean value of all of the Xs. This can be hard to comprehend, so again, let me show you a visualization of a logit model with two continuous variables and one dummy variable. Here is a visualization of the aforementioned logit regression model. Our data consists of one dependent variable that takes only the values 0 and 1; that is why, in the left-hand graph, the data is distributed on the ceiling and floor of the three-dimensional image. Our data also contains two continuous explanatory variables, X1 and X2. Both have a positive relationship with Y, but it's pretty hard to figure that out from our scatter plot. In the right-hand graph, we've plotted the predicted values from a logit regression. Whereas a linear regression model such as ordinary least squares attempts to fit a linear plane of best fit through this data, a logit regression fits a non-linear plane of best fit through the data. However, the logit plane of best fit is not only non-linear in relation to one X variable: the slope of the plane changes according to both
of the X variables. Specifically, the values of both Xs will determine the relationship between X1 and Y, and also between X2 and Y. All of this can be quite a tricky concept to grasp, and if we have more explanatory variables, all of this moves into higher dimensions. Finally, the effect of the dummy variable is also visualized here: we have two planes of best fit in this graph. One plane is for all the values of zero for the dummy variable, and the other plane is for all the values of one for the dummy variable. I think it's obvious to see how difficult it can be to make sense of such models. It's basically impossible to state that for every one-unit change in X, Y changes by this much. Everything depends on everything else in non-linear models.
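The interdependence is easy to verify numerically. In this Python sketch (hypothetical coefficients, my own illustration), the slope with respect to X1 in a two-variable logit is beta1 × p × (1 − p); since p depends on X2 as well, the effect of X1 changes as X2 changes even though X1 itself is held fixed:

```python
import math

beta1, beta2 = 1.0, 1.0  # hypothetical coefficients on X1 and X2

def p(x1, x2):
    """Predicted probability from a two-variable logit."""
    return 1.0 / (1.0 + math.exp(-(beta1 * x1 + beta2 * x2)))

def slope_x1(x1, x2):
    """Marginal effect of X1, which also depends on X2 through p."""
    prob = p(x1, x2)
    return beta1 * prob * (1 - prob)

for x2 in (-3.0, 0.0, 3.0):
    print(f"slope of X1 at X1=0, X2={x2:+.0f}: {slope_x1(0.0, x2):.3f}")
# Same X1, same coefficients, yet the effect of X1 moves with X2.
```

This is why marginal effects for multiple non-linear regressions are conventionally reported with all the other variables fixed at their means, or averaged over the whole sample.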
19. Goodness-of-Fit: Goodness of fit. Now that we have a reasonable understanding of how non-linear regressions such as logit and probit regression models work, let's talk about how to measure whether such regression models fit the data well. The traditional R-squared value from ordinary least squares does not exist for non-linear models: there is no sum-of-squares computation coming from these kinds of models, which means we cannot compute how much variance is explained and unexplained. Other ways to measure fit are needed. Many software packages compute something called a pseudo R-squared. This attempts to mimic a goodness-of-fit diagnostic by first estimating a so-called null model: a model with no explanatory variables and only a constant. A second model with the full covariates is then estimated, and a comparison of the log-likelihood functions is made. The ratio of how much better the full model is is then provided as a pseudo R-squared. It can be a useful statistic, but it should never be considered to be similar to the traditional R-squared, so there is some element of danger here. Another way to compute goodness of fit is to look at something called a classification table. A classification table assigns predicted values from the model to either zero or one. Values that are predicted to be one and are actually one are classified as correct. Likewise, values that are predicted to be zero and are actually zero are also classified correctly. Any other values are then classified as incorrect. The proportion of correctly classified values then serves as an indicator of how well the model fits the data. Here's an example of a classification table from Stata. There's quite a lot of output going on here, so let me explain what's happening. At the top, we see a classification table for our logistic regression model. We have a total of 100 observations.
Of these, 63 observations are classified as one and 37 observations are classified as zero. Of the 63 observations that are classified as one, 45 have actual one values in the raw data and 18 have zero values. Likewise, for those with a prediction of zero, 11 are actually ones in the data and 26 are zeros in the raw data. So a total of 71 out of 100 observations are predicted correctly, and we can see at the bottom that 71% of observations are correctly classified. A higher value indicates a better fitting logit or probit model. Generally, values in the eighties or nineties are excellent, values in the seventies are good, values in the sixties are okay, and values in the fifties indicate a poorly fitting model. Remember that simply by rolling the dice, we could expect to classify 50% of values correctly, so 50% should be seen as the baseline here. There are quite a few other statistics in this table, but all are just variations on a theme. However, there is one last item to note: the classification depends on a cut-off value. By default, many programs use 0.5; in other words, values above 0.5 are predicted as one, and values below 0.5 are predicted as zero. This is arbitrary. Whilst a value of 0.5 seems to make logical sense, the cut-off value can be changed, and this will result in completely different model fits. Here's an example of that. In this video, I'm demonstrating the impact on the goodness-of-fit statistic of changing the classification cut-off. The graph shows the raw data points of a regression of a binary Y variable against a continuous X. A logit model is estimated and the predicted values are plotted. Red values are classified as zero and green values are classified as one. Grey values, slightly enlarged for better visual effect, denote incorrectly classified values. The initial cut point for classifying variables is set at 0.5. Now let's go ahead and change this. We can see that as we move the cut point value between zero and one,
the proportion of correctly classified points changes dramatically. In other words, this measure of goodness of fit is subject to what we think is the right cut point to classify data points. This could never happen in a normal linear regression model. My personal advice is to stick with 0.5 unless you have a very specific reason to do otherwise. One reason might be very skewed data, for example if a binary variable has a very high or low proportion of ones.
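The mechanics of a classification table fit in a few lines of Python (invented predictions and outcomes, not the course's Stata table): predicted probabilities above the cut-off count as one, the rest as zero, and we tally agreement with the actual values:

```python
# Hypothetical predicted probabilities and actual 0/1 outcomes.
predicted_probs = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2, 0.55, 0.45]
actual =          [1,   1,   0,   1,   0,   1,    0,   0,   1,    0  ]

def correctly_classified(probs, truth, cutoff=0.5):
    """Share of observations whose classified value matches the truth."""
    classified = [1 if p > cutoff else 0 for p in probs]
    correct = sum(c == t for c, t in zip(classified, truth))
    return correct / len(truth)

print(correctly_classified(predicted_probs, actual, cutoff=0.5))
print(correctly_classified(predicted_probs, actual, cutoff=0.3))
# Moving the cut-off changes the share classified "correctly".
```

Running this shows different accuracy figures for the two cut-offs, which is exactly the sensitivity demonstrated in the video.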
20. A note about Logit Coefficients: A note about logit coefficients. Probit coefficients do not have a natural interpretation, as they relate to the underlying latent scale of the dependent variable, which by definition is always unseen and hidden. However, logit coefficients do have a natural interpretation, thanks to a quirk of mathematics. For logit models, the estimated coefficients can be interpreted as: a one-unit increase in X causes a beta increase in the log odds of Y being one. This natural interpretation has some meaning, but the log odds portion can still be a bit awkward. To overcome this, we can exponentiate coefficients from a logit model. This allows logit coefficients to be interpreted as odds; specifically, odds ratios. Odds ratios are still complex to interpret, but it does mean that users are able to avoid marginal effects computation. We can interpret an exponentiated logit coefficient as follows: for a one-unit change in X, the odds are expected to change by a factor of beta, holding everything else constant. Odds ratios have a base of one, where the odds are the same. Therefore, if the beta is above one, we can say that the odds are beta times larger; if the beta is below one, we can say the odds are beta times smaller. However, remember that whilst odds have some meaning, they do not reveal the magnitude of the change in the probability of the outcome. Only marginal effects can do that.
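The log-odds quirk can be checked directly in Python (hypothetical coefficient of 0.7, my own illustration): a one-unit increase in X multiplies the odds of Y being one by exp(beta), whatever the starting value of X:

```python
import math

beta = 0.7  # hypothetical logit coefficient on X
odds_ratio = math.exp(beta)
print(f"odds ratio: {odds_ratio:.3f}")

def odds(x):
    """Odds of Y=1 at X = x: p / (1 - p) for a logit with slope beta."""
    p = 1.0 / (1.0 + math.exp(-beta * x))
    return p / (1 - p)

for x in (-2.0, 0.0, 2.0):
    print(f"odds({x + 1:+.0f}) / odds({x:+.0f}) = {odds(x + 1) / odds(x):.3f}")
# The ratio equals exp(beta) everywhere: constant in odds, not in probability.
```

Note the contrast with marginal effects: the odds ratio is the same at every X, while the change in probability is not.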
21. Tips for Logit and Probit Regression: Tips for logit and probit regression. Whilst data requirements for nonlinear models tend to be higher than for linear models, it should be noted that probit and logit regression models are very robust, even in small samples. In other words, whilst models like multinomial logit models require a lot of data, logit and probit regression can be done with a much smaller sample size. There's often very little reason to choose between logit or probit models. Both result in very similar predictions and similar models. However, one reason why some people gravitate naturally towards logit models is the extra flexibility of the odds interpretation of their coefficients. Raw logit coefficients are generally about 1.7 times larger than raw probit coefficients for the same model. However, marginal effects will be very similar. It is generally good practice to report marginal effects at the mean of all other variables, or the average marginal effect. It would be strange not to report these when using such models. However, sometimes marginal effect computation can be intensive, and there are two ways to overcome this: report raw coefficients from logit or probit models, which still allow users to interpret sign, relative size and significance, or resort to a linear probability model. Let me explain why.
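The 1.7 rule of thumb can be checked numerically: a logistic curve evaluated at 1.7 times z sits very close to the standard normal (probit) curve at z, which is why raw logit coefficients come out roughly 1.7 times larger than raw probit coefficients. A short Python sketch:

```python
import math

def probit_cdf(z):
    # Standard normal CDF, written with the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def logit_cdf(z):
    # Logistic CDF.
    return 1.0 / (1.0 + math.exp(-z))

# The logistic curve at 1.7*z closely tracks the probit curve at z.
for z in (-2, -1, 0, 1, 2):
    print(f"z={z:+d}  probit={probit_cdf(z):.3f}  logit(1.7z)={logit_cdf(1.7 * z):.3f}")
```

The two columns never differ by more than about a percentage point over this range, which is why the two models produce such similar predictions.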
22. Back to the Linear Probability Model?: Back to the linear probability model. We started this course with a clear example of why a linear probability model is generally a bad idea. However, it turns out that there is a silver lining. Linear probability models often produce the same marginal effects as the marginal effects from logit and probit regression. If most of the variables in the regression model have normally behaved data, marginal effects computation will often produce the same slope estimates as the slope estimates from a standard linear regression. In other words, it is possible to genuinely use linear probability models to compute marginal effects for regressions with binary dependent variables. This can be really useful in situations where computational time needs to be reduced. Alternatively, it could be useful for complicated nonlinear regression models, such as panel data logit models, where the mathematical complexities make marginal effect calculation extremely difficult. Here's an example of what I mean. Here I'm using Stata to estimate a logistic regression between y and x. The logit coefficient comes out at around 1.26, and average marginal effect computation produces a result of circa 0.24. In other words, the average marginal effect is that a one unit increase in x leads to a 24 percentage point increase in the probability of y being one. Now let's take a look at ordinary least squares regression using the same model and data. This model estimates a coefficient of 0.23. In other words, a one unit change in x leads to a 23 percentage point increase in the probability of y being one. This is almost identical to the logit model and highlights the potential usefulness of linear probability models.
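Here is a rough Python sketch of the average marginal effect calculation for a logit model. The slope of 1.26 mirrors the coefficient narrated above, but the intercept and the simulated x values are pure assumptions for illustration, so the result is only in the same neighbourhood as the 0.24 figure.

```python
import math
import random

random.seed(42)

# Hypothetical fitted logit: P(y=1|x) = Lambda(b0 + b1*x).
# b1 = 1.26 mirrors the narrated coefficient; b0 and the x values
# are assumptions for illustration, not course estimates.
b0, b1 = -0.2, 1.26
xs = [random.gauss(0, 1) for _ in range(1000)]

def lam(z):
    # Logistic CDF.
    return 1.0 / (1.0 + math.exp(-z))

# Average marginal effect: average the slope b1 * p * (1 - p) over the sample.
slopes = [b1 * lam(b0 + b1 * x) * (1 - lam(b0 + b1 * x)) for x in xs]
ame = sum(slopes) / len(slopes)
print(f"average marginal effect: {ame:.3f}")
```

Notice that the average marginal effect is far smaller than the raw logit coefficient, because p(1-p) can never exceed 0.25; it is this averaged slope, not the raw coefficient, that the linear probability model tends to reproduce.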
23. Stata - Applied Logit and Probit Examples: Okay, let's explore some of these concepts we've been discussing in an applied environment. We are now in Stata, which is a statistical software package commonly used to analyze quantitative data sets. It is similar to other packages such as SPSS, R or SAS. I won't explain how to operate Stata or the code that I'm executing to obtain these results; you can learn more about Stata in specific Stata courses. I've already opened up a training data set called the National Longitudinal Survey of Women 1988. Let's examine it a bit closer before we start running regressions. Let's start with a description of the data. The output returned by this command produces high level information about the data, such as where it is located, how many observations and variables are included, and its size. In this case, our data contains 2,246 observations and 17 variables. That's a fair sample size, but modern data sets tend to be a lot bigger. Below this is information on the variables. All variables are measured as numeric variables, though some are measured at different precisions. There are no string variables in this data. The variables all relate to labour market outcomes of a sample of women aged 35 to 45 in 1988. We have information on their ages, wages, occupation, education and more. Good. Now let's do a quick summary. Summarize provides us with some basic statistics for each variable, such as the observation count, the mean, the standard deviation, and the minimum and maximum values. Scanning through the data reveals that most amounts look normal for what we would expect. The average age is 39 years and 64% of the sample are married. Wages look fine, although we note that the variable union has observations missing. Now, let's pretend we're really interested in explaining the determinants of union membership.
We can already start building a picture in our heads of what variables might be important in explaining the choice of being a union member. Wages and education are likely to be important factors, maybe age, too. In fact, a lot of the variables here might be important factors in determining someone's decision to be a union member. To keep things easy, let's only include a small number of variables to start with. Let's pick age, wage, married and college grad as our variables. The variable union looks like it is measured as a binary variable; let's confirm this with a tabulation. Indeed, the variable is measured as a binary variable, and 24.5% of our sample are members of a union. Next, let's plot the variable union against age. This is a good example of why a graphical analysis of binary data can be difficult. We can't really see anything here other than that for each year of age, there are union members and non-union members. We could draw a local polynomial smoother through this plot to get a better understanding of what the relationship between age and being a union member looks like. It doesn't look like there's a particularly strong relationship between age and union membership. For demonstration purposes, let's now estimate a parametric relationship using a logit model. We'll only use age as an explanatory variable for now. Stata's logit regression output looks very similar to that of a standard ordinary least squares regression output: diagnostic information is presented at the top, and results are presented below that. At the very top of the results we see the maximum likelihood process taking place. Stata computes several models with different parameters and estimates a log likelihood, then converges on the set of parameters that offers the largest log likelihood. Because logit and probit models are so well developed, it doesn't take many iterations to achieve a final set of results.
The final log likelihood is presented here. Next, we have information on the observation count and a likelihood ratio chi-square statistic. This statistic is similar to an F test for linear models and tells us whether the model explains something or not. In this case, the answer is "or not", since the p-value of the chi-square statistic is way above 0.05. Next is the pseudo R-squared, which further confirms that this is a terrible fit. Whilst one should never treat this as being analogous to the linear R-squared statistic, a value this close to zero is extremely bad. In the results section, we see why: the coefficient on age is very small and the standard error is high. The associated z statistic is analogous to the t statistic in linear regression; values above 1.96 imply statistical significance for reasonably sized samples. The p-value also has the same meaning as for linear models: values of 0.05 or below are statistically significant at the 95% level. Both the z statistic and the p-value show that the variable age is very statistically insignificant. To further illustrate this, we could compute the predicted probabilities of union membership from this model and plot these on our graph. The blue dots represent the raw data points, and the red dots represent the predicted probabilities of union membership. The result is that there's virtually no relationship between age and union membership. It is hard to see, but the predicted relationship here is still nonlinear; it's just that the nonlinear part in this bit of the data is so flat that we can hardly see it. If we predicted this relationship over a much wider range of ages, we could see the logit curve's familiar shape. Here it is using an age range of minus 1,000 to plus 1,000. We see the usual nonlinear relationship between age and union membership from this particular logit model. Obviously this doesn't make a lot of sense; we're predicting far out of bounds, and moreover, ages below zero are not possible.
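To see why the in-sample prediction looks flat while the relationship is still nonlinear, here is a small Python sketch. The intercept and the tiny age coefficient are both invented for illustration; they are not the estimates from the course data.

```python
import math

# Hypothetical logit of union membership on age.  The near-zero slope makes
# the S-curve look flat over the observed ages (35-45), but the nonlinear
# shape reappears once we predict far out of bounds.  Both numbers are
# assumptions for illustration only.
b0, b1 = -1.3, 0.005

def predict(age):
    # Predicted probability from the logit model.
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * age)))

print("observed range:", round(predict(35), 3), "to", round(predict(45), 3))
print("out of bounds: ", round(predict(-1000), 3), "to", round(predict(1000), 3))
```

Over ages 35 to 45 the predicted probability barely moves, yet over the absurd minus 1,000 to plus 1,000 range the full S-curve from near zero to near one emerges.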
So let's go back to our logit model and add in some more variables. We know that age is not statistically significant, but unless there's a problem with sample size, my advice is generally to not exclude statistically insignificant variables. The reason is that controlling for additional new variables might make earlier variables statistically significant again. Let's take a look. We'll add wage, married and college graduate as further explanatory variables to the model. The model now has a chi-square statistic of 48, which is statistically significant. This means our variables do explain something. The pseudo R-squared is 0.023, which is much better than before. However, it still seems like a low value; it is worth exploring this further with a classification table in a moment. First, looking at the results, we see that two variables are statistically significant at the 95% level: wage and college graduate. One variable, married, is statistically significant at the 10% level. The currently presented coefficients are difficult to interpret, but we can infer size, sign and significance. Wages are positively related to the probability of being a union member. Being a college graduate is also positively related, while being married is negatively related to being a union member. Both college graduate and married are dummy explanatory variables, so we can confirm that the effect of being a college graduate is stronger than the effect of being married. This is because the absolute coefficient of college graduate is around 20% larger than the coefficient of married. To make sense of the coefficients in a more meaningful way, we would normally compute marginal effects. This can be done easily in Stata, and by default Stata computes the average marginal effect. In other words, it computes all the slopes, one for every value of x, and then averages these. Here are the results. Stata computed the average marginal effects with respect to all variables.
The effect of age is insignificant, but the interpretation of the estimate is as follows: on average, a one unit increase in age increases the probability of union membership by 0.1 percentage points. Wage is also a continuous variable, so the interpretation is: on average, a one unit increase in hourly wage increases the probability of union membership by 1.2 percentage points. Married and college graduate are dummy variables, so they can be interpreted as: on average, being married decreases the probability of union membership by 3.9 percentage points, and on average, being a college graduate increases the probability of union membership by 4.6 percentage points. Great. We can also compute specific marginal effects to answer questions about how specific people might be affected by a change in x. For example, the effect of being married on union membership is minus 5 percentage points for women who are aged 40 with a college background and a wage of $30 per hour. Next, let's explore goodness of fit a little bit closer. The pseudo R-squared value was 0.023. By calling a classification table, we can obtain more information. The classification table for our logit regression shows that we classified 75% of observations correctly. That seems like a pretty good number, but it is important to examine the classification table in more detail. Whilst our model did a good job of predicting zero values that are actually zero, it did a very bad job of predicting any positive values. Only 20 observations are predicted to be union members, and we know from our summary statistics that around 450 observations are actually union members. So whilst the proportion of correctly classified values is relatively okay, a further inspection of the classification table tells us that our model does a bad job of predicting positive values. It clearly needs more work. Next, let's compare the output from the logit model with the results from a probit and a linear probability model. Comparing the raw coefficients won't be very useful.
So let's compute the marginal effects for each model. The linear probability model produces marginal effects directly. For the logit and probit regressions, we need to ask Stata to compute them. We'll store these estimates and then compare them in a table, like so. The results table indicates that all three models produce very similar results. The marginal effects are almost identical. For example, being married results in a 4 percentage point decrease in the probability of being a union member from the linear probability model, a 3.9 percentage point decrease from the logit model and a 4 percentage point decrease from the probit model. Finally, before we finish, let me show you the concept of latent variables with a probit model. This can be a hard concept to understand, so I prefer to demonstrate it with simulated data. Let's clear everything in our data, and let's invoke the set obs command to tell Stata to create 1,000 observations when we invoke a random number command. Finally, let's set a seed so we can reproduce our results. I'm now going to generate a new variable out of thin air using Stata's random number functions. I'm going to generate a new variable called x that is normally distributed. Let's do a summary to explore what I've done. I've generated a new data set that has one variable, x. This variable is normally distributed; it has a mean of zero and a standard deviation of one. A kernel density plot shows the normal distribution of this variable. Next, let's generate another variable called e, which is also normally distributed. This variable will mimic an error term in the regression. Now let's generate a third variable called y star. We generate y star equal to two times x plus one times e, so there is a positive relationship between y star and x with a slope of two. However, let's now pretend that y star is a latent and unobserved process. We don't actually see y star. What we see is y, the realization of y star.
y is one if y star is greater than zero, and zero if it is less. If we tabulate y, we see that 51% of observations are one and 49% of observations are zero. Now let's run a probit regression of y against x. Look at that: the probit coefficient is approximately two. This coefficient relates to the underlying relationship between y star and x. This is what we mean when we talk about latent variables, and how logit and probit coefficients are the coefficients of underlying latent processes. If we change the value of 2 to 4 in our y star generation, the probit model will estimate a coefficient of approximately four. Hopefully, this little simulated example made the concept of latent variables more real and easier to grasp.
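The simulation just described can also be reproduced outside of Stata. Below is a Python sketch of the same idea: generate a latent y star equal to two times x plus a standard normal error, observe only the binary y, and fit a probit by maximum likelihood. The Fisher scoring fit is hand-rolled here purely to keep the sketch free of external libraries; the estimated slope should come out close to the latent slope of two.

```python
import math
import random

random.seed(12345)
n = 1000

# Latent process: y_star = 2*x + e, with e standard normal.
# We only observe y = 1 when y_star > 0.
x = [random.gauss(0, 1) for _ in range(n)]
e = [random.gauss(0, 1) for _ in range(n)]
y = [1 if 2 * xi + ei > 0 else 0 for xi, ei in zip(x, e)]

def phi(z):   # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Probit maximum likelihood via Fisher scoring (intercept b0, slope b1).
b0 = b1 = 0.0
for _ in range(25):
    g0 = g1 = i00 = i01 = i11 = 0.0
    for xi, yi in zip(x, y):
        z = b0 + b1 * xi
        p = min(max(Phi(z), 1e-10), 1 - 1e-10)
        score = phi(z) * (yi - p) / (p * (1 - p))  # generalized residual
        w = phi(z) ** 2 / (p * (1 - p))            # Fisher information weight
        g0 += score
        g1 += score * xi
        i00 += w
        i01 += w * xi
        i11 += w * xi * xi
    det = i00 * i11 - i01 * i01
    b0 += (i11 * g0 - i01 * g1) / det
    b1 += (i00 * g1 - i01 * g0) / det

print(f"estimated probit slope: {b1:.2f}")  # close to the latent slope of 2
```

Because the error is standard normal, the probit slope identifies the latent coefficient directly; changing the 2 to a 4 in the y construction makes the fitted slope land near four instead.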