Logit Notes
Author: Kevin Navarrete-Parra
I am writing quick and easy R guides for didactic purposes and to provide useful starting points for my peers in grad school. If you see that I have made a mistake or would like to suggest a way to make a post better or more accurate, please feel free to [email][1] me. I am always happy to learn from others' experiences!
Logit Function
The logit model takes the form

$$\ln\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 x$$

Let's break up this function to understand better what's going on. On the left side of the $=$ sign, we see the logit transformation of the dependent variable. Unlike an OLS regression, you're not just inputting the raw dependent variable on the left side. Since logit models deal with dichotomous variables, you must first transform the 1's and 0's. Therefore, we need the logit transformation $\mathrm{logit}(p) = \ln\left(\frac{p}{1-p}\right)$, where $p$ is the probability and $\ln$ is the natural log. Notice that the ratio $\frac{p}{1-p}$ is the odds function shown below. The logit function does not simply predict a 1 or a 0 the way an OLS model predicts a continuous dependent variable; instead, it predicts the natural log of the odds that the dependent variable equals 1.
On the right side of the logit formula, we see $\beta_0 + \beta_1 x$, where $\beta_0$ is the intercept, $\beta_1$ is the coefficient, and $x$ is the independent variable.
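As a quick sketch in R, here is what fitting this model looks like with `glm()`. The data are simulated and the variable names (`x`, `y`) are mine, not from the post:

```r
# Simulate a binary outcome whose log odds are linear in x,
# then fit a logit model. plogis() is the inverse logit.
set.seed(42)
x <- rnorm(100)
p <- plogis(0.5 + 1.2 * x)        # true model: logit(p) = 0.5 + 1.2x
y <- rbinom(100, size = 1, prob = p)

fit <- glm(y ~ x, family = binomial(link = "logit"))
coef(fit)  # estimates of the intercept (beta_0) and slope (beta_1)
```

The coefficients `glm()` reports are on the log-odds scale, which is why the transformations in the next section matter for interpretation.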
Odds
Calculating the odds of a given event:

$$\mathrm{odds} = \frac{p}{1-p}$$

Notice the implicit assumption in this equation: the odds for the population ($P$) equal the odds for the sample ($p$).

Forward transformation from odds to log odds:

$$\mathrm{logit}(p) = \ln\left(\frac{p}{1-p}\right)$$

Backward transformation from odds to probabilities:

$$p = \frac{\mathrm{odds}}{1 + \mathrm{odds}}$$

Backward transformation from logit to odds:

$$\mathrm{odds} = e^{\mathrm{logit}(p)}$$
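These transformations are easy to verify numerically. A minimal sketch, using an arbitrary probability of 0.8 (R's built-in `qlogis()` and `plogis()` do the same forward and backward steps):

```r
# Round-trip: probability -> odds -> logit -> odds -> probability
p <- 0.8
odds  <- p / (1 - p)                   # probability to odds
logit <- log(odds)                     # odds to log odds (the logit)
odds_back <- exp(logit)                # logit back to odds
p_back <- odds_back / (1 + odds_back)  # odds back to probability
c(odds = odds, logit = logit, p_back = p_back)
```

Since $0.8 / 0.2 = 4$, the odds here are 4, and the round trip recovers the original probability.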
Log Likelihood
First, we have the likelihood function,

$$L(p \mid y) = \prod_{i=1}^{n} p_i^{\,y_i} (1 - p_i)^{1 - y_i}$$

This function gives us the unknown parameter $p$ given the known data $y$. This effectively acts as the reverse of the probability function discussed above.
Next, we get the log-likelihood function, which builds on the likelihood function above. The log-likelihood function will be important below when we talk about the deviance of the logit model:

$$\ell(p \mid y) = \ln L(p \mid y) = \sum_{i=1}^{n} \left[\, y_i \ln(p_i) + (1 - y_i) \ln(1 - p_i) \,\right]$$

where $\ell(p \mid y) = \ln L(p \mid y)$. In other words, the log-likelihood is the natural log ($\ln$) of the likelihood ($L$) of a parameter ($p$) given the present data ($y$).
The other half of the equation gives us the process for estimating the log-likelihood, which is the summation ($\sum$) over the data: the natural log of the probability of observing a 1 ($\ln p_i$) when $y_i = 1$, plus the natural log of the probability of observing a 0 ($\ln(1 - p_i)$) when $y_i = 0$.
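This sum can be computed by hand and checked against R's `logLik()`. A sketch with simulated data (names and numbers are illustrative):

```r
# Compute the Bernoulli log-likelihood manually and compare it
# with logLik() from a fitted intercept-only logit model.
set.seed(1)
y <- rbinom(50, size = 1, prob = 0.6)
fit <- glm(y ~ 1, family = binomial)
p_hat <- fitted(fit)  # fitted probabilities

ll_manual <- sum(y * log(p_hat) + (1 - y) * log(1 - p_hat))
all.equal(ll_manual, as.numeric(logLik(fit)))  # the two agree
```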
When using the log-likelihood statistic to test a model's goodness of fit, one will often look at the $-2LL$ value in conjunction with the $LL$ value. The $-2LL$ value is calculated as

$$-2LL = -2\left[\, LL(\text{fitted model}) - LL(\text{saturated model}) \,\right]$$

where the saturated model is the theoretical logit model that perfectly fits your data. This theoretical model's log-likelihood would be zero because it would be so severely overspecified that there would be as many parameters as observations. Usually, then, this is simplified to

$$-2LL = -2\, LL(\text{fitted model})$$

Your $-2LL$ value for the logit model is termed the deviance, which indicates how well your model fits the data compared to the "perfect" model.
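In R, the deviance that `glm()` reports is exactly this quantity. A minimal check on simulated binary data:

```r
# For ungrouped 0/1 data the saturated log-likelihood is zero,
# so the reported deviance equals -2 times the model log-likelihood.
set.seed(1)
y <- rbinom(50, size = 1, prob = 0.6)
x <- rnorm(50)
fit <- glm(y ~ x, family = binomial)

all.equal(deviance(fit), -2 * as.numeric(logLik(fit)))  # TRUE
```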
Using Deviance to Compare Nested Models
A nested model is a model whose parameters are a subset of another model's parameters. The model with fewer parameters is the reduced model, and the one with the full set of parameters is the full model. Recall that the number of parameters is the number of independent variables plus the intercept. Suppose, then, that you have two logit models. The first has $x_1$, $x_2$, and $x_3$ as independent variables and the second has only $x_1$. The first would be the full model and the second would be the reduced model. You compare the two models by finding the difference between their $-2LL$ values:

$$G = (-2LL_{\text{reduced}}) - (-2LL_{\text{full}})$$

This difference follows a chi-squared distribution with degrees of freedom equal to the difference in the number of parameters.
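A sketch of this comparison in R; the predictors $x_1$, $x_2$, $x_3$ are simulated here purely for illustration, and `anova()` with `test = "Chisq"` performs the same likelihood-ratio comparison automatically:

```r
# Compare nested logit models via the difference in deviances.
set.seed(7)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.3 + 0.8 * x1 + 0.5 * x2))

full    <- glm(y ~ x1 + x2 + x3, family = binomial)
reduced <- glm(y ~ x1, family = binomial)

g <- deviance(reduced) - deviance(full)  # likelihood-ratio statistic
lr_test <- anova(reduced, full, test = "Chisq")
lr_test  # anova() reports the same deviance difference and a p-value
```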
Pseudo R-squared
The $R^2$ values for logit models are treated differently from the same values in OLS models. Here, we generally have three options for measuring $R^2$: the likelihood ratio (i.e., McFadden's $R^2$), Cox and Snell's $R^2$ (i.e., maximum likelihood $R^2$), and Nagelkerke's $R^2$ (i.e., Cragg and Uhler's $R^2$).
McFadden's $R^2$ is as follows:

$$R^2_{\text{McFadden}} = 1 - \frac{LL(\text{model})}{LL(\text{null})}$$

where $LL(\text{model})$ signifies the log-likelihood for your model, and $LL(\text{null})$ is the log-likelihood for the null (intercept-only) model.
Cox and Snell's $R^2$ is as follows:

$$R^2_{\text{C\&S}} = 1 - \left(\frac{L_0}{L_1}\right)^{2/n}$$

where $L_0$ is the likelihood of the null model, $L_1$ is the likelihood of the fitted model, and $n$ is the number of observations.
Nagelkerke's $R^2$ is as follows:

$$R^2_{\text{Nagelkerke}} = \frac{1 - \left(\frac{L_0}{L_1}\right)^{2/n}}{1 - L_0^{2/n}}$$

Notice that this takes Cox and Snell's version and builds onto it, dividing by the maximum value Cox and Snell's $R^2$ can attain so that the measure can reach 1.
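All three can be computed directly from the fitted and null log-likelihoods. A sketch on simulated data (base R only; packages such as `pscl` or `DescTools` also report these):

```r
# Compute the three pseudo R-squared measures from log-likelihoods.
set.seed(3)
n <- 150
x <- rnorm(n)
y <- rbinom(n, 1, plogis(0.2 + x))

fit  <- glm(y ~ x, family = binomial)
null <- glm(y ~ 1, family = binomial)
ll1 <- as.numeric(logLik(fit))   # fitted model log-likelihood
ll0 <- as.numeric(logLik(null))  # null model log-likelihood

mcfadden   <- 1 - ll1 / ll0
cox_snell  <- 1 - exp((2 / n) * (ll0 - ll1))      # 1 - (L0/L1)^(2/n)
nagelkerke <- cox_snell / (1 - exp((2 / n) * ll0)) # rescaled to max 1
c(McFadden = mcfadden, CoxSnell = cox_snell, Nagelkerke = nagelkerke)
```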
When you calculate the above values, they will not range from 0 to 1 in quite the same way as the $R^2$ value for OLS models. Instead, they will increase more gradually.
AIC and BIC
The AIC and BIC scores are helpful when comparing two different models.
AIC is calculated as

$$AIC = D + 2k$$

where $D$ is the deviance ($-2LL$) and $k$ is the number of parameters.
BIC is calculated as

$$BIC = D + k \ln(n)$$

where $k$ is the number of parameters, $n$ is the number of observations, and $D$ is the deviance of the fitted model.
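Both formulas can be verified against R's built-in `AIC()` and `BIC()` functions. A sketch with simulated data:

```r
# Recover AIC and BIC from the deviance, parameter count, and n.
set.seed(5)
n <- 100
x <- rnorm(n)
y <- rbinom(n, 1, plogis(x))
fit <- glm(y ~ x, family = binomial)

k <- length(coef(fit))  # parameters: intercept + slope
all.equal(AIC(fit), deviance(fit) + 2 * k)       # TRUE
all.equal(BIC(fit), deviance(fit) + k * log(n))  # TRUE
```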
Significance
The logit model's significance is tested by finding the Wald z-score, which is simply

$$z = \frac{\hat{\beta}}{SE(\hat{\beta})}$$

where $\hat{\beta}$ is the estimated logit coefficient and $SE$ is its standard error.
Calculating the confidence interval is as follows:

$$\hat{\beta} \pm z_{\alpha/2} \times SE(\hat{\beta})$$

where $SE$ is the standard error and $z_{\alpha/2}$ is the critical z-value for the desired confidence level (e.g., 1.96 for a 95% interval).
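A sketch of both computations in R; `summary()` reports the same z-score, and `confint.default()` gives the same Wald interval:

```r
# Wald z-score and 95% confidence interval computed by hand.
set.seed(9)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(0.5 * x))
fit <- glm(y ~ x, family = binomial)

b  <- coef(summary(fit))["x", "Estimate"]
se <- coef(summary(fit))["x", "Std. Error"]
z  <- b / se                            # Wald z-score
ci <- b + c(-1, 1) * qnorm(0.975) * se  # 95% Wald interval
c(z = z, lower = ci[1], upper = ci[2])
```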
Odds Ratios
The odds ratios for the different variables are an important facet of logit models because they help you interpret the directionality and magnitude of the relationship between a given variable and the response.
The odds ratio for a coefficient is $e^{\hat{\beta}}$. If the result equals 1, there is neither a positive nor a negative relationship between the given variable and the response. A ratio of 2 indicates that a one-unit increase in the predictor doubles the odds of the response. A ratio below 1 similarly indicates a negative relationship: the odds of the response decrease as the predictor increases.
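In R, odds ratios come from exponentiating the logit coefficients. A sketch on simulated data (names are illustrative):

```r
# Odds ratios are the exponentiated logit coefficients.
set.seed(11)
x <- rnorm(300)
y <- rbinom(300, 1, plogis(0.7 * x))
fit <- glm(y ~ x, family = binomial)

exp(coef(fit))  # > 1: positive association; < 1: negative association
```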