Centering predictors #32
Replies: 1 comment
-
Centering (whether at the mean or at some other interpretable value) is important for the interpretation of the intercept term as well as of all interactions. In this way, it's a lot like contrast coding of categorical variables: it won't change your model's predictions, but it's important for the interpretation of all the coefficients and becomes more important when you have interactions.

Scaling and Z-transformation are similar. These also change the units of your predictors and hence the units of your regression coefficients. Depending on whether or not your original units were natural / had a good interpretation (e.g., milliseconds, meters, etc.), performing the Z-transformation can make your coefficients more interpretable. Note that many common changes of units -- seconds to milliseconds, meters to centimeters, etc. -- are technically scaling! So we already deal with scaling in many common situations.

Finally, it's important to note that standardizing your predictors -- i.e., performing the Z-transformation on each continuous predictor -- can help overcome numerical difficulties (more on this when we discuss a few more details of what it means to fit a model in the technical sense) and may be required for certain statistical procedures where all the variables have to be on the same scale (e.g., LASSO or ridge regression).

The StandardizedPredictors.jl package can help you standardize your predictors by treating transformations like centering or scaling as a type of pseudo-contrast for continuous variables. Its documentation also has a nice example of how centering a predictor impacts its interpretation.
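To see the "changes interpretation, not predictions" point concretely, here is a minimal sketch in Python/numpy (the data and variable names are made up for illustration): the slope is unchanged by centering, the intercept of the centered fit becomes the mean response, and z-scoring rescales the slope to "change in y per 1 SD of x".

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(50, 10, 200)            # hypothetical continuous predictor
y = 2.0 * x + rng.normal(0, 5, 200)    # simulated response

def ols(x, y):
    """Fit y = b0 + b1*x by ordinary least squares; return (b0, b1)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_raw = ols(x, y)                          # raw predictor
b_ctr = ols(x - x.mean(), y)               # mean-centered predictor
b_z   = ols((x - x.mean()) / x.std(), y)   # z-scored predictor

# Centering leaves the slope alone but moves the intercept to the
# mean response; z-scoring multiplies the slope by sd(x).
print(b_raw, b_ctr, b_z)
```

The fitted values (and hence predictions) are identical across all three parameterizations; only the meaning of the coefficients changes.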
-
We've talked about transformations of the dependent variable. But what about the decision to transform our predictors (e.g., mean centering, Z-transformation)? How do we decide which method to use (or not to use)?