Preprocessing helps ensure that the data you feed the model is balanced and informative: keep the features most likely to carry useful patterns, and remove the noise that gets in the model's way.

Variables with near-zero variance are almost constant, so they carry little information and can usually be removed from the data set.  See https://recipes.tidymodels.org/reference/step_nzv.html

Variables that are highly correlated with each other provide largely redundant information, so only one of each correlated group needs to be retained; step_corr removes the member of each highly correlated pair that has the largest mean absolute correlation with the remaining predictors.  See https://recipes.tidymodels.org/reference/step_corr.html
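Since the linked step is R, here is an illustrative Python sketch of the same filtering idea (synthetic data; the 0.9 threshold is an arbitrary choice, and the tie-breaking is simplified relative to the real step):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
df = pd.DataFrame({
    "x1": x1,
    "x2": x1 * 2 + rng.normal(scale=0.01, size=200),  # nearly a copy of x1
    "x3": rng.normal(size=200),                        # independent noise
})

def drop_high_corr(data, threshold=0.9):
    """For any pair correlated above the threshold, drop the member with
    the larger mean absolute correlation to the other predictors."""
    corr = data.corr().abs()
    cols = list(corr.columns)
    drop = set()
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > threshold:
                mean_i = corr[cols[i]].drop(cols[i]).mean()
                mean_j = corr[cols[j]].drop(cols[j]).mean()
                drop.add(cols[i] if mean_i >= mean_j else cols[j])
    return data.drop(columns=sorted(drop))

reduced = drop_high_corr(df)
# One of the x1/x2 pair is removed; the uncorrelated x3 is kept.
```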

A variable with a large share of missing values tends to be uninformative and distracting, unless the missingness itself carries meaning or you can impute the values, either automatically or with a subject matter expert's help.  A general rule of thumb: if more than ~30% of the values are missing, don't use the variable.  See https://recipes.tidymodels.org/reference/step_filter_missing.html
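The ~30% rule of thumb above is trivial to express in pandas; this illustrative sketch (made-up data) mirrors what the linked R step does with its threshold argument:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "mostly_there": [1.0, 2.0, np.nan, 4.0, 5.0],   # 20% missing
    "mostly_gone":  [np.nan, np.nan, np.nan, 4.0, np.nan],  # 80% missing
})

# Keep only columns whose missing fraction is at or below the threshold.
threshold = 0.30
keep = df.columns[df.isna().mean() <= threshold]
filtered = df[keep]
```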

If column A + column B = column C, then mathematically you only need 2 of the 3: keeping all three introduces an exact linear dependency, which can cause downstream modeling complications (e.g. unstable coefficients in linear models).  See https://recipes.tidymodels.org/reference/step_lincomb.html
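One way to see the redundancy is through matrix rank; this Python sketch (NumPy/pandas, made-up data) illustrates the idea behind the linked step, though the package's actual algorithm differs:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "a": [1.0, 2.0, 3.0, 4.0],
    "b": [5.0, 1.0, 2.0, 7.0],
})
df["c"] = df["a"] + df["b"]  # exact linear combination: a + b = c

# The rank of the data matrix is 2 even though there are 3 columns,
# revealing that one column is a linear combination of the others.
X = df.to_numpy()
rank = np.linalg.matrix_rank(X)

# Greedily drop any column whose removal leaves the rank unchanged.
cols = list(df.columns)
for col in list(cols):
    remaining = [c for c in cols if c != col]
    if np.linalg.matrix_rank(df[remaining].to_numpy()) == rank:
        cols = remaining  # col was redundant
independent = df[cols]
```

Which 2 of the 3 columns survive is arbitrary; any 2 carry the same information as all 3.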

Centering and scaling the data gives all variables an equal and balanced voice in the model, so features measured on large scales don't dominate simply because of their units.
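For example (illustrative pandas sketch with made-up columns), subtracting each column's mean and dividing by its standard deviation puts an income in dollars and an age in years on the same footing:

```python
import pandas as pd

df = pd.DataFrame({
    "income_usd": [30000.0, 50000.0, 70000.0, 90000.0],
    "age_years":  [25.0, 35.0, 45.0, 55.0],
})

# Center (subtract the mean) and scale (divide by the standard deviation):
# every column ends up with mean 0 and standard deviation 1.
standardized = (df - df.mean()) / df.std()
```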