Part Two: Systematic components and link functions

Categories: statistics
Author: Jon Minton
Published: December 1, 2023

tl;dr

This is part of a series of posts that introduce and discuss the implications of a general framework for thinking about statistical modelling. This framework is most clearly expressed in King, Tomz, and Wittenberg (2000).

References

King, Gary, Michael Tomz, and Jason Wittenberg. 2000. “Making the Most of Statistical Analyses: Improving Interpretation and Presentation.” American Journal of Political Science 44 (2): 347–61. http://gking.harvard.edu/files/abs/making-abs.shtml.

Footnotes

  1. Using some base R graphics functions, as I’m feeling masochistic.

  2. Note from Claude: In machine learning terminology, these link functions g(.) correspond to activation functions in neural networks. The logistic function described here is identical to the sigmoid activation commonly used in ML. Modern deep learning extends this concept: neural networks chain multiple transformations together, while GLMs apply a single transformation. The cross-entropy loss function used to train logistic regression classifiers in ML is mathematically equivalent to the negative log-likelihood used in traditional GLM estimation. Python users can explore these connections using PyTorch (torch.nn.functional.sigmoid) or TensorFlow (tf.nn.sigmoid), which implement the same logistic transformation.
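To make the equivalence claimed in the note above concrete, here is a minimal sketch in R (the language used elsewhere in this series). It is not from the original post: the simulated data and the hand-rolled `sigmoid()` helper are illustrative assumptions. It checks numerically that R’s inverse-logit `plogis()` is the ML sigmoid, and that the negative log-likelihood of a fitted binomial GLM equals the summed binary cross-entropy of its fitted probabilities.

```r
# Minimal sketch (not from the original post): the inverse-logit link of a
# binomial GLM is the ML "sigmoid", and the GLM's negative log-likelihood
# is the binary cross-entropy loss.

set.seed(42)

# Simulate illustrative data from a logistic model
n <- 500
x <- rnorm(n)
p_true <- plogis(-0.5 + 1.2 * x)   # plogis() is the logistic (sigmoid) function
y <- rbinom(n, size = 1, prob = p_true)

# plogis() and a hand-rolled sigmoid agree
sigmoid <- function(z) 1 / (1 + exp(-z))
all.equal(plogis(2.3), sigmoid(2.3))   # TRUE

# Fit a logistic regression (binomial GLM with logit link)
mod <- glm(y ~ x, family = binomial(link = "logit"))

# Negative log-likelihood from the fitted GLM ...
nll <- -as.numeric(logLik(mod))

# ... equals the (summed) binary cross-entropy of the fitted probabilities
p_hat <- fitted(mod)
cross_entropy <- -sum(y * log(p_hat) + (1 - y) * log(1 - p_hat))

all.equal(nll, cross_entropy)   # TRUE
```

Relatedly, for ungrouped 0/1 responses the residual deviance reported by `glm()` is exactly twice this negative log-likelihood, since the saturated model’s log-likelihood is zero for binary data.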