Generalized linear models

| Distribution | Notation | GLM Type | Link Function | MLE Loss Function |
|---|---|---|---|---|
| Gaussian | $N(\mu, \sigma^2)$ | Linear Regression | $g(\mu) = \mu$ | $L = \frac{1}{2n}\sum_{i=1}^n (y_i - \hat{y}_i)^2$ |
| Binomial | $B(n, p)$ | Logistic Regression | $g(p) = \log\left(\frac{p}{1-p}\right)$ | $L = -\frac{1}{n}\sum_{i=1}^n \left[y_i \log(\hat{p}_i) + (1-y_i) \log(1-\hat{p}_i)\right]$ |
| Poisson | $Pois(\lambda)$ | Poisson Regression | $g(\lambda) = \log(\lambda)$ | $L = \frac{1}{n}\sum_{i=1}^n \left[\hat{\lambda}_i - y_i \log(\hat{\lambda}_i)\right]$ |
| Multinomial | $Mult(n, p_1, \ldots, p_k)$ | Multinomial Logistic Regression | $g(p_j) = \log\left(\frac{p_j}{p_k}\right)$ for $j = 1, \ldots, k-1$ | $L = -\frac{1}{n}\sum_{i=1}^n \sum_{j=1}^k y_{ij} \log(\hat{p}_{ij})$ |
| Gamma | $Gamma(k, \theta)$ | Gamma Regression | $g(\mu) = \frac{1}{\mu}$ | $L = \frac{1}{n}\sum_{i=1}^n \left[\frac{y_i}{\hat{\mu}_i} + \log(\hat{\mu}_i)\right]$ |
| Inverse Gaussian | $IG(\mu, \lambda)$ | Inverse Gaussian Regression | $g(\mu) = \frac{1}{\mu^2}$ | $L = \frac{1}{n}\sum_{i=1}^n \frac{(y_i - \hat{\mu}_i)^2}{y_i \hat{\mu}_i^2}$ |
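To make the table concrete, here is a minimal sketch of fitting one of these models, Poisson regression, by gradient descent on the loss from the table, $L = \frac{1}{n}\sum_i [\hat{\lambda}_i - y_i \log(\hat{\lambda}_i)]$ with $\hat{\lambda}_i = \exp(x_i^\top w)$. The synthetic data, learning rate, and iteration count are illustrative assumptions, not part of the table.

```python
import numpy as np

# Synthetic Poisson data (assumed for illustration).
rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
w_true = np.array([0.5, -0.25, 0.1])
y = rng.poisson(np.exp(X @ w_true))

# Gradient descent on the Poisson loss from the table.
# With the log link, dL/dw = (1/n) * X^T (lambda_hat - y).
w = np.zeros(d)
lr = 0.1
for _ in range(2000):
    lam = np.exp(X @ w)         # inverse link: lambda_hat = exp(X w)
    grad = X.T @ (lam - y) / n  # gradient of the negative log-likelihood
    w -= lr * grad

print("estimated:", w)
print("true:     ", w_true)
```

The same loop fits the other rows of the table by swapping in the appropriate inverse link and loss gradient; for the canonical links shown, the gradient keeps the same $X^\top(\hat{y} - y)/n$ form.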

Collaborative filtering

Feature selection

Decision tree ensembles