## What is the Tukey loss function?

There is a nicely brief summary of the Tukey loss function, originally from Statistical Odds and Ends.

I’d love to see what this does in various kinds of regression. It may be possible to set up an iterative regression scheme: first run an ordinary regression with uniform weights, then use the residuals to define a set of alternative weights via the Tukey loss function. The weighted regression is then run, producing another set of residuals, which define a new set of weights. This should (eventually) settle down.

Here’s the article: Statistical Odds & Ends

The Tukey loss function

The Tukey loss function, also known as Tukey’s biweight function, is a loss function that is used in robust statistics. Tukey’s loss is similar to Huber loss in that it demonstrates quadratic behavior near the origin. However, it is even more insensitive to outliers because the loss incurred by large residuals is constant, rather than scaling linearly as it would for the Huber loss.

The loss function is defined by the formula

$$\ell(r) = \begin{cases} \dfrac{c^2}{6}\left(1 - \left[1 - \left(\dfrac{r}{c}\right)^2\right]^3\right) &\text{if } |r| \leq c, \\[1ex] \dfrac{c^2}{6} &\text{otherwise}. \end{cases}$$

In the above, I use $r$ as the argument to the function to represent “residual”, while $c$ is a positive parameter that the user has to choose. A common choice of this parameter is $c = 4.685$: Reference 1 notes that this value results…
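The piecewise formula above translates directly into code. A small sketch, using the same $c = 4.685$ default; the near-zero check below relies on the fact that expanding the bracket gives $\ell(r) \approx r^2/2$ for small $|r|$, which is the quadratic behavior mentioned earlier.

```python
import numpy as np

def tukey_loss(r, c=4.685):
    """Tukey biweight loss: quadratic near the origin,
    constant at c^2/6 once |r| exceeds c."""
    r = np.asarray(r, dtype=float)
    out = np.full(r.shape, c**2 / 6.0)          # the |r| > c branch
    mask = np.abs(r) <= c
    out[mask] = (c**2 / 6.0) * (1.0 - (1.0 - (r[mask] / c) ** 2) ** 3)
    return out
```

Note the contrast with Huber loss: here the loss saturates at the constant $c^2/6$, so increasing a residual beyond $c$ adds nothing, whereas Huber loss keeps growing linearly.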
