Isotonic regression

Problem


Given values $a_1, \ldots, a_n$ and their positive weights $w_1, \ldots, w_n$, approximate them as closely as possible by $y_1, \ldots, y_n$, subject to a set of constraints of the kind $y_i \le y_j$:

Given
values $a_1, \ldots, a_n$,
weights $w_1, \ldots, w_n$ such that $w_i > 0$ for all $i$,
a set of constraints $E \subseteq \{1, \ldots, n\} \times \{1, \ldots, n\}$,
minimize $\sum_{i=1}^{n} w_i (y_i - a_i)^2$ with respect to $y_1, \ldots, y_n$ subject to $y_i \le y_j$ for all $(i, j) \in E$

If all weights are equal to 1, the problem is called unweighted isotonic regression, otherwise it is called weighted isotonic regression.

The constraint graph $G = (\{1, \ldots, n\}, E)$ must be acyclic. For example, the constraints $y_1 \le y_2$ and $y_2 \le y_1$ cannot exist simultaneously.

Example:

Minimize $\sum_{i=1}^{4} w_i (y_i - a_i)^2$ subject to constraints such as
$y_1 \le y_2$
$y_1 \le y_3$
$y_2 \le y_4$

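To make this concrete, consider a minimal illustrative instance (the numbers are chosen only for this example): with values $a = (3, 1)$, unit weights, and the single constraint $y_1 \le y_2$, the unconstrained optimum $y = (3, 1)$ violates the constraint, so the constraint must be active at the optimum, and the common value is the weighted mean of the two points:

$\min_{y_1 \le y_2} \left[ (y_1 - 3)^2 + (y_2 - 1)^2 \right] = 2, \quad \text{attained at } y_1 = y_2 = \tfrac{3 + 1}{2} = 2.$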

Linear order


Of particular interest is linear order isotonic regression.

Given
values $a_1, \ldots, a_n$,
weights $w_1, \ldots, w_n$ such that $w_i > 0$ for all $i$,
minimize $\sum_{i=1}^{n} w_i (y_i - a_i)^2$ with respect to $y_1, \ldots, y_n$ subject to $y_1 \le y_2 \le \ldots \le y_n$
Figure: Linear order isotonic regression for 100 points.

For linear order isotonic regression, there is a simple linear-time algorithm, called the Pool Adjacent Violators Algorithm (PAVA), described below.

If all weights are equal to 1, the problem is called unweighted linear ordering isotonic regression.

Linear order isotonic regression can be thought of as approximating a given series of one-dimensional observations with a non-decreasing function. It is similar to smoothing splines, with the difference that monotonicity, rather than smoothness, is used to remove noise from the data.


Example

Figure: Linear order isotonic regression for $10^6$ points.
  • Left: the points $(x_i, y_i)$, where $y_i$ is binary; the probability that $y_i = 1$ is determined by a logistic function of $x_i$. Only 1000 points are shown.
  • Right: the result of isotonic regression (black graph) versus the logistic function (red graph). The logistic function was recovered with high accuracy.

Linear order isotonic regression as a model


Sometimes, linear ordering isotonic regression is applied to a set of observations $(x_i, y_i)$, where $x$ is the explanatory variable and $y$ is the dependent variable. The observations are sorted by their $x$'s, then the isotonic regression is applied to the $y$'s, with the additional constraint that observations with equal $x$'s receive equal fitted values.

Other variants


Non-Euclidean metrics


Sometimes other metrics are used instead of the Euclidean metric, for example the weighted $L_p$ metrics:

$\sum_{i=1}^{n} w_i \, |y_i - a_i|^p$

or the unweighted $L_p$ metrics:

$\sum_{i=1}^{n} |y_i - a_i|^p$
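As a sketch of how a non-Euclidean metric can be fitted in practice, assuming the gpava function from the isotone package (described in the Implementations section below) accepts a solver argument such as weighted.median; this is an assumption about the interface, not something stated above:

library("isotone")                     # provides gpava (see the isotone section below)
y <- sort(rnorm(1000)) + rnorm(1000)
# L1 (least absolute deviations) isotonic fit via a weighted-median solver;
# the solver argument is an assumption about the gpava interface
y.iso.l1 <- gpava(y, solver = weighted.median)$x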

Points on a grid


Sometimes, values are placed on a 2D or higher-dimensional grid, and the fitted value must be non-decreasing along each dimension. For a 2D grid:

Minimize $\sum_{i,j} w_{i,j} (y_{i,j} - a_{i,j})^2$ with respect to $y$ subject to $y_{i,j} \le y_{i+1,j}$ and $y_{i,j} \le y_{i,j+1}$ for all $i, j$.
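A sketch of a grid fit in R, assuming the biviso function from the Iso package (mentioned in the Implementations section below) accepts a matrix of values:

library("Iso")                          # provides biviso (see the Iso package below)
# noisy values on a 20 x 20 grid with an increasing trend along both dimensions
a <- outer(1:20, 1:20, "+") / 10 + matrix(rnorm(400, sd = 0.5), nrow = 20)
# fit values that are non-decreasing along both rows and columns;
# the exact biviso interface is an assumption, not part of the original text
a.iso <- biviso(a)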

Algorithms


Pool Adjacent Violators Algorithm


The Pool Adjacent Violators Algorithm (PAVA) is a linear-time (and linear-memory) algorithm for linear order isotonic regression.

Preliminary considerations


The algorithm is based on the following theorem:

Theorem: In an optimal solution of the linear order problem, if $a_i > a_{i+1}$, then $y_i = y_{i+1}$.

Proof: Suppose the opposite, i.e. $y_i < y_{i+1}$. Then for a sufficiently small $\varepsilon > 0$, we can set

$y_i \leftarrow y_i + w_{i+1} \varepsilon$
$y_{i+1} \leftarrow y_{i+1} - w_i \varepsilon$

which decreases the sum $\sum_k w_k (y_k - a_k)^2$ without violating the constraints (the derivative of the sum with respect to $\varepsilon$ at $\varepsilon = 0$ is $2 w_i w_{i+1} \big[ (y_i - y_{i+1}) - (a_i - a_{i+1}) \big] < 0$). Therefore, our original solution was not optimal. Contradiction.

Since $y_i = y_{i+1}$ at the optimum, we can combine both points $i$ and $i+1$ into a new point with weight $w_i + w_{i+1}$ and value $\frac{w_i a_i + w_{i+1} a_{i+1}}{w_i + w_{i+1}}$.

Figure: One step of the PAVA algorithm.

However, after combining two points $i$ and $i+1$ into the new point, this new point can violate the constraint with respect to the preceding point (its value may be smaller than $y_{i-1}$). In this case it should be combined with the $(i-1)$-th point. If the combined point still violates its constraint, it should be combined with the previous point, and so on.
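As an illustration (numbers chosen only for this example): pooling the points $(a_2, w_2) = (5, 1)$ and $(a_3, w_3) = (3, 2)$ gives the value $\frac{1 \cdot 5 + 2 \cdot 3}{1 + 2} = \frac{11}{3} \approx 3.67$ with weight $3$. If the preceding point has $a_1 = 4 > 3.67$ (with $w_1 = 1$), the cascade continues and all three points are pooled to $\frac{1 \cdot 4 + 3 \cdot \frac{11}{3}}{1 + 3} = \frac{15}{4} = 3.75$.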


Algorithm


Input:

array of n values: initial_values[1] ... initial_values[n]
array of n weights: weights[1] ... weights[n]

Output:

array of n values result[1] ... result[n], minimizing the sum over i of weights[i] * (initial_values[i] - result[i]) ** 2 subject to result[1] <= result[2] <= ... <= result[n]

The algorithm:

Initialize:
    pooled_value[1] = initial_values[1]
    pooled_weight[1] = weights[1]
    num_segments = 1
    segment_end[0] = 0
    segment_end[1] = 1

For current_index = 2 to n do:
    num_segments++
    pooled_value[num_segments] = initial_values[current_index]
    pooled_weight[num_segments] = weights[current_index]
    while num_segments > 1 and pooled_value[num_segments] < pooled_value[num_segments - 1] do:
        pooled_value[num_segments - 1] =
            (pooled_weight[num_segments] * pooled_value[num_segments] +
             pooled_weight[num_segments - 1] * pooled_value[num_segments - 1]) /
            (pooled_weight[num_segments] + pooled_weight[num_segments - 1])
        pooled_weight[num_segments - 1] += pooled_weight[num_segments]
        num_segments--
    segment_end[num_segments] = current_index

For segment_index = 1 to num_segments do:
    for value_index = segment_end[segment_index - 1] + 1 to segment_end[segment_index] do:
        result[value_index] = pooled_value[segment_index]


Here segment_end defines which old points each new (pooled) point corresponds to: the segment_index-th pooled value is assigned to the original indices segment_end[segment_index - 1] + 1 through segment_end[segment_index].
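Below is a minimal R sketch of the same algorithm, written as a direct translation of the pseudocode above rather than a reference implementation; the function name pava_fit is chosen for illustration:

pava_fit <- function(initial_values, weights = rep(1, length(initial_values))) {
  n <- length(initial_values)
  pooled_value <- numeric(n)
  pooled_weight <- numeric(n)
  segment_end <- integer(n + 1)   # segment_end[k + 1] plays the role of segment_end[k] above
  pooled_value[1] <- initial_values[1]
  pooled_weight[1] <- weights[1]
  num_segments <- 1
  segment_end[1] <- 0
  segment_end[2] <- 1
  for (current_index in seq_len(n)[-1]) {
    num_segments <- num_segments + 1
    pooled_value[num_segments] <- initial_values[current_index]
    pooled_weight[num_segments] <- weights[current_index]
    # pool adjacent segments while the monotonicity constraint is violated
    while (num_segments > 1 &&
           pooled_value[num_segments] < pooled_value[num_segments - 1]) {
      total_weight <- pooled_weight[num_segments] + pooled_weight[num_segments - 1]
      pooled_value[num_segments - 1] <-
        (pooled_weight[num_segments] * pooled_value[num_segments] +
         pooled_weight[num_segments - 1] * pooled_value[num_segments - 1]) / total_weight
      pooled_weight[num_segments - 1] <- total_weight
      num_segments <- num_segments - 1
    }
    segment_end[num_segments + 1] <- current_index
  }
  result <- numeric(n)
  for (segment_index in seq_len(num_segments)) {
    idx <- (segment_end[segment_index] + 1):segment_end[segment_index + 1]
    result[idx] <- pooled_value[segment_index]
  }
  result
}

For example, pava_fit(c(1, 3, 2)) returns 1.0 2.5 2.5.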

Arbitrary case algorithms


In the arbitrary case, the problem can be solved as a quadratic program. Faster specialized algorithms exist; see:

  • Maxwell, WL and Muckstadt, JA (1985), "Establishing consistent and realistic reorder intervals in production-distribution systems", Operations Research 33, pp. 1316-1341.
  • Spouge J, Wan H, and Wilbur WJ (2003), "Least squares isotonic regression in two dimensions", J. Optimization Theory and Apps. 117, pp. 585-605.

Implementations


isoreg


The function isoreg performs unweighted linear ordering isotonic regression. It is part of the standard stats package, so it does not require installing any additional packages. For many simple cases, it is enough.

Example of usage:

x=sort(rnorm(10000))    # explanatory variable (sorted)
y=x+rnorm(10000)        # noisy observations of a non-decreasing signal
y.iso=isoreg(y)$yf      # yf holds the fitted non-decreasing values
plot(x,y,cex=0.2)
lines(x,y.iso,lwd=2,col=2)

The isoreg function also implements linear ordering isotonic regression as a model:

x=rnorm(10000)
y=x+rnorm(10000)
y.iso=isoreg(x,y)$yf
plot(x,y,cex=0.2)
lines(sort(x),y.iso,lwd=2,col=2)

The package Iso contains three functions:

  • pava - linear ordering isotonic regression, weighted or unweighted.
  • biviso - 2-d isotonic regression
  • ufit - unimodal order (increasing then decreasing)

Example of usage:

install.packages("Iso") # should be done only once
library("Iso") # should be done once per session
x=sort(rnorm(10000))
y=x+rnorm(10000)
y.iso=pava(y)
plot(x,y,cex=0.2); lines(x,y.iso,lwd=2,col=2)

isotone


This is the most advanced package. It contains two functions:

  • gpava - linear ordering isotonic regression, weighted or unweighted, for an arbitrary metric. Similarly to isoreg, gpava can implement linear ordering isotonic regression as a model.
  • activeSet - general isotonic regression for an arbitrary metric.

Example of usage:

install.packages("isotone") # should be done only once
library("isotone") # should be done once per session
x=sort(rnorm(10000))
y=x+rnorm(10000)
y.iso=gpava(y)$x
plot(x,y,cex=0.2); lines(x,y.iso,lwd=2,col=2)

Comparison of speed


Since all three functions implement the PAVA algorithm in one form or another, their speed can be compared.

In such a comparison, for unweighted linear order isotonic regression (LOIR) with large $n$, isoreg should be used; for weighted LOIR and for unweighted LOIR with small $n$, pava should be used. As for gpava, it should be used only for non-Euclidean metrics.

Also, the implementations of weighted simple ordering isotonic regression in R are far from perfect.
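A minimal sketch of such a timing comparison (the results depend heavily on the machine and on the data size):

library("Iso")           # pava
library("isotone")       # gpava
y <- sort(rnorm(10000)) + rnorm(10000)
system.time(isoreg(y))   # stats::isoreg, unweighted only
system.time(pava(y))     # Iso::pava
system.time(gpava(y))    # isotone::gpava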


Java


Weka, a free software collection of machine learning algorithms for data mining tasks developed at the University of Waikato, contains, among others, an isotonic regression classifier. The classifier is deeply integrated into Weka's class hierarchy and cannot be used without Weka.

Python


While scikit-learn implements isotonic regression, Andrew Tulloch made a Cython implementation of the algorithm for linear ordering isotonic regression, which is 14 to 5000 times faster, depending on the data size. See Speeding up isotonic regression in scikit-learn by 5,000x (https://tullo.ch/articles/speeding-up-isotonic-regression/). If you just want the code, see https://gist.github.com/ajtulloch/9447845#file-_isotonic-pyx.

Usage


Calibrating the output for categorical probabilities


For more information, see "Predicting Good Probabilities With Supervised Learning", by Alexandru Niculescu-Mizil and Rich Caruana.

Most supervised learning algorithms, for example random forests[1], boosted trees, SVMs, etc., are good at predicting the most probable category, but not its probability. Some of them tend to overestimate high probabilities and underestimate low probabilities. (One notable exception is neural networks, which themselves produce well-calibrated predictions.)

In order to obtain the correct probability, unweighted linear order isotonic regression can be used:

$P = f(m)$

where

$P$ is the final prediction for the probability,
$m$ is the prediction given by the model,
$f$ is a non-decreasing function.

To find f, the model's predictions on the validation set are matched with the output variable, and the isotonic regression is applied to the pairs.
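A minimal R sketch of this calibration step; the simulated data and the name f are illustrative only:

m <- runif(5000)                           # model predictions on the validation set
y <- rbinom(5000, 1, plogis(5*m - 3))      # observed 0/1 outcomes (simulated here)
fit <- isoreg(m, y)                        # non-decreasing fit of outcomes vs. predictions
f <- approxfun(sort(m), fit$yf, rule = 2)  # calibration map; rule = 2 clamps outside the range
f(0.7)                                     # calibrated probability for a model output of 0.7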

An alternative approach is to pass $m$ through a sigmoid function:

$P = \frac{1}{1 + e^{A m + B}}$

where the parameters $A$ and $B$ are fitted on the validation set.

This is called Platt calibration.

For small data sets, Platt calibration is better than isotonic regression. Starting at 200-5000 observations, the isotonic regression slightly outperforms Platt calibration. Note also that this kind of isotonic regression is simpler and faster than the logistic regression required by Platt calibration.

Calibration of recommendation models


The problem


The following example is for recommendation engines, whose general purpose is to predict the grade given by user $u$ to item $i$.

We have a model $M$ that predicts the residuals $r$ of another model, trained on the training set. For cases with a small number of users or items, the model is overfitted, and its predictions need to be relaxed (shrunk toward zero):

$\hat{r}_{ui} = f(s_{ui}) \cdot M_{ui}$

where

$\hat{r}_{ui}$ is the final prediction for user $u$, item $i$,
$M_{ui}$ is the prediction given by the model,
$f$ is a non-decreasing function,
$s_{ui}$ may be either $n_u$ (the number of observations with user $u$ in the training set), or $n_i$ (the number of observations with item $i$ in the training set), or some combination of the two, depending on the nature of the model. The letter $s$ stands for support.

Solution


Given the set of triplets $(s_i, r_i, M_i)$ from the validation database, we have:

$\sum_i \big( r_i - f(s_i)\, M_i \big)^2 = \sum_i M_i^2 \left( \frac{r_i}{M_i} - f(s_i) \right)^2$

Therefore, for the $i$-th observation we should set weight $w_i = M_i^2$ and value $a_i = r_i / M_i$. The triplets are then sorted with respect to $s_i$, and the isotonic regression is applied to them.

When a model directly predicts the rating or acceptance rate, rather than the residual of some other model, we can define $r$ as the difference between this model and some simpler model (e.g. the average rating for the item plus the average rating correction for the user). This simpler model is called the basic model.
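A rough R sketch of the procedure above; the simulated data are illustrative, and passing the weights as the second argument of Iso's pava is an assumption about its interface:

library("Iso")
n <- 2000
s <- rpois(n, 20)                      # support, e.g. number of training observations per user
M <- runif(n, 0.5, 2)                  # the model's predicted residuals (kept away from zero)
r <- M * plogis(s / 10 - 1) + rnorm(n, sd = 0.1)   # simulated observed residuals
ord <- order(s)                        # sort the triplets by support
w <- M[ord]^2                          # weights  w_i = M_i^2
a <- (r / M)[ord]                      # values   a_i = r_i / M_i
f_hat <- pava(a, w)                    # weighted isotonic estimate of f at the sorted supports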

Analysis and calibrating input variables


Often, we know that an output variable depends monotonically on an input variable, but otherwise we have very little prior knowledge about the dependence. In this case, isotonic regression can provide important clues about the dependence.


Non-metric multidimensional scaling


A multidimensional scaling (MDS) algorithm, given a matrix of item–item similarities, places the items into an N-dimensional space such that similar items are closer to each other than dissimilar items. This is especially useful for visualization purposes.

There are several types of MDS, depending on the relationship between similarity and distance, as well as on the definition of distance. For non-metric MDS, the distance can be any monotone function of the similarity. The function is typically found using isotonic regression; see Non-metric multidimensional scaling on Wikipedia. Non-metric multidimensional scaling is implemented as the isoMDS function in R's MASS library.
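A short usage sketch with R's built-in eurodist distance matrix (illustrative only):

library("MASS")                        # provides isoMDS (Kruskal's non-metric MDS)
fit <- isoMDS(eurodist, k = 2)         # place the cities in 2 dimensions
plot(fit$points, type = "n")
text(fit$points, labels = attr(eurodist, "Labels"), cex = 0.7)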

Advantages, disadvantages, and post-processing


Advantages:

  • Non-parametric
  • Fast (linear time)
  • Simple

Disadvantages:

  • one or more fitted values at the ends of the interval are sometimes noisy
  • it compares favorably with other approaches only if the sample is large enough (see the comparison with Platt calibration above)

These drawbacks can be mitigated by smoothing the result of the isotonic regression. In this way, we can get the best of both worlds (smoothness and monotonicity).
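One possible post-processing sketch in R: a running mean of a non-decreasing sequence is still non-decreasing, so a simple moving average smooths the steps without destroying monotonicity (the window size 51 is arbitrary):

x <- sort(rnorm(10000))
y <- x + rnorm(10000)
y.iso <- isoreg(y)$yf
k <- 51
y.smooth <- stats::filter(y.iso, rep(1/k, k), sides = 2)   # centered moving average (NA at the ends)
plot(x, y, cex = 0.2)
lines(x, y.smooth, lwd = 2, col = 2)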

References
  • Isotonic Regression Algorithms, by Quentin F. Stout - a review of the best currently available algorithms
  • Predicting Good Probabilities With Supervised Learning, by Alexandru Niculescu-Mizil and Rich Caruana
  • Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods, by John C. Platt