Karmarkar's Algorithm is an interesting method for solving linear programming problems, first proposed by Narendra Karmarkar in his 1984 paper "A New Polynomial-Time Algorithm for Linear Programming" (Combinatorica 4). While the better-known Simplex Method traverses the edges of the polytope formed by the problem's constraints, Karmarkar's algorithm is an interior point method: it travels through the interior of the polytope toward the solution, and can be considerably more efficient on large problems. Karmarkar showed that his algorithm runs in polynomial time, whereas the Simplex Method can take exponential time in (unlikely) worst cases.

The general linear programming problem can be expressed as follows:

Maximize the linear functional (the objective function):

```
     T
z = c x
```

subject to the constraints:

```
Ax = b,
x ≥ 0
```

where c and x are n-dimensional vectors, b is an m-dimensional vector, and A is a full-rank m × n matrix with m < n, and c is nonzero. The LP problem must have an interior feasible solution x0, i.e. a point that satisfies Ax0 = b with every component strictly positive (it need not maximize the objective function).
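As a concrete instance of this standard form, here is a tiny illustrative problem (the numbers are mine, not from the original): maximize cTx over a 2-simplex, starting from a strictly interior feasible point. A quick sketch in Python:

```python
# A tiny LP in the standard form above (illustrative numbers):
# maximize c^T x subject to Ax = b, x >= 0.
# A is 1 x 3 (m < n), so the feasible set is a 2-dimensional simplex.
A = [[1.0, 1.0, 1.0]]   # one equality constraint: x1 + x2 + x3 = 1
b = [1.0]
c = [3.0, 1.0, 2.0]     # objective coefficients

x0 = [1/3, 1/3, 1/3]    # strictly interior feasible point: Ax0 = b, x0 > 0

# Verify feasibility of x0.
assert abs(sum(a * x for a, x in zip(A[0], x0)) - b[0]) < 1e-12
assert all(x > 0 for x in x0)

# The optimum sits at the vertex (1, 0, 0), where c^T x = 3.
print(sum(ci * xi for ci, xi in zip(c, x0)))  # c^T x0 ≈ 2.0
```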

One formulation of the algorithm is based on continuation methods and homotopy, and is slightly different from Karmarkar's original formulation of the method. We start with a feasible solution, i.e. a point x0 that satisfies the constraints but may not necessarily maximize the objective function. The algorithm works by moving from x0 to a sequence of other feasible points xp such that the objective function is always increasing, i.e.:

```
 T p+1    T p
c x    > c x
```

The algorithm tries to find a curve x(t) within the set of all feasible solutions starting at x0 and leading to an optimal feasible solution that maximizes the objective function.

These requirements lead to the following restrictions on the curve x(t):

1. x(t) ≥ 0 for t ≥ 0
2. Ax(t) = b for t ≥ 0
3. cTx(t) is increasing for all t ≥ 0

We can obtain a curve by solving this initial-value differential equation:

x' = f(x)

where x(0) = x0

Now we need to find a suitable f(x). To satisfy the first condition above, we must arrange that whenever a component xi approaches zero, its derivative x'i(t) also approaches zero. This can be done by introducing D(x), a diagonal matrix whose diagonal elements are the components xi, and assuming that for some bounded function G:

f(x) = D(x)G(x)

Then, componentwise, our initial value problem becomes:

x'i = xiGi(x)

To satisfy the second requirement, it is enough to say that Ax' = 0. Since x' = f = DG, we must require that ADG = 0. This can be arranged by using G = PH, where H is some function, and P is the orthogonal projection onto the null space of AD.
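To see this projection in action, here is a hedged sketch for the single-constraint case (m = 1), where AD is just a row vector and P reduces to subtracting the component along it; the function name and data are my own, not from the original:

```python
# Orthogonal projection P onto the null space of AD, sketched for the
# single-constraint case (m = 1).  Here AD is a single row vector, so
# projecting out its direction needs only a scalar division.
# A_row, x, v are illustrative (hypothetical) data.

def project_null_AD(A_row, x, v):
    """Apply P (projection onto null(AD)) to a vector v, for m = 1."""
    ad = [a * xi for a, xi in zip(A_row, x)]        # the row vector AD
    scale = sum(vi * adi for vi, adi in zip(v, ad)) / \
            sum(adi * adi for adi in ad)
    return [vi - scale * adi for vi, adi in zip(v, ad)]

A_row = [1.0, 1.0, 1.0]
x = [0.2, 0.5, 0.3]
v = [1.0, -2.0, 0.5]

pv = project_null_AD(A_row, x, v)
# Check the defining property: AD (Pv) = 0.
residual = sum(a * xi * pvi for a, xi, pvi in zip(A_row, x, pv))
assert abs(residual) < 1e-12
```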

To satisfy the final requirement, H should be chosen so that cTx(t) is increasing, i.e.:

```
    d   T        T         T        T            T
0 < __ c x(t) = c x'(t) = c f(x) = c D(x)G(x) = c DPH
    dt
```

A convenient choice for H is Dc, so the final version of our initial value problem is:

```
x' = D(x)P(x)D(x)c
```

where P(x) (obtained through some complicated contortions that are left as an exercise for the reader) is the orthogonal projector:

```
            T        T -1
P = I - (AD) (AD (AD) )  (AD)
```

In practice, the product P(x)Dc is obtained not by this somewhat complicated formula, which involves the inversion of a (possibly) very large matrix, but rather by solving for z in:

```
       T       2
AD (AD) z = A D c
```

which can be done by Gauss-Seidel iteration or another iterative method for solving systems of linear equations, and by noting that:

```
               T
PDc = Dc - (AD) z
```

The resulting initial value problem does not need to be solved very accurately. A variant of Euler's Method can be used to obtain a sequence of vectors x0, x1, ..., via the iteration:

```
 k+1    k        k
x    = x  + δ f(x )
             k
```

The value to use for δk is about 90% of the largest step for which xk+1 ≥ 0 still holds; backing off from the boundary keeps the iterates strictly in the interior.

AT&T employed Karmarkar at the time (and apparently still does as of this writing, in its new incarnation as Lucent), and realized immediately that his discovery could be of immense importance. AT&T promptly filed for and in 1988 was granted US Patent No. 4,744,028, "Methods and apparatus for efficient resource allocation", which became the subject of ongoing controversy over the issue of software patents. It seemed that AT&T had just been given a patent on what was essentially a mathematical theorem! Prior art, however, could arguably be shown to exist: Philip E. Gill and others showed that Karmarkar's algorithm was equivalent to Newton barrier methods developed in the 1960s for nonlinear programming, and at least one well-known book from the time described the same methods specifically for linear programming. However, the techniques needed to make these methods practical were not known until sparse matrix methods were developed in the late 1970s and early 1980s. Ironically, the patent turned out to be of very limited commercial value to AT&T, which sold only two hideously expensive systems (US\$8.9 million!) running software implementing the algorithm. Fortunately, the patent expires within a couple of years (2005), but the issues surrounding the patenting of Karmarkar's Algorithm have not gone away.

http://www.csr.uvic.ca/~wendym/courses/445/interiorpoint.txt

Adler, Ilan, Narendra Karmarkar, Mauricio G.C. Resende, and Geraldo Veiga, "An Implementation of Karmarkar's Algorithm for Linear Programming", http://www.research.att.com/~mgcr/doc/ipldas.ps.Z
