Algorithms for telecommunications: class #4

# Linear programming

Topics:

1. Introduction: what is LP?
2. A quick word on methods to solve LP problems.
   1. geometric method
   2. simplex
   3. branch & bound method
   4. duality
3. Modeling networking problems with LP

Small bibliography:

• Linear programming (chapters 1, 2, 17); Vašek Chvátal.
• Graphs & algorithms; Michel Gondran, Michel Minoux. (appendix: LP; chapter: branch and bound techniques).
• Linear programming; George Dantzig. Nice historical presentation.

## What is LP?

Example 1.1: a diet problem. Polly needs to get from her meals at least 2000 kcal of energy, 55 g of protein and 800 mg of calcium. She chooses from 6 kinds of food:

| Limit (servings) | Meal | Serving size (g) | Energy (kcal) | Protein (g) | Ca (mg) | Price (cents) |
|---|---|---|---|---|---|---|
| 4 | oatmeal | 28 | 110 | 4 | 2 | 3 |
| 3 | chicken | 100 | 205 | 32 | 12 | 24 |
| 2 | eggs | ? | 160 | 13 | 54 | 13 |
| 8 | milk | ? | 160 | 8 | 285 | 9 |
| 2 | cherry pie | ? | 420 | 4 | 22 | 20 |
| 2 | pork with beans | ? | 260 | 14 | 80 | 19 |

She decides to impose per-day limits on servings.

Problem: over all possible combinations of servings, which one has the minimal price and satisfies Polly's needs?

Small question: give a valid combination and its price.

Taking everything from the menu:

Energy: 4 * 110 + 3 * 205 + 2 * 160 + 8 * 160 + 2 * 420 + 2 * 260 = 4015
Price: 4 * 3 + 3 * 24 + 2 * 13 + 8 * 9 + 2 * 20 + 2 * 19 = 260

For most problems, trial and error is not helpful.

### Formalization of the problem

Defining the constraints.

$x_1, x_2 \dots x_6$ are the number of servings of each kind of meal.

(1)
\begin{eqnarray} 0 \leq & x_1 & \leq 4 \\ 0 \leq & x_2 & \leq 3 \\ 0 \leq & x_3 & \leq 2 \\ 0 \leq & x_4 & \leq 8 \\ 0 \leq & x_5 & \leq 2 \\ 0 \leq & x_6 & \leq 2 \end{eqnarray}
(2)
\begin{eqnarray} \textrm{Energy:} & 110 x_1 + 205 x_2 + 160 x_3 + 160 x_4 + 420 x_5 + 260 x_6 & \geq & 2000 \\ \textrm{Protein:} & 4 x_1 + 32 x_2 + 13 x_3 + 8 x_4 + 4 x_5 + 14 x_6 & \geq & 55 \\ \textrm{Calcium:} & 2 x_1 + 12 x_2 + 54 x_3 + 285 x_4 + 22 x_5 + 80 x_6 & \geq & 800 \\ \end{eqnarray}

And the quantity we want to minimise is the price:

(3)
\begin{equation} P = 3 x_1 + 24 x_2 + 13 x_3 + 9 x_4 + 20 x_5 + 19 x_6 \end{equation}

Then our linear program is to minimise P, subject to the constraints.
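As a sanity check (my own sketch, not part of the course), the integer-servings version of this program is small enough to brute-force: the serving limits (1) give at most $5 \times 4 \times 3 \times 9 \times 3 \times 3 = 4860$ combinations. Note that the real LP also allows fractional servings, which this enumeration does not explore; only the nutrient and price columns of the table are needed.

```python
from itertools import product

# Data from the table: (name, limit, energy, protein, calcium, price) per serving.
foods = [
    ("oatmeal",         4, 110,  4,   2,  3),
    ("chicken",         3, 205, 32,  12, 24),
    ("eggs",            2, 160, 13,  54, 13),
    ("milk",            8, 160,  8, 285,  9),
    ("cherry pie",      2, 420,  4,  22, 20),
    ("pork with beans", 2, 260, 14,  80, 19),
]

def feasible(x):
    """Check the nutrition constraints (2) for a vector of servings x."""
    energy  = sum(n * f[2] for n, f in zip(x, foods))
    protein = sum(n * f[3] for n, f in zip(x, foods))
    calcium = sum(n * f[4] for n, f in zip(x, foods))
    return energy >= 2000 and protein >= 55 and calcium >= 800

def price(x):
    """Objective function (3): total price in cents."""
    return sum(n * f[5] for n, f in zip(x, foods))

# Enumerate every integer combination within the serving limits (1).
best = min(
    (x for x in product(*(range(f[1] + 1) for f in foods)) if feasible(x)),
    key=price,
)
print(best, price(best))
```

This does considerably better than the 260-cent "take everything" combination above.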

## Linear programming

Terminology: "program" comes from a military term meaning planning or scheduling for training, logistical supplies, or deployment of men.

LP is a branch of optimization that consists in finding the best solution to a constrained problem.

For each optimization problem, there is an associated decision problem, which answers the question: «does a feasible solution exist?»

In general, if $c_1, c_2 \dots c_n$ are real numbers, then the function of real variables $x_1, x_2 \dots x_n$ defined by:

(4)
\begin{align} f(x_1, x_2 \dots x_n) = c_1 x_1 + c_2 x_2 + \dots + c_n x_n = \sum^n_{j = 1} c_j x_j \end{align}

is a linear function.

If we have a real number b, then the equation

(5)
\begin{align} f(x_1, x_2 \dots x_n) = b \end{align}

is a linear equation and the inequalities

(6)
\begin{eqnarray} f(x_1, x_2 \dots x_n) & \leq & b \\ f(x_1, x_2 \dots x_n) & \geq & b \end{eqnarray}

are called linear inequalities.

Finally, a linear programming problem is the problem of maximising (or minimising ) a linear function subject to a finite number of linear constraints (linear equalities or linear inequalities).

## Standard form

(7)
\begin{align} max \sum^n_{j = 1} c_j x_j \end{align}

subject to:

(8)
\begin{eqnarray} \sum^n_{j = 1} a_{ij} x_j & \leq & b_i & (i \in \{ 1 \dots m \} ) \\ x_j & \geq & 0 & (j \in \{ 1 \dots n \} ) \end{eqnarray}

Two characteristics of the standard form:

• all the constraints are linear inequalities.
• the last n of the n + m constraints are non-negativity constraints (the n variables have to be non-negative).

The linear function that is to be maximised (or minimised) is called the objective function.

A set of numbers $x_1, x_2 \dots x_n$ that satisfies all the constraints is called a feasible solution. A feasible solution that maximises the objective function is called an optimal solution.

Not all problems have an optimal solution. For example:

(9)
\begin{eqnarray} max(3 x_1 - x_2)\ \textrm{subject to:}\\ x_1 + x_2 & \leq & 2 & (1)\\ -2 x_1 - 2x_2 & \leq & -10 & (2)\\ x_1, x_2 & \geq & 0 \end{eqnarray}

In this case there is no feasible solution:

(10)
\begin{eqnarray} (2) & -2 x_1 - 2x_2 \leq -10 & \Rightarrow & x_1 + x_2 \geq 5 \\ (1) & & & x_1 + x_2 \leq 2 \\ & \therefore \textrm{contradiction} \end{eqnarray}
(11)
\begin{eqnarray} max(x_1 - x_2)\ \textrm{subject to:}\\ -2x_1 + x_2 & \leq & -1 & (1) \\ -x_1 - 2x_2 & \leq & -2 & (2) \\ x_1, x_2 & \geq & 0 \end{eqnarray}

In this case, the problem is unbounded. Looking at points of the form $(x_1, x_2) = (M, 0)$, we have:

(12)
\begin{align} \left\{ \begin{array}{rcl} -2 x_1 \leq -1 & \Rightarrow & x_1 \geq \frac 1 2 \\ -x_1 \leq -2 & \Rightarrow & x_1 \geq 2 \end{array} \right. \end{align}
(13)
\begin{align} \therefore (x_1, x_2) = (M, 0), M \geq 2\ \textrm{is a solution}. \end{align}

Since the objective value at $(M, 0)$ is $M$, the objective can be made arbitrarily large, so there is no optimal solution.

### Problem (modelling)

A meat packing plant produces 480 hams, 400 pork bellies and 230 picnics per day. Each of these products can be sold either fresh or smoked. The total number of hams, bellies and picnics that can be smoked in one day is 420. In addition, 250 products can be smoked on overtime at a higher cost. The net profits are:

| | Fresh | Smoked | Smoked (OT) |
|---|---|---|---|
| Ham | 8 | 14 | 11 |
| Belly | 4 | 12 | 7 |
| Picnic | 4 | 13 | 9 |

They want to maximise the profit.

1. formulate as an LP problem
2. How many variables do you have? Can you do better?
3. Write the problem in standard form.

Formulation as LP:

(14)
\begin{align} \left\{ \begin{array}{rcl} H_f + H_s + H_{sOT} & = & 480 \\ B_f + B_s + B_{sOT} & = & 400 \\ P_f + P_s + P_{sOT} & = & 230 \\ \\ H_s + B_s + P_s & \leq & 420 \\ H_{sOT} + B_{sOT} + P_{sOT} & \leq & 250 \end{array} \right. \end{align}
(15)
\begin{array}{rcl} \textrm{Profit} & = & 8 H_f + 14 H_s + 11 H_{sOT} \\ & + & 4 B_f + 12 B_s + 7 B_{sOT} \\ & + & 4 P_f + 13 P_s + 9 P_{sOT} \end{array}

To write it in standard form and reduce the number of variables, we can remove the three equations and express one variable per product as a function of the other two. It was not clear in class whether this is always valid; he didn't want to get into details (!)
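As a sketch of how the substitution works (not worked out in class): substituting $H_f = 480 - H_s - H_{sOT}$, $B_f = 400 - B_s - B_{sOT}$ and $P_f = 230 - P_s - P_{sOT}$ eliminates three variables and the three equations. The non-negativity of the eliminated variables turns into three new inequalities,

\begin{align} H_s + H_{sOT} \leq 480, \quad B_s + B_{sOT} \leq 400, \quad P_s + P_{sOT} \leq 230 \end{align}

and, using the profit coefficients from equation (15), the objective becomes a constant plus a linear function of the six remaining variables:

\begin{align} \textrm{Profit} = 6360 + 6 H_s + 3 H_{sOT} + 8 B_s + 3 B_{sOT} + 9 P_s + 5 P_{sOT} \end{align}

where $6360 = 8 \cdot 480 + 4 \cdot 400 + 4 \cdot 230$ is the profit of selling everything fresh.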

# A quick word on how to solve LP problems

## Difficulty

Consider the problem of assigning 70 men to 70 jobs. We have a value $v_{ij}$ which is the benefit of person i doing job j. What is the number of combinations? $70! > 10^{100}$

The problem is to maximise the profit. One computer doing one million operations per second can compute $3600 \times 24 \times 365 \times 10^6 \approx 3 \times 10^{13}$ operations per year. If your computer had been running since the big bang ($15 \times 10^9$ years ago), it would have computed only $45 \times 10^{22}$ operations by now!
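Both estimates are easy to check (a quick sanity check, not from the lecture):

```python
import math

# 70! really does exceed 10^100 (it has 101 digits).
assert math.factorial(70) > 10**100

# Operations per year at one million operations per second.
ops_per_year = 3600 * 24 * 365 * 10**6
print(ops_per_year)                 # about 3.15e13

# Operations since the big bang (~15e9 years ago).
print(ops_per_year * 15 * 10**9)
```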

## Methods for real numbers

### Geometric method

(16)
\begin{eqnarray} max && x_1+x_2 \\ x_1 & \leq & 2 \\ x_2 & \leq & 5 \\ x_1, x_2 & \geq & 0 \end{eqnarray}

In this case we can draw the regions of the plane corresponding to these inequalities (using $x_1, x_2$ as coordinates) and find their intersection: here, the rectangle with corners $(0, 0)$ and $(2, 5)$. Then we find the line corresponding to the largest value of the objective function that still touches this area, in this case $x_1 + x_2 = 7$.

Important remark: the space of feasible solutions is convex and an optimal solution (if there exists one) is on the border.

Limitation: drawing does not scale beyond two (or three) dimensions.

### Simplex method

In higher dimensions, the space of feasible solutions is a polyhedron. An optimal solution is always on the border of the polyhedron (shouldn't this be proved?); the idea is then to move along the edges of the polyhedron defined by the constraints from one vertex to another, improving the value of the objective function, and to stop when no improvement is possible.

Note: if an optimal solution exists, the simplex method always finds it. It is not clear how this works in detail; he pointed to the book.

It is very efficient in practice (roughly linear; used by solvers such as CPLEX), but the theoretical complexity is not as good: in bad cases it can take time exponential in the number of constraints and variables.
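To make the vertex-to-vertex idea concrete, here is a minimal tableau simplex (my own sketch for intuition, not what CPLEX does) for problems already in standard form with $b \geq 0$, so that the all-slack basis is feasible:

```python
# Minimal tableau simplex for: max c.x  s.t.  Ax <= b, x >= 0, assuming b >= 0
# (the all-slack basis is then a feasible starting vertex).
def simplex(c, A, b):
    m, n = len(A), len(c)
    # Tableau: constraint rows with slack variables, last row = negated objective.
    T = [A[i] + [1 if j == i else 0 for j in range(m)] + [b[i]] for i in range(m)]
    T.append([-cj for cj in c] + [0] * m + [0])
    basis = [n + i for i in range(m)]        # start from the slack basis
    while True:
        # Entering variable: most negative reduced cost.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= 0:
            break                            # optimal: no improving direction
        # Leaving variable: minimum ratio test.
        rows = [i for i in range(m) if T[i][col] > 1e-9]
        if not rows:
            raise ValueError("unbounded problem")
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        # Pivot on (row, col): move to the adjacent vertex.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [a - f * p for a, p in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]                      # optimal point and value

# The example from the geometric method: optimum at (2, 5) with value 7.
x, value = simplex([1, 1], [[1, 0], [0, 1]], [2, 5])
print(x, value)
```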

In 1979, the first polynomial-time method to solve LP problems over the real numbers was found: Khachiyan's ellipsoid method.

## Methods for discrete variables

Note: the teacher didn't explain much here, but these problems are usually NP-complete!

There are two kinds of methods:

• enumeration algorithms: branch and bound
• approximation algorithms: ?

### Branch and bound

For a binary linear problem, you describe your set of solutions as a tree in which you have a branch for every possible decision; the number of leaves will be $2^n$.
You then apply the simplex method over the reals (the LP relaxation), which gives you bounds to prune the tree.

Nothing more was explained!
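To give at least a flavour of the pruning idea, here is my own sketch (made-up data, not from the lecture) on a 0/1 problem with a single linear constraint, i.e. a knapsack, where the relaxation bound can be computed greedily by allowing one fractional item:

```python
# Branch and bound on a tiny 0/1 knapsack: max v.x  s.t.  w.x <= cap, x in {0,1}^n.
def bound(values, weights, cap, i, cur_v, cur_w):
    """Upper bound from item i on: fill greedily, allowing a fractional last item.
    Valid only if items are sorted by decreasing value density."""
    v, w = cur_v, cur_w
    for j in range(i, len(values)):
        if w + weights[j] <= cap:
            v += values[j]
            w += weights[j]
        else:
            v += values[j] * (cap - w) / weights[j]   # fractional item
            break
    return v

def branch_and_bound(values, weights, cap):
    # Sort by value density so the greedy relaxation bound is valid.
    order = sorted(range(len(values)), key=lambda j: -values[j] / weights[j])
    values = [values[j] for j in order]
    weights = [weights[j] for j in order]
    best = 0
    stack = [(0, 0, 0)]            # (next item, current value, current weight)
    while stack:
        i, v, w = stack.pop()
        if i == len(values):
            best = max(best, v)
            continue
        if bound(values, weights, cap, i, v, w) <= best:
            continue               # prune: the relaxation cannot beat the incumbent
        stack.append((i + 1, v, w))                                # branch x_i = 0
        if w + weights[i] <= cap:
            stack.append((i + 1, v + values[i], w + weights[i]))   # branch x_i = 1
    return best

print(branch_and_bound([10, 13, 7], [5, 8, 4], 10))
```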

# Modelling networking problems

## Problem 1: maximum independent set

In a simple graph, an independent set is a subset of vertices that are pairwise not adjacent. The maximum independent set problem is to find the largest independent set of a graph.

Let us note:

(17)
\begin{equation} G = (V, E), |V| = n, |E| = m \end{equation}

We are looking for an independent set S.

Variables: one per vertex: $\forall 1 \leq i \leq n, x_i \in \{ 0, 1\}$ with:

(18)
\begin{align} \left\{ \begin{array}{rl} x_i = 1 & \textrm{if node}\ i\ \textrm{is in}\ S \\ x_i = 0 & \textrm{otherwise} \end{array} \right. \end{align}

Constraints: one constraint per edge:

(19)
\begin{align} \forall (u, v) \in E, x_u + x_v \leq 1 \end{align}

Objective function:

(20)
\begin{align} max \sum^n_{i = 1} x_i \end{align}

### Exercise

Write the linear program for the graph shown as example:

(21)
\begin{align} V = \{a, b, c, d, e, f\}, E = \{(a,b), (b,c), (c,d), (d,e), (e,f), (f,b)\} \end{align}
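This graph is small enough ($2^6$ subsets of vertices) to check the answer by brute force (a sanity check of my own, not part of the exercise):

```python
from itertools import combinations

# The exercise graph.
V = ["a", "b", "c", "d", "e", "f"]
E = [("a","b"), ("b","c"), ("c","d"), ("d","e"), ("e","f"), ("f","b")]

def independent(S):
    # Constraint (19): no edge may have both endpoints selected.
    return all(not (u in S and v in S) for u, v in E)

# Largest independent subset over all subsets of V.
best = max((set(S) for r in range(len(V) + 1)
            for S in combinations(V, r) if independent(set(S))), key=len)
print(len(best), sorted(best))
```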

## Problem 2: minimum edge covering

Let $G=(V,E)$ be a graph. The problem is to find a subset $S \subseteq E$ of minimal size covering all the vertices of the graph (every vertex must be an endpoint of at least one edge of $S$).

Variables: one variable per edge:

(22)
\begin{align} \forall (uv) \in E, \left\{ \begin{array}{rl} x_{uv} = 1 & \textrm{if}\ uv\ \textrm{is in the edge cover} \\ x_{uv} = 0 & \textrm{otherwise} \end{array} \right. \end{align}

Objective function:

(23)
\begin{align} min \sum_{(uv) \in E} x_{uv} \end{align}

Constraints: one constraint per vertex:

(24)
\begin{align} \forall u \in V, \sum_{v | (uv) \in E} x_{uv} \geq 1 \end{align}
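Reusing the exercise graph from Problem 1, the minimum edge cover can also be checked by brute force over edge subsets (again a sanity check of my own):

```python
from itertools import combinations

# The exercise graph from Problem 1.
V = ["a", "b", "c", "d", "e", "f"]
E = [("a","b"), ("b","c"), ("c","d"), ("d","e"), ("e","f"), ("f","b")]

def covers(S):
    # Constraint (24): every vertex must be an endpoint of some chosen edge.
    return all(any(u == x or v == x for u, v in S) for x in V)

# Smallest covering subset, trying subset sizes in increasing order.
cover = min((S for r in range(1, len(E) + 1)
             for S in combinations(E, r) if covers(S)), key=len)
print(len(cover), cover)
```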

## Problem 3: maximum flow

We have a directed graph $G = (V, E)$ with a capacity function $c: E \rightarrow \mathbb{R}^+$. Let s and t be the source and sink vertices. A flow is a function $f: E \rightarrow \mathbb{R}$ such that $0 \leq f_e \leq c_e$ for every edge $e$.

Constraint: the sum of the flow arriving at one vertex (that is not s or t) has to be equal to the flow leaving the vertex.

(25)
\begin{align} \forall v \in V \setminus \{s, t\}, \quad \sum_{u | (u,v) \in E} f_{uv} = \sum_{w | (v,w) \in E} f_{vw} \end{align}

Objective function: maximise the net flow leaving the source:

(26)
\begin{align} max \sum_{v | (s,v) \in E} f_{sv} - \sum_{u | (u,s) \in E} f_{us} \end{align}
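The LP above describes the problem declaratively; for intuition, here is a sketch of the classic augmenting-path (Ford–Fulkerson / Edmonds–Karp) algorithm, which was not covered in the lecture, on a made-up capacity graph:

```python
from collections import deque, defaultdict

# Edmonds-Karp: repeatedly augment along a shortest path in the residual graph.
def max_flow(edges, s, t):
    # residual[u][v] = remaining capacity from u to v.
    residual = defaultdict(lambda: defaultdict(int))
    for u, v, c in edges:
        residual[u][v] += c
        residual[v]                      # make sure the head node exists too
    total = 0
    while True:
        # BFS for an augmenting path with positive residual capacity.
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total                 # no augmenting path left: flow is maximum
        # Find the bottleneck capacity along the path.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(residual[u][v] for u, v in path)
        # Augment; reverse residual edges allow later cancellation of flow.
        for u, v in path:
            residual[u][v] -= delta
            residual[v][u] += delta
        total += delta

# Made-up example graph: (tail, head, capacity).
edges = [("s","a",3), ("s","b",2), ("a","b",1), ("a","t",2), ("b","t",3)]
print(max_flow(edges, "s", "t"))
```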

# Problems for training:

• maximum matching
• maximum clique
• minimal cut
page revision: 24, last edited: 04 Dec 2009 11:44
Unclear, need to ask the teachers, since most is obviously derived works.