Looking for that balance in our lives…


When there is a set of objective functions to be optimized simultaneously, we call it a multi-objective optimization problem. These functions can be conflicting, for instance, maximizing the production of a factory while minimizing the pollution it emits, or maximizing your happiness and your parents'. The optimal solutions thus need to be chosen in the presence of trade-offs between two or more conflicting objectives. This is why multi-objective optimization is sometimes called Pareto optimization: the "Pareto optimal" solutions are allocations of resources in which no objective can be improved without making at least one other objective worse off.

There usually exist multiple Pareto optimal solutions to each multi-objective optimization problem, so even “solving a multi-objective optimization problem” can be defined in different ways.

Linear Scalarization

One method is to convert the problem with multiple objectives into a single-objective optimization problem. Optimal solutions to the single-objective problem are then guaranteed to be Pareto optimal solutions to the multi-objective problem, provided the weights are strictly positive. A common way to carry out this conversion is linear scalarization, in which the objectives are added up using weights. Each weight expresses the importance attached to one objective in a set of mutually exclusive and collectively exhaustive objectives, and the weights have to add up to one. The objective function is thus:
max_{x \in X} \sum_{i=1}^n w_i f_i(x),
with:
w_i as the weight for objective i, with \sum_{i=1}^n w_i = 1;
i = 1, 2, ..., n (n being the number of objectives);
f_i(x) as each objective function to be optimized.
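As a minimal sketch, linear scalarization over a small finite feasible set can be carried out directly in Python; the two objectives, the weights, and the candidate set below are invented purely for illustration:

def f1(x):
    # e.g. production of the factory (to be maximized)
    return 10 * x

def f2(x):
    # e.g. negated pollution, so maximizing f2 means minimizing pollution
    return -(x ** 2)

weights = [0.7, 0.3]    # w_i >= 0, summing to 1
X = [0, 1, 2, 3, 4, 5]  # a finite feasible set of decisions

def scalarized(x):
    # the weighted sum \sum_i w_i f_i(x)
    return weights[0] * f1(x) + weights[1] * f2(x)

best = max(X, key=scalarized)
print(best, scalarized(best))

Sweeping the weight vector recovers different Pareto optimal solutions, though for non-convex Pareto fronts the weighted sum cannot reach every one of them.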

L_2 Distance-based Scalarization

Another method of scalarization utilizes a norm function; a simple example is the Euclidean (L_2) norm.
Let U(f) be the Utopian point of a multi-objective (minimization) problem f(x) = (f_0(x), \ldots, f_{m-1}(x)):
U(f) = (\min_x\{f_0(x)\}, \min_x\{f_1(x)\}, \ldots, \min_x\{f_{m-1}(x)\})
Then, the minimization of distance to U(f) becomes:
argmin_x\left\{\sqrt{\sum_{i=0}^{m-1} (U(f)_i - f_i(x))^2}\right\}
Since the square root is monotonically increasing, we do not need to compute it; minimizing the squared distance gives the same argmin:
argmin_x\left\{\sum_{i=0}^{m-1} (U(f)_i - f_i(x))^2\right\}
Finally, we assign weights w_i \geq 0, i \in \{0, 1, \ldots, m-1\}, \sum_{i=0}^{m-1} w_i = 1 to the objectives in order to target different parts of the Pareto front (the set of non-dominated points):
argmin_x\left\{\sum_{i=0}^{m-1} w_i(U(f)_i - f_i(x))^2\right\}
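A minimal sketch of this in Python, again over an invented finite candidate set with two toy objectives to be minimized:

def objectives(x):
    # f(x) = (f_0(x), f_1(x)), both to be minimized
    return (x ** 2, (x - 3) ** 2)

X = [0, 1, 2, 3, 4]
m = 2

# the Utopian point: the component-wise minimum of each objective over X
U = tuple(min(objectives(x)[i] for x in X) for i in range(m))

w = [0.5, 0.5]  # weights w_i >= 0 summing to 1

def weighted_sq_dist(x):
    f = objectives(x)
    return sum(w[i] * (U[i] - f[i]) ** 2 for i in range(m))

best = min(X, key=weighted_sq_dist)
print(U, best, objectives(best))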

Cutting Plane Methods

When each objective in the set to be combined is linear, the combined objective will also be linear.

In this case, if we hold the same assumptions as for a linear program but fractional answers are not acceptable, we have an integer program, and we may consider applying the cutting plane method.

To save time in finding the integral extreme points or optimal solutions, we aim to form a stronger formulation from the one we are given to reduce the feasible region to its convex hull, which for a feasible set X is the smallest convex set that contains X. In the diagram below, the red outline is the convex hull.
[Figure: a set of feasible integer points with their convex hull outlined in red]
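As an illustration (the point set below is made up), scipy can compute the convex hull of a small set of integer points directly:

import numpy as np
from scipy.spatial import ConvexHull

# a made-up set of feasible integer points in the plane
points = np.array([(0, 0), (1, 0), (2, 1), (1, 2), (0, 1), (1, 1)])
hull = ConvexHull(points)

# hull.vertices indexes the corner points of the convex hull;
# the interior point (1, 1) does not appear among them
print(points[hull.vertices])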

If we cannot find compact convex hull formulations in general, we may try to approximate the convex hull efficiently for each given instance. This leads us to the idea of the cutting plane method.

We define the following:
Linear programming relaxation: the linear programming relaxation of a 0-1 integer program is obtained by replacing the constraint that each variable must be 0 or 1 with the weaker constraint that each variable belongs to the interval [0, 1].
Valid Inequalities: an inequality \pi^Tx \leq \pi_0 is a valid inequality for X \subset \mathbb{R}^n if \pi^T x \leq \pi_0 \forall x \in X.
Separation Problem: the separation problem is, a) given some X \subset \mathbb{R}^n and a point x^* \in \mathbb{R}^n, to decide whether x^* \in conv(X), and b) if not, to find a valid inequality for X that is violated by x^*.
Cut: A cut is a valid inequality that separates the current fractional solution x^*.

The cutting plane algorithm in its general form can then be formulated as follows:
Step 1: Solve the linear programming relaxation. Get x^*.
Step 2: If x^* is integral, stop; else find a valid inequality that excludes x^* and add it to the formulation.
Step 3: Go to Step 1.
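Here is a hedged sketch of this loop on a tiny hand-built instance: maximize x + y subject to 2x + 2y \leq 3 with x, y \in \{0, 1\}. The relaxation gives a fractional optimum with x + y = 1.5, and the single cut added below, x + y \leq 1, is derived by hand (a Chvátal-Gomory rounding of the constraint); a real solver would generate such cuts automatically from the simplex tableau:

from scipy.optimize import linprog

c = [-1.0, -1.0]            # linprog minimizes, so negate to maximize x + y
A = [[2.0, 2.0]]            # 2x + 2y <= 3
b = [3.0]
bounds = [(0, 1), (0, 1)]   # the LP relaxation of x, y in {0, 1}

def is_integral(x, tol=1e-6):
    return all(abs(v - round(v)) <= tol for v in x)

for _ in range(10):  # a few rounds suffice on this toy instance
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)  # Step 1: solve the relaxation
    x_star = res.x
    if is_integral(x_star):  # Step 2: integral, so stop
        break
    # otherwise add a valid inequality violated by x^*:
    # 2x + 2y <= 3 implies x + y <= 1.5, and integrality
    # lets us round this down to the cut x + y <= 1
    A.append([1.0, 1.0])
    b.append(1.0)            # Step 3: go back to Step 1

print(x_star)                # an optimal integral solution, e.g. [1. 0.]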

The main challenges are then a) to find valid cuts (inequalities), b) to find cuts that will quickly lead to the optimal integer solution, and c) to find a method for generating cuts that is guaranteed to terminate.

One method for generating general cuts is the method of logical cuts.

The idea behind logical cuts is to come up with a valid inequality through logical reasoning about the relations between the variables. Consider the following 0-1 knapsack set, where B denotes the set of binary integers, i.e. B = \{0, 1\}:
X =\{x \in B^5 : 3x_1 - 4x_2 +2x_3 - 3x_4 +x_5 \leq -2\}
Reasoning logically, we can find something that always has to hold about the variables x_1, \ldots, x_5: if x_2 = x_4 = 0, then the LHS = 3x_1 + 2x_3 + x_5 \geq 0, which cannot be \leq -2. This assignment is therefore infeasible, so all feasible solutions satisfy the valid inequality x_2 + x_4 \geq 1.
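A quick brute-force check in Python (illustrative only) confirms this inequality over all of X:

from itertools import product

# enumerate all 32 binary vectors and keep the feasible ones
feasible = [
    x for x in product((0, 1), repeat=5)
    if 3 * x[0] - 4 * x[1] + 2 * x[2] - 3 * x[3] + x[4] <= -2
]

# x[1] and x[3] correspond to x_2 and x_4 in the 1-based notation above
assert all(x[1] + x[3] >= 1 for x in feasible)
print(len(feasible), "feasible points; all satisfy x_2 + x_4 >= 1")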

Other methods for general cuts include cover inequalities and Chvátal-Gomory cuts, which can likewise help simplify an integer multi-objective optimization problem for us.
