Table of Contents
  1. Practical information
  2. Curriculum / reading list
  3. Terminology
    1. Problems and solutions
    2. Graphs
  4. Wicked mathemagics
    1. Inequalities
    2. Sums
  5. Complexity Classes
    1. P
    2. NP (nondeterministic polynomial time)
    3. NP-Complete
    4. NP-Hard
  6. Linear programming (LP)
    1. Integer Program (IP)
    2. 0-1 Integer Linear Program (01ILP)
  7. Formal languages (2013)
    1. Alphabet
    2. Word
    3. Language
    4. $\text{Time}_A(x)$ and $\text{Space}_A(x)$
    5. Time complexity of an algorithm
  8. Complexity classes (2013)
    1. Time
    2. Space
  9. Cost measurement (2013)
    1. Uniform-cost measurement
    2. Logarithmic-cost measurement
  10. Decision problems (2013)
    1. Famous NP-complete problems
      1. Hamiltonian cycle problem (HC)
      2. Satisfiability problem (SAT)
      3. Clique problem (CLIQUE)
      4. Vertex cover problem (VCP)
      5. Set cover problem (SCP)
      6. Travelling salesman problem (TSP)
  11. Pseudo-Polynomial Time Algorithms (2013)
    1. Example: Primality test
  12. Parametrized complexity (2013)
    1. Fixed-parameter polynomial time algorithms
      1. Example: SAT
  13. Approximation algorithms (2013)
    1. Polynomial-time approximation scheme (PTAS)
    2. Fully polynomial-time approximation scheme (FPTAS)
  14. Local Search (2013)
    1. Straight up regular local search
    2. Variable-depth search
    3. Simulated annealing
    4. Tabu search
    5. Randomized Tabu search
    6. Intensification vs Diversification
  15. Branch-and-bound (BB) (2013)
  16. Genetic algorithms (2013)
    1. Initialization
    2. Selection
    3. Genetic operators
      1. Crossover/recombination
      2. Mutation
      3. (regrouping)
      4. (colonization-extinction)
      5. (migration)
    4. Termination
  17. Linear programming (LP) (2013)
    1. Integer programming (IP)
    2. Binary linear programming (01LP)
    3. Relaxation to LP
  18. Layman's guides (2013)
    1. Parametrized complexity applications
      1. Pseudo-polynomial-time algorithms
      2. Parametrized complexity
    2. Local search algorithm design
    3. Genetic algorithm template
    4. Proving that a decision problem is undecidable
    5. The Primal-Dual schema
    6. Christofides' algorithm
    7. Converting problems
      1. VCP to SCP
  19. Appendix: list of algorithms featured in the textbook (2013)
  20. Appendix: proofs (2013)
  21. Curriculum / reading list (previous semesters)
    1. 2013, spring

TDT4125: Algorithm Construction, Advanced Course (2013-2014)

Tags:
  • algkons

Practical information

Note: sections marked with "(2013)" have not been updated to reflect any possible changes in the curriculum between 2013 and 2014.

  • The exam is held on the 24$^{\text{th}}$ of May, 2014 at 0900.
  • All printed or hand-written materials may be used during the exam (including the textbook, and even this compendium!). Citizen SR-270X is also permitted.
  • Old exams
    • 2013V Exam

Curriculum / reading list

This year (2014), the curriculum is chapters 1 through 8 in The Design of Approximation Algorithms, by David P. Williamson and David B. Shmoys.

The topics covered are:

  • An introduction to Approximation Algorithms
  • Greedy Algorithms and Local Search
  • Rounding Data and Dynamic Programming
  • Deterministic Rounding of Linear Programs
  • Random Sampling and Randomized Rounding of Linear Programs
  • Randomized Rounding of Semidefinite Programs
  • The Primal-Dual Method
  • Cuts and Metrics

Terminology

Problems and solutions

Feasible:

A feasible solution to a problem is legal with regards to the requirements of the problem or the problem's environment. A feasible solution to the knapsack problem could be any subset of items whose total weight does not exceed the maximum allowed weight.

Graphs

Assume we have a graph $G = (V, E)$.

  • $V$ is the set of the graph's vertices/nodes.
    • $|V|$ is the number of vertices.
    • The degree of a vertex is the number of edges that have it as one of their endpoints.
  • $E$ is the set of the graph's edges/arcs.
    • $|E|$ is the number of edges.
    • An edge can be written in the form $(i, j)$, where $i$ and $j$ are vertices of the graph. I.e. $(i, j) \in E$, $i \in V$, $j \in V$.
    • An edge $e$ is said to be incident on a vertex $v$ if one of its endpoints is $v$.
  • A path between two nodes is a series of edges that lead from one node to another. A single edge is a path. "$(i, j), (j, k)$" is a path.

A graph $G$ is complete if there is an edge from each and every vertex to all the other vertices in the graph: iff for every pair of vertices $i$ and $j$ in $V$, there exists an edge $(i, j)$ in $E$. A complete graph has $\sum_{i=1}^{|V|-1}i = \frac{|V|(|V|-1)}{2}$ edges.
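
As a quick sanity check of the formula, with $|V| = 5$ it gives

$$\sum_{i=1}^{4} i = 1 + 2 + 3 + 4 = 10 = \frac{5 \cdot 4}{2}$$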

A directed graph $G$ is strongly connected if there exists a path from each and every vertex to all other vertices in the graph. That is, if it is possible to get from any node $i$ in the graph to any other node $j$ in the graph. If $G$ is undirected and the rest of the description still holds, it is simply called connected.

Two vertices $i$ and $j$ in a graph are adjacent if there exists an edge $(i, j)$ in the graph. So in a complete graph, all vertices are adjacent.

A graph can be described as metric if, for any two vertices $i, j \in V$, the length of any path of size $\geq 2$ between them is greater than or equal to the length of the path of size $1$ between the same two vertices. In other words, the direct route from A to C is not longer than a route from A to B to C. I.e., for all $i$, $j$, $k \in V$, $d_{ik} \leq d_{ij} + d_{jk}$. Typically, "metric" is used to describe a problem instance (such as metric instances of TSP).

A graph is Eulerian if it contains an Eulerian circuit: a closed walk that traverses each edge in the graph exactly once. A connected graph is Eulerian if and only if every node has an even degree.

A perfect matching of a set of nodes is a collection of edges that contains each node in the set exactly once. Suppose that $O = \{v_1, v_2, \ldots, v_k\}$ with $k$ even. A perfect matching of $O$ looks something like this: $(v_1, v_2), (v_3, v_4), ..., (v_{k-1}, v_k)$.

A connected component of an undirected graph $G$ (sometimes just "component") is a maximal connected subgraph $S \subseteq G$: it is connected, and no vertex outside $S$ has an edge to a vertex in $S$.

Wicked mathemagics

This section details some of the mathematical "tricks" used in TDoAA to prove stuff that you might not remember from whatever part of elementary or high school we were supposed to learn it. TDoAA spends a whole bunch of time proving the performance guarantee of some given algorithms. This usually involves stipulating stuff about the optimal solution for a problem and then using some black voodoo magic to show that there exists an upper bound on the cost of the solution produced by a given algorithm, typically at some factor of the cost of the optimal solution. Unless you're a real whiz at mathemagics, the simple arithmetic stuff might confound you more than the assumptions that are made about the algorithms or problems themselves. This section hopes to alleviate some of that confusion.

Inequalities

$$A = B \implies (A \geq B) \wedge (A \leq B)$$ If A is equal to B, that is the same as saying that A is both greater than or equal to B and less than or equal to B. If we know that $A = B$, we can pick and use either inequality if it is convenient, for example when we need it to prove something.

$$A \geq B \geq C \implies A \geq C$$ Yeah, transitivity is still a thing.

Sums

If $\sum_{i=1}^{k}x_{i} = n$, where $x_{i} \geq 0$, then there exists some $x_{i}$ whose value is greater than or equal to the average value of the elements in the sum, $\frac{n}{k}$. Again, with more symbols: $$\sum_{i=1}^{k}x_{i} = n \implies \exists i : x_{i} \geq \frac{n}{k}$$

It also implies that there is some $x_{j}$ whose value is less than or equal to the average value. You'd be surprised how often this pops up.

Say you have some sum $\sum_{i=1}^{k}x_{i} = n$. We know that, for some $i$, $x_i \geq \frac{n}{k}$, right? Now let's assume that $x_k$ is the element in the sum with the highest value. If we subtract this element from the sum, what can we say about the sum? Since $x_k$ is the highest valued element of the sum, its value must be greater than or equal to $\frac{n}{k}$. If $A, B \geq 0$ and we subtract A from B, and we know that A has some minimum value $A_{min}$, then $(B-A) \leq (B - A_{min})$, because $A \geq A_{min}$.

$$\sum_{i=1}^{k-1}x_{i} \;\leq\; \sum_{i=1}^{k}x_i - \frac{1}{k} \sum_{i=1}^{k}x_i \;=\; \left(1-\frac{1}{k}\right) \sum_{i=1}^{k}x_i \;\leq\; \sum_{i=1}^{k} x_i$$

"In what scenario would this ever be useful", one wonders. Oh, who knows, really. Like, if we knew that $n \leq \alpha \cdot OPT$ for some problem then we could use this to show that $n(1-\frac{1}{k}) \leq (1-\frac{1}{k}) \cdot \alpha \cdot OPT$ (because transitivity is still a thing). Which is done on page 195 of TDoAA for the multiway cut problem and the proposed minimum-cut-based algorithm.

Complexity Classes

The complexity classes include (but are not limited to): P, NP, NP-Complete and NP-Hard. These four classes contain decision problems, although not all NP-Hard problems are decision problems. Note that all problems in P are also in NP. A decision problem is a problem, or question, for which the answer is either "yes" or "no." Finding the shortest tour through a weighted graph is not a decision problem, but "does there exist a tour with total cost less than $k$ in this graph?" is a decision problem. The distinction is real, despite the fact that answering the question could potentially require us to find the shortest tour of the graph.

P

P contains those decision problems that can be solved in polynomial time. Given an instance of a problem in P, it is theoretically possible to produce a solution in polynomial time. However, a problem being in P does not imply that a concrete algorithm is known that solves it in polynomial time, only that such an algorithm exists. See Polynomial-recognition of the Robertson-Seymour theorem on Wikipedia for an example of this.

All problems in P are also in NP.

NP (nondeterministic polynomial time)

The class NP contains those decision problems for which the instances where the answer is "yes" can be verified in polynomial time.

The decision variant of the knapsack problem is in NP. It asks if there exists some combination of items whose total value is at least $C$ and whose total weight is at most $B$. Given a combination of items, we can easily verify whether it meets the requirements by summing the items' values and weights and comparing the results to $C$ and $B$.

NP-Complete

A problem $B$ is NP-complete if $B$ is in NP, and for every problem $A$ in NP, there is a polynomial-time reduction from $A$ to $B$.

A polynomial-time reduction from $A$ to $B$ takes as its input an instance of $A$ and produces an instance of $B$ in polynomial time. The produced instance of $B$ has the property that it is a "Yes" instance of $B$ if and only if a "Yes" instance of $A$ was input.

We can reduce an instance of the Hamiltonian cycle decision problem ("Does this undirected graph $G = (V, E)$ have a Hamiltonian cycle?"), which is NP-Complete, to an instance of the Traveling Salesman Problem (TSP). TSP takes as its input a complete graph $G'$. We can transform $G$ to $G'$ in polynomial time: set the cost of each edge in $G'$ that is also in $G$ to 1, then add edges to $G'$ until it is complete and set the cost of these to $|V|$. Now, if we have some algorithm that solves instances of TSP by producing the shortest tour possible in the graph, we can use it to determine whether or not $G$ has a Hamiltonian cycle: if the cost of the tour returned by our TSP algorithm is equal to $|V|$, then there exists a Hamiltonian cycle in $G$. However, if the cost of the tour is greater than $|V|$, which it will be if it contains an edge not in $G$, there is no Hamiltonian cycle in $G$. The instance of the Hamiltonian cycle problem has been reduced to an instance of TSP where we ask if there exists some tour of cost $c \leq |V|$. (Example taken from page 36 of TDoAA.)
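
To make the reduction concrete, here is a small sketch in Python. The `tsp_solver` callback is hypothetical; it stands in for any exact TSP algorithm that returns the cost of an optimal tour.

```python
def hc_to_tsp(n, edges):
    """Reduce a Hamiltonian cycle instance (n vertices, list of edges) to a
    TSP instance: a complete cost matrix where edges of G cost 1 and the
    edges added to make the graph complete cost n."""
    cost = [[n] * n for _ in range(n)]   # default cost n for non-edges of G
    for i, j in edges:
        cost[i][j] = cost[j][i] = 1      # edges of G cost 1
    return cost


def has_hamiltonian_cycle(n, edges, tsp_solver):
    """G has a Hamiltonian cycle iff the optimal tour of the reduced
    instance costs exactly n (= |V|)."""
    return tsp_solver(hc_to_tsp(n, edges)) == n
```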

NP-Hard

"NP-hard problems are at least as hard as the hardest problems in NP." A problem $A$ is NP-hard if there is a polynomial-time algorithm for an NP-complete problem $B$ when the algorithm has oracle access to $A$. If an algorithm has "oracle access to $A$", it can solve an instance of $A$ in a single instruction that is completed in polynomial time. The term "NP-hard" can be applied to either optimization and decision problems.

The optimization version of the knapsack problem is NP-hard: given oracle access to it, we can solve the (NP-complete) decision version simply by checking whether the value of the optimal solution is at least the value the decision version asks about.

Linear programming (LP)

"LP" can refer to both linear programming and a linear program.

A linear program is formulated in terms of some number of decision variables that represent some decision that needs to be made. The variables are constrained by linear inequalities and equalities. Any assignment of real numbers to the variables such that all of the constraints are satisfied is a feasible solution. A typical linear program also has an objective function whose value is dictated by the decision variables, and the purpose is to find a feasible solution that maximizes or minimizes the value of the objective function.

Linear programs can be solved in polynomial time.

An LP can be presented in the form:

$$\begin{array}{ll} \text{minimize or maximize} & f \\ \text{subject to} & \text{some constraint on the decision variables} \\ & \text{another constraint} \end{array}$$

Where $f$ is the objective function.

An LP formulation of the knapsack problem:

$$\begin{array}{ll} \text{maximize} & \sum_{i=1}^{n} c_{i}x_{i} \\ \text{subject to} & \sum_{j=1}^{n} w_{j}x_{j} \leq B \\ & x_{j} \geq 0 \end{array}$$

Where $c_i$ is the value of item $i$, $x_i$ is the decision variable for item $i$ that indicates whether the item is included in the solution, $w_{i}$ is the weight of item $i$, $n$ is the total number of items in the problem instance, and $B$ is the maximum allowed weight. The objective function is $\sum_{i=1}^{n} c_{i}x_{i}$: the sum over all items of each item's value times its decision variable.

Would it make more sense if we restricted $x_j$ to be either 0 or 1 instead of a real number greater than zero? Yes it would. But then it would be a 0-1 integer linear program and NP-hard.
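
As a minimal sketch of how such an LP could be solved in practice (assuming SciPy is available; the item values, weights and capacity below are made up): `scipy.optimize.linprog` minimizes, so the objective is negated, and the bounds $0 \leq x_j \leq 1$ correspond to the fractional relaxation of the 0-1 program discussed below.

```python
from scipy.optimize import linprog

# Hypothetical knapsack instance: values c, weights w, capacity B.
c = [60, 100, 120]
w = [10, 20, 30]
B = 50

# linprog minimizes, so negate the values to maximize sum(c_i * x_i).
result = linprog(c=[-v for v in c],          # objective: -sum(c_i * x_i)
                 A_ub=[w], b_ub=[B],         # constraint: sum(w_i * x_i) <= B
                 bounds=[(0, 1)] * len(c))   # 0 <= x_i <= 1

print("fractional solution x:", result.x)
print("LP optimum:", -result.fun)
```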

Integer Program (IP)

Synonyms: Integer Linear Program (ILP)

An integer program is like a linear program except that the decision variables are restricted to being integers rather than real numbers.

Integer programs are NP-hard.

0-1 Integer Linear Program (01ILP)

Synonyms: Binary LP, 01LP, Binary ILP.

A 0-1 integer linear program is like a linear program except that the decision variables are restricted to being either 0 or 1.

In the LP formulation of the knapsack problem above we would have to replace the constraint $x_{j} \geq 0$ with $x_{j} \in \{0, 1\}$.

Formal languages (2013)

Alphabet

Any non-empty, finite set is called an alphabet. Every element of an alphabet $\Sigma$ is called a symbol of $\Sigma$.

Example alphabets:

  • $\Sigma_{\text{bool}} = \left\{0, 1\right\}$
  • $\Sigma_{\text{latin}} = \left\{a, b, c, d, e , ... , z \right\}$
  • $\Sigma_{\text{logic}} = \left\{0, 1, (,), \wedge, \vee, \neg, x \right\}$

Word

Let $\Sigma$ be an alphabet. A word over $\Sigma$ is any finite sequence of symbols of $\Sigma$. The empty word $\Lambda$ is the only word consisting of zero symbols. The set of all words over the alphabet $\Sigma$ is denoted by $\Sigma^*$.

Language

Let $\Sigma$ be an alphabet. Every set $L \subseteq \Sigma^*$ is called a language over $\Sigma$.

$\text{Time}_A(x)$ and $\text{Space}_A(x)$

Let $\Sigma_{\text{input}}$ and $\Sigma_{\text{output}}$ be alphabets. Let $A$ be an algorithm that realizes a mapping from $\Sigma_{\text{input}}^*$ to $\Sigma_{\text{output}}^*$. For every $x \in\Sigma_{\text{input}}^*$, $\text{Time}_A(x)$ denotes the time complexity of the computation $A$ on the input $x$, and $\text{Space}_A(x)$ denotes the space complexity of the computation $A$ on $x$.

Time complexity of an algorithm

Let $\Sigma_{\text{input}}$ and $\Sigma_{\text{output}}$ be two alphabets. Let $A$ be an algorithm that computes a mapping from $\Sigma_{\text{input}}^*$ to $\Sigma_{\text{output}}^*$. The worst case time complexity of $A$ is a function $\text{Time}_A: (N - \left\{ {0} \right\} ) \to N$ defined by $\text{Time}_A(n) = \max\left\{\text{Time}_A(x) \mid x \in \Sigma_{\text{input}}^{\,n}\right\}$ for every positive integer $n$, where $\Sigma_{\text{input}}^{\,n}$ denotes the set of words of length $n$ over $\Sigma_{\text{input}}$.

Complexity classes (2013)

Time

| Complexity class | Model of computation | Resource constraint |
| --- | --- | --- |
| DTIME($f(n)$) | Deterministic Turing machine | Time $f(n)$ |
| P | Deterministic Turing machine | Time $\text{poly}(n)$ |
| EXPTIME | Deterministic Turing machine | Time $2^{\text{poly}(n)}$ |
| NTIME($f(n)$) | Non-deterministic Turing machine | Time $f(n)$ |
| NP | Non-deterministic Turing machine | Time $\text{poly}(n)$ |
| Co-NP | TBA | TBA |
| NEXPTIME | Non-deterministic Turing machine | Time $2^{\text{poly}(n)}$ |

Space

| Complexity class | Model of computation | Resource constraint |
| --- | --- | --- |
| DSPACE($f(n)$) | Deterministic Turing machine | Space $f(n)$ |
| L | Deterministic Turing machine | Space $O(\log n)$ |
| PSPACE | Deterministic Turing machine | Space $\text{poly}(n)$ |
| EXPSPACE | Deterministic Turing machine | Space $2^{\text{poly}(n)}$ |
| NSPACE($f(n)$) | Non-deterministic Turing machine | Space $f(n)$ |
| NL | Non-deterministic Turing machine | Space $O(\log n)$ |
| NPSPACE | Non-deterministic Turing machine | Space $\text{poly}(n)$ |
| NEXPSPACE | Non-deterministic Turing machine | Space $2^{\text{poly}(n)}$ |

Cost measurement (2013)

Time efficiency estimates depend on what is defined to be a single step, which takes a single time unit to execute. Two cost models are generally used:

Uniform-cost measurement

Every machine operation is assigned a single constant time cost. This means that an addition between two integers, as an example, is assumed to take the same amount of time regardless of the size of the integers. This is often the case in practical systems for sensibly sized integers. This is the cost measurement used throughout Hromkovič.

Logarithmic-cost measurement

Cost is assigned to the number of bits involved in each operation. This is more cumbersome to use, and is therefore usually only applied when necessary.

Decision problems (2013)

A decision problem is a question with a yes-or-no answer, depending on the input parameters. Formally, it is represented as a triple $(L,U,\Sigma)$ where $\Sigma$ is an alphabet, $L$ and $U$ are languages, and $L \subseteq U \subseteq \Sigma^*$. When $U = \Sigma^*$, which happens quite often, $(L,\Sigma)$ can be used as a shorthand. An algorithm $A$ solves $(L,U,\Sigma)$ if for every $x \in U$:

  • $A(x) = 1$ if $x \in L$
  • $A(x) = 0$ if $x \in U - L$

A decision problem is equivalent to a language.

Famous NP-complete problems

This is a list of important or otherwise famous NP-complete problems, which are probably nice to know by heart, as they are assumed to be known and can therefore be used freely in proofs.

Hamiltonian cycle problem (HC)

The Hamiltonian cycle problem is the problem of determining whether a given graph contains a Hamiltonian cycle.

Satisfiability problem (SAT)

The boolean satisfiability problem is the problem of determining whether a given boolean expression in CNF can be satisfied. An expression in CNF is an expression of the form: $(A \vee B \vee ...) \wedge (\neg A \vee ...) \wedge ...$. SAT was the first known NP-complete problem.

Clique problem (CLIQUE)

The clique problem is the problem of determining whether there exists a clique of size $k$ in a given graph $G$ = $(V,E)$. A clique $C$ is a subset of $V$, $C \subseteq V$ such that all vertices in $C$ are connected to all other vertices in $C$ by an edge in $E$.

Vertex cover problem (VCP)

The vertex cover problem is the problem of determining whether a given graph $G = (V,E)$ has a vertex cover of size $k$. A vertex cover is a subset $C$ of $V$ such that all edges in $E$ are incident to a node in $C$. VCP is a special case of SCP.

VCP can be approximated with a factor $\rho = 2$ by taking both endpoints of every edge in a maximal matching.

Set cover problem (SCP)

The decision version of the set cover problem asks whether, given a set of elements $U$ and a set $S$ containing subsets of $U$ whose union equals $U$ (i.e. $U = \bigcup_{X\in S} X$), it is possible to select $k$ or fewer of the sets from $S$ such that their union equals $U$. The optimization version of the problem tries to minimize $k$, or "find a minimum-size subset $\hat{S} \subseteq S$ so that $U = \bigcup_{X\in \hat{S}} X$."

SCP can be approximated greedily by repeatedly adding the set that contains the largest number of uncovered elements to the set cover. This has an approximation factor of $\rho = H(s)$, where $s$ is the size of $U$ and $H(n)$ is the $n$-th harmonic number.
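
A sketch of the greedy algorithm; the representation of `subsets` as a dict from a set name to a Python set of elements is just an assumption for illustration.

```python
def greedy_set_cover(universe, subsets):
    """Repeatedly pick the subset that covers the most still-uncovered
    elements. Approximation factor H(|universe|)."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        name = max(subsets, key=lambda s: len(subsets[s] & uncovered))
        if not subsets[name] & uncovered:
            raise ValueError("the subsets do not cover the universe")
        cover.append(name)
        uncovered -= subsets[name]
    return cover


# Example: cover {1,...,5} with three candidate sets.
print(greedy_set_cover({1, 2, 3, 4, 5},
                       {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}))  # ['a', 'c']
```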

SCP can be represented as an IP:

$$\begin{array}{ll} \text{minimize} & \sum_{s \in S} \text{cost}(s) \cdot x_s \\ \text{subject to} & \sum_{s : e \in s} x_s \geq 1 \quad \text{for all } e \in U \\ & x_s \in \{0, 1\} \quad \text{for all } s \in S \end{array}$$

Travelling salesman problem (TSP)

The travelling salesman problem is the problem of finding the shortest Hamiltonian cycle in a complete graph. While the optimization version of TSP is NP-hard, the decision version of TSP is NP-complete.

TSP, with the restriction that the triangle inequality holds, can be approximated by Christofides' algorithm with a ratio $\rho = 1.5$. The regular MST-based constructive heuristic has $\rho = 2$.

Pseudo-Polynomial Time Algorithms (2013)

A numeric algorithm runs in pseudo-polynomial time if its running time is polynomial in the numeric value of the input. NP-complete problems with pseudo-polynomial time algorithms are called weakly NP-complete. NP-complete problems that remain NP-complete even when all numbers in the input are bounded by a polynomial in the input length are called strongly NP-complete; such problems have no pseudo-polynomial time algorithm unless P = NP. Strong and weak kinds of NP-hardness are defined analogously.

Formally:

Let $U$ be an integer-valued problem, and let $A$ be an algorithm that solves $U$. $A$ is a pseudo-polynomial-time algorithm for $U$ if there exists a polynomial $p$ of two variables such that $\text{Time}_A(x) = O(p(|x|, \text{Max-Int}(x)))$ for every instance $x \in U$.

Example: Primality test

Consider the decision problem of whether a number $n$ is a prime number. The naïve approach of checking whether each number from $2$ to $\sqrt{n}$ evenly divides $n$ is sub-linear in the value of $n$, but exponential in the size of $n$.
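
A sketch of the naive test; the point is that the loop runs $O(\sqrt{n})$ times, which is polynomial in the value of $n$ but exponential in the number of bits needed to write $n$ down.

```python
import math


def is_prime_trial_division(n: int) -> bool:
    """Pseudo-polynomial primality test: try every divisor from 2 up to
    floor(sqrt(n))."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True
```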

Parametrized complexity (2013)

Fixed-parameter polynomial time algorithms

A parametrized problem is a language $L \subseteq \Sigma^{*} \times \mathbb{N}$. The second component is called the parameter of the problem. A parametrized problem $L$ is fixed-parameter tractable if the question of whether $(x,k) \in L$ can be decided in running time $f(k) \cdot |x|^{O(1)}$, where $f$ is an arbitrary function depending only on $k$. In such cases, it is often practical to fix the parameter $k$ to a small(ish) constant.

Example: SAT

SAT can be parametrized by the number of variables. A given boolean expression of size $x$ with $k$ variables can be checked by brute force in time $O(2^{k} \cdot x)$. Here, $f(k) = 2^k$, and $|x|^{O(1)} = x$.
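
A sketch of the brute-force check; the clause representation as lists of signed integers (e.g. `[1, -2]` for $x_1 \vee \neg x_2$) is an assumed convention.

```python
from itertools import product


def sat_brute_force(clauses, k):
    """Try all 2^k assignments of the k variables; each check is linear in
    the size of the formula, giving O(2^k * x) in total."""
    for assignment in product([False, True], repeat=k):
        def value(lit):  # truth value of a signed literal under `assignment`
            v = assignment[abs(lit) - 1]
            return v if lit > 0 else not v
        if all(any(value(lit) for lit in clause) for clause in clauses):
            return True
    return False


# (x1 or x2) and (not x1 or x2) is satisfiable with x2 = True.
print(sat_brute_force([[1, 2], [-1, 2]], k=2))  # True
```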

Approximation algorithms (2013)

Approximation algorithms are algorithms that calculate an approximate answer to an optimization problem. Informally, an approximation algorithm is said to have an approximation ratio of $\rho$ if, in the worst case, it produces an answer that is at most $\rho$ times worse than the optimal answer. In a manner analogous to numerical stability, approximation algorithms are said to be stable if small changes in the approximation parameters result in correspondingly small changes in the answer.

Polynomial-time approximation scheme (PTAS)

A polynomial-time approximation algorithm $A$ is a PTAS if, for every input $x$ and every desired error bound $\varepsilon > 0$, it produces a solution with relative error at most $\varepsilon$, and $\text{Time}_A(x, \frac{1}{\varepsilon})$ is polynomial in $|x|$ for every fixed $\varepsilon$.

Fully polynomial-time approximation scheme (FPTAS)

A PTAS is an FPTAS if its running time is polynomial in both $|x|$ and $\frac{1}{\varepsilon}$.

Local Search (2013)

Local search is a metaheuristic method for optimization problems. Local search moves from solution to solution in the search space by applying local changes until a sufficiently good solution is found, or until a time bound is reached.

Straight up regular local search

For your basic local search you need the following:

  • A neighbor graph showing the relation between neighboring solution candidates
  • A fitness function evaluating candidate solutions
  • Good luck

Straight up regular local search will quickly find a local optimum near where the search is started, but it does not guarantee finding a global optimum, nor does it know whether a global optimum has been found.

There are several ways of selecting a neighbor to follow when doing a local search, two of which are first improvement and steepest descent. Steepest descent evaluates all neighbors and selects the best one. First improvement selects the first neighbor that is better than the current solution candidate. First improvement is often faster per step, but steepest descent usually converges to a local optimum in fewer steps.
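
A sketch of the two neighbor-selection strategies; `neighbors` and `cost` are hypothetical problem-specific callbacks.

```python
def steepest_descent_step(current, neighbors, cost):
    """Evaluate all neighbors and move to the best one, if it improves."""
    best = min(neighbors(current), key=cost, default=current)
    return best if cost(best) < cost(current) else current


def first_improvement_step(current, neighbors, cost):
    """Move to the first neighbor that is strictly better than the current."""
    for candidate in neighbors(current):
        if cost(candidate) < cost(current):
            return candidate
    return current


def local_search(start, neighbors, cost, step=steepest_descent_step):
    """Repeat the chosen step until no neighbor improves (a local optimum)."""
    current = start
    while True:
        nxt = step(current, neighbors, cost)
        if nxt == current:
            return current
        current = nxt
```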

Variable-depth search

Variable depth search provides a nice compromise between first improvement and steepest descent. In variable depth search, changed variables in a solution are locked, preventing their further modification in that branch. Kernighan-Lin is the canonical example of variable-depth search.

Simulated annealing

Simulated annealing introduces probability and randomness into the choice of which neighbor to move to: worse neighbors are sometimes accepted, with a probability that decreases over time (the "temperature"). This reduces the chance of getting stuck in a local optimum that is not a global optimum.

Tabu search

Tabu search marks recently visited solutions as tabu and avoids revisiting them, often on an LRU or LFU basis. A common implementation remembers the last $k$ visited candidate solutions and avoids revisiting them.

Tabu search prevents a candidate solution from being considered over and over again, which would otherwise get the search stuck on plateaus.
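
A sketch of the "remember the last $k$ solutions" variant; again, `neighbors` and `cost` are hypothetical callbacks, and candidate solutions are assumed to be comparable for equality.

```python
from collections import deque


def tabu_search(start, neighbors, cost, tabu_size=50, iterations=1000):
    """Always move to the best non-tabu neighbor (even if it is worse than
    the current solution) and keep the best solution seen overall."""
    current = best = start
    tabu = deque([start], maxlen=tabu_size)  # the last `tabu_size` visited solutions
    for _ in range(iterations):
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best
```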

Randomized Tabu search

Like regular Tabu, but randomized (or so the name seems to suggest).

Intensification vs Diversification

Intensification is the intense search of a relatively small area, that is, exploiting the discovery of a good area. Diversification is looking at many diverse regions, exploring uncharted territory. Intensification is often quicker at reaching a local optimum, but diversification is better geared towards discovering global optima.

Branch-and-bound (BB) (2013)

A branch-and-bound algorithm consists of systematic enumeration of all candidate solutions, where large subsets of fruitless candidates are discarded en masse, by using upper and lower estimated bounds of the quantity being optimized.

Canonically, BB is used to minimize a function $f(x)$, where $x$ ranges over some set $S$ of candidate solutions. BB requires:

  • For branching: a splitting procedure that splits a candidate solution set $S$ into $S_1, S_2, ..., S_n$ such that $S = S_1 \cup S_2 \cup \ldots \cup S_n$, $n \geq 2$, and the minimum of $f(x)$ over $S$ is $\text{min}\left\{v_1, v_2, ..., v_n\right\}$, where each $v_i$ is the minimum of $f(x)$ within $S_i$.
  • For bounding: a procedure that computes upper and lower bounds for the minimum value of $f(x)$ over a given subset of $S$.

For an example of branch and bound used with Integer Programming, see Algorithm Animations for Practical Optimization: A gentle Introduction
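
A best-first skeleton of the idea for minimization; `branch`, `lower_bound`, `is_complete` and `value` are hypothetical problem-specific callbacks.

```python
import heapq
import itertools


def branch_and_bound(root, branch, lower_bound, is_complete, value):
    """Best-first branch-and-bound for minimization: expand the node with the
    smallest optimistic bound, and prune nodes whose bound cannot beat the
    best complete solution found so far."""
    best_value, best_solution = float("inf"), None
    counter = itertools.count()                    # tie-breaker for the heap
    frontier = [(lower_bound(root), next(counter), root)]
    while frontier:
        bound, _, node = heapq.heappop(frontier)
        if bound >= best_value:
            continue                               # prune: cannot improve the incumbent
        if is_complete(node):
            if value(node) < best_value:
                best_value, best_solution = value(node), node
            continue
        for child in branch(node):                 # split into subproblems
            child_bound = lower_bound(child)
            if child_bound < best_value:
                heapq.heappush(frontier, (child_bound, next(counter), child))
    return best_solution, best_value
```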

Genetic algorithms (2013)

Genetic algorithms mimic evolution to obtain practically acceptable solutions to optimization problems. See the genetic algorithm template under Layman's guides for an overview of how to make a genetic algorithm.

Some genetic-domain-specific words need a mapping into the algorithmics domain:

| Genetics term | Corresponding algorithmics term |
| --- | --- |
| an individual | a string (or vector) representation of a candidate solution |
| fitness value | cost function |
| population | a subset of the set of feasible candidate solutions |
| mutation | a random local transformation of an individual |

Here, the different stages of the genetic algorithm as described in the template are explained in more detail:

Initialization

Initialization is the step of creating a starting population $P = \left\{a_1, a_2, ..., a_k\right\}$ which becomes the first generation of the algorithm. Random generation is a good way of doing this, but there are other approaches.

Selection

Selection is the step of selecting $n \le k$ individuals from the population, on which genetic operators will be applied. A good selection strategy is crucial for success in a genetic algorithm. Using the fitness value to select the $n$ best individuals from the population is common ('survival of the fittest'), and throwing in some randomness as well is usually a good move.

Genetic operators

Genetic operators work on individuals to make new, different individuals.

Crossover/recombination

Crossover is when you combine two individuals to make an offspring individual that contains parts from both its parents. One way of performing a crossover is to use the first half of the first individual (a string or vector representation of a candidate solution, remember) and the second half of the second individual.

As a simple example: consider a genetic algorithm trying to find a string of length 10 that contains only vowels. Recombining the two individuals lsjifasdfw and areighifpo using the previously described method would produce the new individual lsjifhifpo.

Mutation

Mutation makes a slight modification to a single individual to form a new individual. Methods include flipping a single bit of the individual, adding a random number to one of the individual's vector elements, generation-number-dependent modifications, etc.

Following the offspring individual from the recombination example: lsjifhifpo could mutate to lsjifaifpo, using the mutation rule that a single character in the string is replaced by a random character.
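
The two operators from the examples above, sketched in Python (assuming individuals are fixed-length lowercase strings):

```python
import random
import string


def one_point_crossover(parent_a: str, parent_b: str) -> str:
    """First half of parent_a followed by the second half of parent_b."""
    cut = len(parent_a) // 2
    return parent_a[:cut] + parent_b[cut:]


def mutate(individual: str) -> str:
    """Replace one randomly chosen character with a random lowercase letter."""
    i = random.randrange(len(individual))
    return individual[:i] + random.choice(string.ascii_lowercase) + individual[i + 1:]


print(one_point_crossover("lsjifasdfw", "areighifpo"))  # -> lsjifhifpo
```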

(regrouping)

(colonization-extinction)

(migration)

Termination

Termination is the step of deciding when to stop. Common criteria include reaching a fixed number of generations, finding an individual whose fitness exceeds some threshold, or seeing no improvement for a number of generations.

Linear programming (LP) (2013)

Also known as linear optimization. LP is the process of finding the "best" values obtainable for a function of variables constrained by linear inequalities. LP is solvable in polynomial time.

Integer programming (IP)

IP is linear programming where the variables can only take integer values. IP is NP-hard.

Binary linear programming (01LP)

01LP is linear programming where the variables can only take 0 or 1 as values. 01LP is NP-hard.

Relaxation to LP

Many interesting optimization problems with useful solutions are easily reduced to IP or 01LP. Because IP and 01LP are both NP-hard, while LP is solvable in polynomial time, relaxing problems to LP is a good idea when possible. Here is how:

  1. Express an optimization problem $U$ as an instance $I$ of IP or 01LP.
  2. Pretend that $I$ is an instance of LP, and find a solution $a$.
  3. Use $a$ to solve the original problem, typically by rounding the fractional values in $a$ back to integers.

Easy, right?
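
As a concrete sketch of steps 2 and 3, here is the classic deterministic-rounding 2-approximation for (unweighted) vertex cover, assuming SciPy is available: relax its 01LP to an LP, solve it, and round every $x_v \geq \frac{1}{2}$ up to 1. Every edge constraint $x_u + x_v \geq 1$ forces at least one endpoint to $\geq \frac{1}{2}$, so the rounded set is a feasible cover, and its cost is at most twice the LP optimum, hence at most $2 \cdot$ OPT.

```python
from scipy.optimize import linprog


def vertex_cover_lp_rounding(n, edges):
    """LP relaxation + deterministic rounding for unweighted vertex cover."""
    # minimize sum_v x_v  subject to  x_u + x_v >= 1 for every edge (u, v),
    # written as -(x_u + x_v) <= -1 to fit linprog's A_ub x <= b_ub form.
    A_ub = []
    for u, v in edges:
        row = [0] * n
        row[u] = row[v] = -1
        A_ub.append(row)
    res = linprog(c=[1] * n, A_ub=A_ub, b_ub=[-1] * len(edges),
                  bounds=[(0, 1)] * n)
    return [v for v in range(n) if res.x[v] >= 0.5]


# A star with center 0: the LP optimum puts x_0 = 1, and rounding returns [0].
print(vertex_cover_lp_rounding(4, [(0, 1), (0, 2), (0, 3)]))
```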

Layman's guides (2013)

This section contains some useful guides to various methods and techniques for algorithm design.

Parametrized complexity applications

So you have an NP-hard problem. Nice. And difficult. So difficult that it is impossible to solve in polynomial time (unless $P = NP$). Anyway, here is a general plan that might help:

  1. Define a mechanism for carving out subproblems according to their supposed difficulty.
  2. Use this mechanism to define a class of easy subproblems.
  3. Define a requirement for an algorithm to have a nice time complexity with respect to the subproblem difficulty, in such a way that a nice algorithm can solve an easy subproblem in polynomial time.
  4. Then, either:
    • design a nice algorithm, and prove that it is nice, or
    • prove that it is impossible to design a nice algorithm, unless $P = NP$.

Look at these categories for ideas for the different subproblem mechanisms, proofs, etc:

Pseudo-polynomial-time algorithms

  • Subproblem mechanism: include only instances where the maximal integer in the input is bounded by a non-decreasing function $h$ of the input size.
  • Easy subproblems: those where the bounding function $h$ is a polynomial.
  • Nice algorithms: pseudo-polynomial-time algorithms.
  • Designing the algorithm: just design it.
  • Proving that there are no nice algorithms: prove that there exists a polynomial $h$ such that $\text{Value}(h) - U$ is NP-hard. Then the problem is strongly NP-hard.

Parametrized complexity

  • Subproblem mechanism: define a parametrization function Par that gives each input instance an integer score measuring its difficulty; higher Par(x) means a harder instance. Subproblems can be defined by including only instances where Par(x) takes a specific value or range.
  • Easy subproblems: those where Par(x) is small and does not depend on the input size.
  • Nice algorithms: Par-parametrized polynomial-time algorithms.
  • Designing the algorithm: just design it.
  • Proving that there are no nice algorithms: choose a constant $k$ and prove that the subproblem where Par(x) = $k$ is NP-hard.

Local search algorithm design

  1. Make sure you have an optimization problem and not a decision problem.

  2. Define a neighborhood function. For an input instance $x$, this function maps each feasible solution $a$ for $x$ to a set of other feasible solutions for $x$, called the neighbors of $a$. Typically, neighboring solutions differ only by some small modification.

  3. Start the search anywhere you want. Randomly choosing some place is a nice strategy.

  4. Repeat the following: go to the best neighbor (or don't, if all neighbors suck, comparatively).

  5. In the case where all neighbors suck, give up. You have a local optimum and the search is done.

Genetic algorithm template

  1. Randomly generate a population of individuals.
  2. Apply a fitness function to calculate a fitness score for each individual in the current generation.
  3. Select individuals for reproduction based on fitness and a little randomness.
  4. Apply crossover and mutation to the selected individuals to produce the next generation.
  5. Stop whenever you feel like it.
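
A sketch of the template as a Python skeleton; all four callbacks (`random_individual`, `fitness`, `crossover`, `mutate`) are hypothetical and problem-specific, and fitness is maximized.

```python
import random


def genetic_algorithm(random_individual, fitness, crossover, mutate,
                      population_size=100, generations=200, mutation_rate=0.1):
    # 1. Randomly generate the initial population.
    population = [random_individual() for _ in range(population_size)]
    for _ in range(generations):
        # 2. + 3. Score the individuals and select the fittest half as parents.
        parents = sorted(population, key=fitness, reverse=True)[:population_size // 2]
        # 4. Produce the next generation with crossover and occasional mutation.
        offspring = []
        while len(offspring) < population_size:
            child = crossover(random.choice(parents), random.choice(parents))
            if random.random() < mutation_rate:
                child = mutate(child)
            offspring.append(child)
        population = offspring
    # 5. Stop after a fixed number of generations and return the best individual.
    return max(population, key=fitness)
```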

Proving that a decision problem is undecidable

You have a decision problem E which you suspect is undecidable. Unfortunately, you need proof. A general method is:

  1. Find a different decision problem H which is known to be undecidable. I recommend the halting problem.
  2. Suppose a decider R decides E. Define a decider S that decides H using R.
  3. If R exists, S can solve H. However, H cannot be solved. Therefore, R cannot exist.

The Primal-Dual schema

(taken from cmu.edu)

We typically devise algorithms for minimization problems in the following way:

  1. Write down an LP relaxation of the problem, and find its dual. Try to find some intuitive meaning for the dual variables.
  2. Start with vectors x = 0, y = 0, which will be dual feasible, but primal infeasible.

  3. Until the primal is feasible:

    1. Increase the dual values $y_i$ in some controlled fashion until some dual constraint(s) go tight (i.e. until $\sum_i y_i a_{ij} = c_j$ for some $j$), while always maintaining the dual feasibility of $y$.
    2. Select some subset of the tight dual constraints, and increase the primal variable corresponding to them by an integral amount.
  4. For the analysis, prove that the output pair of vectors $(x,y)$ satisfies $c^{\mathsf{T}} x \leq \rho \cdot y^{\mathsf{T}} b$ for as small a value of $\rho$ as possible. Keep this goal in mind when deciding how to raise the dual and primal variables.

Christofides' algorithm

Use this to approximate the TSP $G = (V,w)$ with $\rho = 1.5$, where the edge weights satisfy the triangle inequality.

  1. Create the MST $T$ of $G$.
  2. Let $O$ be the set of vertices with odd degree in $T$ and find a minimum-weight perfect matching $M$ in the complete graph over $O$.
  3. Combine the edges of $M$ and $T$ to form a multigraph $H$.
  4. Form an eulerian circuit in $H$.
  5. Make the circuit found in previous step Hamiltonian by skipping visited nodes ('shortcutting').

You now have an approximation of TSP with $\rho = 1.5$. Why does this approximate TSP with $\rho = 1.5$, you ask? Well: the MST $T$ costs at most OPT (deleting one edge from an optimal tour leaves a spanning tree), the matching $M$ costs at most $\frac{1}{2}$OPT (shortcut an optimal tour down to the vertices of $O$; by the triangle inequality this costs at most OPT, and the resulting cycle splits into two perfect matchings of $O$, the cheaper of which costs at most half of that), and the shortcutting in the last step never increases the cost, again by the triangle inequality. Hence the tour costs at most $1.5 \cdot$ OPT.
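
A sketch of the five steps above, assuming NetworkX is available and that $G$ is a complete `networkx.Graph` whose `weight` attributes satisfy the triangle inequality; the minimum-weight perfect matching is obtained by running a maximum-weight matching on negated weights.

```python
import networkx as nx


def christofides(G):
    # 1. Minimum spanning tree T of G.
    T = nx.minimum_spanning_tree(G, weight="weight")
    # 2. Minimum-weight perfect matching M on the odd-degree vertices of T.
    odd = [v for v in T if T.degree(v) % 2 == 1]
    K_odd = nx.Graph()
    for i, u in enumerate(odd):
        for v in odd[i + 1:]:
            K_odd.add_edge(u, v, weight=-G[u][v]["weight"])  # negate for max-weight matching
    M = nx.max_weight_matching(K_odd, maxcardinality=True)
    # 3. Combine T and M into an Eulerian multigraph H.
    H = nx.MultiGraph(T)
    H.add_edges_from(M)
    # 4. + 5. Walk an Eulerian circuit of H and shortcut already-visited nodes.
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(H):
        if u not in seen:
            tour.append(u)
            seen.add(u)
    tour.append(tour[0])  # close the cycle
    return tour
```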

Converting problems

VCP to SCP

This is how to turn a VCP into an SCP. We use the variables $G = (V,E)$ for the VCP graph, and $U$ and $S$ for the SCP variables.

  1. Let U be E.
  2. Let $S_i$ be the set of edges touching vertex i.

And that's it!
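
The conversion in code; the representation of the SCP instance as a universe set plus a dict of subsets mirrors the hypothetical format used in the greedy set cover sketch earlier.

```python
def vcp_to_scp(vertices, edges):
    """Universe = the edges of G; S_i = the edges touching vertex i.
    A set cover of size k corresponds exactly to a vertex cover of size k."""
    universe = set(edges)
    subsets = {v: {e for e in edges if v in e} for v in vertices}
    return universe, subsets


# Triangle graph: any two vertices cover all three edges.
U, S = vcp_to_scp([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
```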

Appendix: list of algorithms featured in the textbook (2013)

Note: from the 2013 textbook (Algorithmics for Hard Problems, by Hromkovič).

  • 156, DPKP
  • 165, Ford-Fulkerson
  • 172, Vertex cover algorithm
  • 177, B&B for MAXSAT and TSP
  • 187, D&C-3SAT
  • 191, Local search scheme
  • 197, Kernighan-Lin variable-depth search
  • 434, Metropolis algorithm
  • 436, Simulated Annealing
  • 442, Tabu search
  • 443, Randomized tabu search
  • 446, Genetic algorithm scheme
  • 214, Methods for relaxing problems to LP
  • 226, Simplex
  • 228, SCP(k) relaxation to LP
  • 237, Primal-dual scheme
  • 238, Primal-dual (MINVCP)
  • 250, Greedy makespan schedule
  • 262, Algorithm for VCP
  • 264, Algorithm for SCP
  • 268, Algorithm for WEIGHT-VCP
  • 269, Algorithm for MAX-CUT
  • 272, Greedy-simple KP
  • 273, PTAS for SKP
  • 278, modified PTAS for SKP
  • 280, FPTAS for KP
  • 283, TSP △-ineq 2-approx
  • 288, Christofides algorithm
  • 301, Sekanina's algorithm

Appendix: proofs (2013)

(TODO: write the actual proofs)

  • 201, HC $\le_p$ RHC
  • 201, RHC $\le_p$ SUBOPT_TSP

Curriculum / reading list (previous semesters)

2013, spring

This year (2013), the curriculum is chapters 3, 4 and 6, except 4.3.6-4.5, of Hromkovič's Algorithmics for Hard Problems.

The topics covered are:

  • Deterministic approaches
    • Pseudo-polynomial-time algorithms
    • Parametrized complexity
    • Branch and bound
    • Lowering worst case complexity of exponential algorithms
    • Local search
    • Relaxation to linear programming
  • Approximation algorithms
    • Fundamentals: Stability, Dual approximation etc
    • Algorithm design: lots of approximations for known hard problems
  • Heuristics
    • Simulated annealing
    • Tabu search
    • Genetic algorithms

Written by

jonasft Stian Jensen sigveseb giraff odd Skarding Lionleaf simenhg
Last updated: Fri, 15 May 2015 14:55:39 +0200 .