
Dynamic Programming Applications - Fibonacci, Knapsack & Weighted Interval Scheduling

COMP202 Lectures

Dynamic programming is similar to divide-and-conquer; however, recursive calls are replaced by looking up pre-computed values. We can use it to solve optimisation problems for which a brute-force search is infeasible.

A typical approach to solving a problem using dynamic programming is to first find a recursive solution to the problem:

  • Once this is found, we then try to determine a way to compute the solutions to the sub-problems in a bottom-up order.

    This avoids recomputing the same solutions many times.

Fibonacci Numbers

We have covered a dynamic programming approach to solving this problem in a previous lecture.
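The idea is to build the sequence bottom-up, storing each value so it is never recomputed. A minimal Python sketch of this idea (the function name fib is an illustrative choice, not the lecture's exact code):

def fib(n):
    # Bottom-up Fibonacci: each value is looked up rather than recomputed.
    if n < 2:
        return n
    prev, curr = 0, 1          # F(0), F(1)
    for _ in range(2, n + 1):  # build F(2), ..., F(n) in order
        prev, curr = curr, prev + curr
    return curr

print(fib(10))  # 55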

{0, 1} Knapsack Problem

Previously we covered sub-optimal greedy solutions to this problem. Using dynamic programming, we can compute an optimal solution without resorting to an exponential-time brute-force search.

See Greedy Algorithm Applications - Knapsack, Scheduling, Clustering to revise the notation used.

{0, 1} Knapsack Problem Solution

Let $S_k=\{1,2,\ldots,k\}$ denote the set containing the first $k$ items and define $S_0=\emptyset$.

Let $B[k,w]$ be the maximum total benefit obtained using a subset of $S_k$, and having total weight at most $w$.

Then we define $B[0,w]=0$ for each $w\leq W_\max$ and:

\[B[k,w]= \begin{cases} B[k-1,w]&\text{if } w<w_k\\ \max\{B[k-1,w],b_k+B[k-1,w-w_k]\} & \text{otherwise} \end{cases}\]

Our solution is then $B[n,W_\max]$.
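To see how the recurrence is used, consider hypothetical values $w_k=2$, $b_k=3$ and capacity $w=5$; since $w\geq w_k$ the second case applies:

\[B[k,5]=\max\{B[k-1,5],\;3+B[k-1,3]\}\]

That is, we either leave item $k$ out, or take it and fill the remaining capacity $w-w_k=3$ using only the first $k-1$ items.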

To make our solution as easy as possible to find, we will compute these values from the bottom-up:

  • First we compute $B[1,w]$ for each $w\leq W_\max$, then $B[2,w]$ for each $w$, and so on up to $B[n,w]$.

We can implement this in the following pseudo-code:

01knapSack(S, W)

// base case: with no items, the benefit is 0 at every weight
for w = 0 to W
    do
        B[0, w] = 0
// fill in row k from row k - 1, one item at a time
for k = 1 to n
    do
        // capacities below w_k cannot include item k
        for w = 0 to w_k - 1
            do
                B[k, w] = B[k - 1, w]
        // otherwise, take item k or skip it, whichever gives more benefit
        for w = W downto w_k
            do
                if B[k - 1, w - w_k] + b_k > B[k - 1, w] then
                    B[k, w] = B[k - 1, w - w_k] + b_k
                else
                    B[k, w] = B[k - 1, w]
  • Input - A set $S$ of $n$ items with weights $w_i$ and benefits $b_i$, and a maximum total weight $W$.
  • Output - A maximum-benefit subset $T$ of $S$ with total weight at most $W$.
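The pseudocode translates fairly directly into Python. The sketch below assumes the items are given as (weight, benefit) pairs and returns only the maximum benefit; recovering the subset $T$ itself needs an extra backtracking pass over $B$, which is omitted here:

def knapsack_01(items, W):
    # items: list of (weight, benefit) pairs; W: maximum total weight.
    # B[k][w] = best benefit using a subset of the first k items with weight <= w.
    n = len(items)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):
        w_k, b_k = items[k - 1]
        for w in range(W + 1):
            if w < w_k:                      # item k cannot fit
                B[k][w] = B[k - 1][w]
            else:                            # skip item k, or take it
                B[k][w] = max(B[k - 1][w], b_k + B[k - 1][w - w_k])
    return B[n][W]

print(knapsack_01([(2, 3), (3, 4), (4, 5)], 5))  # 7: take the first two items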

{0, 1} Knapsack Problem Time Complexity

The running time of 01knapSack is dominated by the nested for loops: the outer loop iterates $n$ times and, for each item, the two inner loops together iterate $O(W)$ times.

Therefore, this algorithm finds the highest-benefit subset of $S$ with total weight at most $W$ in:

\[O(nW)\]

Weighted Interval Scheduling

This is similar to interval scheduling; however, each task $i$ now also has a value $v_i$.

The goal is to schedule a subset of non-conflicting intervals that have a maximal total value.

We cannot achieve this goal using a greedy approach: for example, scheduling by earliest finish time may pick two non-conflicting intervals of value $1$ each instead of a single conflicting interval of value $10$.

Weighted Interval Scheduling Solution

A divide and conquer approach would look like the following:

  1. Sort the intervals in terms of their finishing time.
  2. Define $p(j)$ to be the interval $i$ where:
    • $i$ and $j$ are non-conflicting.
    • $i$ has the largest finish time whilst still finishing before $j$ starts ($f_i\leq s_j$).

    $p(j)=0$ if there is no such $i$.

  3. Define $S_j$ to be the optimal solution considering only the first $j$ intervals, and $\text{Opt}(j)$ to be the value of $S_j$.
    • If $j\in S_j$ then:

      \[\text{Opt}(j) = v_j + \text{Opt}(p(j))\]
    • If $j\notin S_j$ then:

      \[\text{Opt}(j) = \text{Opt}(j - 1)\]
  4. Therefore:

    \[\text{Opt}(j) = \max\{v_j+\text{Opt}(p(j)), \text{Opt}(j-1)\}\]

    To avoid an exponential-time solution, we compute all values $\text{Opt}(1), \ldots,\text{Opt}(n)$ in advance of the next step.

  5. We can then derive $S_j$ recursively depending on the value above:

    \[S_j= \begin{cases} S_{p(j)} \cup \{j\} & \text{if } \text{Opt}(j) = v_j + \text{Opt}(p(j))\\ S_{j-1} & \text{if } \text{Opt}(j) = \text{Opt}(j - 1) \end{cases}\]
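Putting these steps together, a minimal Python sketch; the (start, finish, value) tuple representation and the binary-search computation of $p(j)$ are assumptions made for this example rather than the lecture's exact code:

from bisect import bisect_right

def weighted_interval_scheduling(intervals):
    # intervals: list of (start, finish, value) tuples.
    # Returns (optimal total value, chosen intervals).
    intervals = sorted(intervals, key=lambda t: t[1])   # step 1: sort by finish time
    n = len(intervals)
    finishes = [f for _, f, _ in intervals]

    # Step 2: p[j] = largest i < j (1-indexed) with f_i <= s_j, or 0 if none.
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        s_j = intervals[j - 1][0]
        p[j] = bisect_right(finishes, s_j, 0, j - 1)

    # Step 4: compute Opt(1), ..., Opt(n) bottom-up to avoid exponential recursion.
    opt = [0] * (n + 1)
    for j in range(1, n + 1):
        v_j = intervals[j - 1][2]
        opt[j] = max(v_j + opt[p[j]], opt[j - 1])

    # Step 5: recover S_n by walking back through the choices.
    S, j = [], n
    while j > 0:
        v_j = intervals[j - 1][2]
        if opt[j] == v_j + opt[p[j]]:   # interval j is in the optimal solution
            S.append(intervals[j - 1])
            j = p[j]
        else:                           # interval j is not in the optimal solution
            j -= 1
    return opt[n], list(reversed(S))

# Greedy-by-finish-time would schedule the two value-1 intervals here;
# the dynamic program correctly prefers the single value-10 interval.
print(weighted_interval_scheduling([(0, 3, 1), (3, 6, 1), (0, 6, 10)]))  # (10, [(0, 6, 10)])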