# Lecture 25

## NP-completeness and the Traveling Salesperson Problem

Consider the following problem:

A salesperson has a number of towns to visit. How can they save fuel by visiting the towns in the order that gives the shortest total path?

One method for solving this is to look at all possible paths and choose the shortest. Unfortunately, for n towns there are (n-1)! possible paths. Even for 17 towns that is 16! = 20922789888000 paths, so even for small numbers of cities this method is impractical.

It is possible to find an approximate solution to the TSP by using a non-deterministic algorithm. Instead of going through every possibility we choose one at random and try to improve it by making small changes. The small changes always try to reduce the total path length. This type of algorithm is called a gradient descent algorithm; it runs in polynomial time.

This algorithm is not guaranteed to find the optimum solution, but it will usually find a good one. Problems that can be solved in this way are called NP (nondeterministic polynomial) problems. There is a set of similar NP problems (including the TSP) that are called NP-complete.

## Correctness

Most computer programs contain errors - bugs. How can we write programs without bugs?

Two ways - testing and proving.

## Testing

Use test data to see if the program works. This will check the program for most cases but is no guarantee that it will be correct for all possible input data, e.g. the Pentium floating-point division bug.

## Proving

A proof attempts to show that the program actually does what it is supposed to.

Two types - Formal and Informal

An informal proof is an extension of the program's documentation to include an argument that the program does what it is supposed to. This is really just evidence of a good understanding of the program. Your documentation should be an informal proof.

A formal proof uses logical assertions to prove the correctness of a program.

In order to prove that an algorithm is correct, it is necessary to know what the problem to be solved is. There must be a specification for the problem. This is usually written in a formal specification language such as Z or VDM.

Deciding automatically whether an algorithm meets its specification is, in general, a non-computable problem. It is up to the programmer to prove correctness. One way of doing this is to use assertions.

An assertion is a statement about the state of the machine during execution of the program. There are three types - preconditions, postconditions and loop invariants.

The precondition is a true statement about the state of the machine before an operation begins. The postcondition is a statement about the state of the machine after the operation has terminated. The operation here could be a C statement, a block of C statements or an algorithm.

A loop invariant is a statement about the state of the machine during execution of a loop.
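These three kinds of assertion map directly onto C's `assert` macro. Here is a sketch using an integer square root as the operation; the `isqrt` function and its specification are our own illustration, not from the lecture:

```c
#include <assert.h>

/* integer square root: returns the largest s with s*s <= x */
int isqrt(int x) {
    assert(x >= 0);                 /* precondition: { x >= 0 } */
    int s = 0;
    while ((s + 1) * (s + 1) <= x) {
        s = s + 1;
        assert(s * s <= x);         /* loop invariant: { s*s <= x } */
    }
    /* postcondition: { s*s <= x && (s+1)*(s+1) > x } */
    assert(s * s <= x && (s + 1) * (s + 1) > x);
    return s;
}
```

If the precondition is violated the fault lies with the caller; if the invariant or postcondition fails, the fault lies inside the operation. That division of blame is what makes the three kinds of assertion useful.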

If it can be shown that, whenever the algorithm terminates, the postcondition is true, then the program is said to be partially correct.

If the program is partially correct and it can be proved to always terminate then it is called totally correct.

To prove total correctness, it is necessary to prove that some value increases or decreases every time the loop executes until the loop condition is satisfied. Example: prove the correctness of a factorial algorithm.

```
f = 1;
n = 1;
/* precondition { f=n!, n=1 } */
do {
    n = n + 1;    /* { f=(n-1)!, n>1 } */
    f = f * n;    /* { f=n*(n-1)!, n>1 } */
    /* loop invariant { f=n!, n>1 } */
} while (n != 10);
/* postcondition { f=n!, n=10 } */
```
This algorithm is totally correct: the loop invariant will always be true, since n! is defined as n*(n-1)!, and the loop will always terminate because n starts at 1 and increases by one each time round the loop, so n must eventually reach 10.
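The termination half of the argument can also be checked at run time by tracking the decreasing value (here 10-n) explicitly. A sketch wrapping the loop above in a function; the `factorial10` wrapper is ours:

```c
#include <assert.h>

/* computes 10! using the annotated loop from the lecture */
long factorial10(void) {
    long f = 1;
    int n = 1;
    do {
        int variant = 10 - n;     /* the value that must strictly decrease */
        n = n + 1;
        f = f * n;
        assert(10 - n < variant); /* it decreased by one this pass... */
        assert(10 - n >= 0);      /* ...and never goes below zero */
    } while (n != 10);
    return f;                     /* postcondition { f = 10! } */
}
```

A strictly decreasing, bounded-below quantity like this is exactly the "some value increases or decreases every time" used in the total-correctness argument.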

Now let's try something a little more difficult: the partition algorithm. Given an array x[0..n-1] and a pivot value r, x is partitioned if:

```
forall k ( 0 <= k <= a    => x[k] <= r  &&
           a < k <= n-1   => x[k] > r )
```
That is, for all possible values of k:
• if k is between 0 and a inclusive then x[k] is less than or equal to r;
• if k is greater than a and less than or equal to n-1 then x[k] is greater than r.
Let's split this into two conditions, P and Q.

let P([0..a]) be

`    forall k(0<=k<=a => x[k]<=r)`
let Q([b..n-1]) be
`    forall k(b<k<=n-1 => r<x[k])`
We need to prove that after the algorithm terminates:
`P([0..a]) && Q([a+1..n-1])    (assuming n>=1)`
Now the algorithm with all assertions.

```
a = -1;
b = n;
// { P([]) && Q([]) }
while (a+1 != b) {
    // { P([0..a]) && Q([b..n-1]) }
    if (x[a+1] <= r) {
        a = a+1;          // { P([0..a-1]) && x[a]<=r && Q([b..n-1]) }
    }                     // { P([0..a]) && Q([b..n-1]) }
    else if (x[b-1] > r) {
        b = b-1;          // { P([0..a]) && Q([b+1..n-1]) && x[b]>r }
    }                     // { P([0..a]) && Q([b..n-1]) }
    else {
        // { x[b-1]<=r && x[a+1]>r }
        t = x[a+1];
        x[a+1] = x[b-1];  // { x[a+1]<=r && t>r }
        x[b-1] = t;       // { x[a+1]<=r && x[b-1]>r }
        a = a+1;          // { P([0..a-1]) && x[a]<=r && Q([b..n-1]) }
        b = b-1;          // { P([0..a]) && Q([b+1..n-1]) && x[b]>r }
        // { P([0..a]) && Q([b..n-1]) }
    }
}                         // { P([0..a]) && Q([b..n-1]) && a+1==b }
// { P([0..a]) && Q([a+1..n-1]) }
```
The program terminates because:
• The initial value of (b-a) is n+1,
• (b-a) decreases by one or two each time round the loop.
The only possible problem is (b-a) being decremented by 2 and skipping over the terminating condition. If this happened:
```
new a == new b
old a+1 == old b-1   (call this index v)
```
so that after x[a+1] and x[b-1] are swapped we have:
`{ x[v]<=r && x[v]>r }`
This is not possible, so the algorithm must terminate.
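For completeness, the annotated algorithm above compiles directly into a function. The `partition` wrapper and its parameter list are our own packaging; the body is the lecture's code:

```c
/* partition x[0..n-1] around pivot r: afterwards x[0..a] <= r and
   x[a+1..n-1] > r; returns a (which is -1 if every element exceeds r) */
int partition(int *x, int n, int r) {
    int a = -1, b = n;
    while (a + 1 != b) {           // { P([0..a]) && Q([b..n-1]) }
        if (x[a + 1] <= r)
            a = a + 1;             // extend the "<= r" region on the left
        else if (x[b - 1] > r)
            b = b - 1;             // extend the "> r" region on the right
        else {                     // x[a+1] > r and x[b-1] <= r: swap them
            int t = x[a + 1];
            x[a + 1] = x[b - 1];
            x[b - 1] = t;
            a = a + 1;
            b = b - 1;
        }
    }
    return a;                      // { P([0..a]) && Q([a+1..n-1]) }
}
```

Running it on sample data is a quick sanity check of the proof: after the call, every element up to the returned index is at most r and every element after it exceeds r.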