Slide 1

Slide 1 text

Travelling Salesman in Python Rory Hart @falican

Slide 2

Slide 2 text

The Travelling Salesman Problem Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?

Slide 3

Slide 3 text

Applications ● Planning ● Logistics ● Microchips ● DNA

Slide 4

Slide 4 text

Brute Force

Approximately O(n!) where n is the number of cities.

4! = 24
8! = 40320
16! = 20922789888000

Oh no! Let's not bother with that.
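To see why the factorial blow-up matters, the brute-force approach can be sketched directly (a minimal illustration, not from the slides; the function name and the toy distance matrix are made up): try every permutation of the non-origin cities and keep the cheapest tour.

```python
# Brute-force TSP sketch: exhaustively check every tour starting at city 0.
import itertools

def brute_force_tour(distances):
    n = len(distances)
    best_tour, best_cost = None, float("inf")
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm
        # cost of each consecutive leg, plus the closing leg back to city 0
        cost = sum(distances[tour[i]][tour[i + 1]] for i in range(n - 1))
        cost += distances[tour[-1]][tour[0]]
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Four cities with symmetric distances (corners of a diamond)
D = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]

print(brute_force_tour(D))  # the perimeter tour (0, 1, 2, 3) at cost 4
```

This checks (n-1)! tours, which is already infeasible well before n = 20.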

Slide 5

Slide 5 text

Some better approaches?

Slide 6

Slide 6 text

A Greedy Algorithm

From the first city, travel to the closest city. Keep repeating, travelling to the nearest unvisited city. Also known as Nearest Neighbour (NN).

Slide 7

Slide 7 text

Simple to Implement

    def nearest_tour(distances, count):
        available = list(range(1, count))
        tour = [0]
        while available:
            nearest = numpy.argmin(distances[tour[-1]][available])
            tour.append(available.pop(nearest))
        solution = numpy.array(tour, dtype=int)
        cost = calc_cost(solution, distances)
        return solution, cost

Slide 8

Slide 8 text

Simple to Implement

    def nearest_tour(distances, count):
        available = list(range(1, count))
        tour = [0]
        while available:
            nearest = numpy.argmin(distances[tour[-1]][available])
            tour.append(available.pop(nearest))
        solution = numpy.array(tour, dtype=int)
        cost = calc_cost(solution, distances)
        return solution, cost

Slide 9

Slide 9 text

With Some NumPy/SciPy

    def calc_distances(coords):
        return numpy.rint(
            scipy.spatial.distance.squareform(
                scipy.spatial.distance.pdist(coords, 'euclidean')))

    def calc_cost(solution, distances):
        cost = distances[solution[:-1], solution[1:]].sum()
        cost += distances[solution[-1], solution[0]]
        return cost
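As a quick sanity check, `calc_cost` can be exercised on a tiny hand-built instance. `calc_cost` is reproduced from the slide; the distance matrix is built with plain NumPy broadcasting (a stand-in for `squareform(pdist(...))`, so the sketch runs without SciPy), and the coordinates are made up.

```python
import numpy

def calc_cost(solution, distances):
    # sum of consecutive legs, plus the closing leg back to the start
    cost = distances[solution[:-1], solution[1:]].sum()
    cost += distances[solution[-1], solution[0]]
    return cost

# Four cities at the corners of a unit square
coords = numpy.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

# Pairwise Euclidean distances via broadcasting
# (equivalent to scipy's squareform(pdist(coords, 'euclidean')))
diff = coords[:, None, :] - coords[None, :, :]
distances = numpy.sqrt((diff ** 2).sum(axis=-1))

tour = numpy.array([0, 1, 2, 3])
print(calc_cost(tour, distances))  # perimeter of the square: 4.0
```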

Slide 10

Slide 10 text

Demo

Slide 11

Slide 11 text

Local Search

1. Make a small change.
2. Check if the change is better.
3. If better, keep the change; otherwise revert.

Slide 12

Slide 12 text

2-opt: A local search heuristic
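A single 2-opt move removes two edges from the tour and reconnects it by reversing the segment between them, which untangles crossing edges. A minimal sketch in pure Python (function names and the toy distance matrix are made up for illustration):

```python
def two_opt_move(tour, i, j):
    # reverse tour[i..j], replacing edges (i-1, i) and (j, j+1)
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def tour_cost(tour, d):
    # total length of the closed tour
    return sum(d[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

D = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]

crossed = [0, 2, 1, 3]                   # tour with crossing edges, cost 6
uncrossed = two_opt_move(crossed, 1, 2)  # [0, 1, 2, 3], cost 4
```

2-opt repeatedly applies such moves, keeping any that lower the cost, until no improving move remains.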

Slide 13

Slide 13 text

Not too complex

    def two_opt(solution, cost, count, calc_cost):
        best_cost = cost = calc_cost(solution)
        improving = True
        while improving:
            improving = False
            for i in range(1, count - 2):
                for j in range(i + 1, count):
                    # reverse the segment between i and j
                    solution[i:j+1] = solution[j-count:i-count-1:-1]
                    cost = calc_cost(solution)
                    if cost < best_cost:
                        best_cost = cost
                        improving = True
                    else:
                        # not an improvement: reverse back
                        solution[i:j+1] = solution[j-count:i-count-1:-1]
        return solution, best_cost

Slide 14

Slide 14 text

Not too complex, thanks NumPy

    def two_opt(solution, cost, count, calc_cost):
        best_cost = cost = calc_cost(solution)
        improving = True
        while improving:
            improving = False
            for i in range(1, count - 2):
                for j in range(i + 1, count):
                    # reverse the segment between i and j
                    solution[i:j+1] = solution[j-count:i-count-1:-1]
                    cost = calc_cost(solution)
                    if cost < best_cost:
                        best_cost = cost
                        improving = True
                    else:
                        # not an improvement: reverse back
                        solution[i:j+1] = solution[j-count:i-count-1:-1]
        return solution, best_cost

Slide 15

Slide 15 text

Demo

Slide 16

Slide 16 text

Local Search Issues ● Gets stuck at a local minimum. ● Cannot prove a solution is optimal.

Slide 17

Slide 17 text

Meta-heuristics

Slide 18

Slide 18 text

Guided Local Search (GLS) Each time we reach a local minimum, penalise the edge with the maximum utility. Then use these penalties in a modified cost function.
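The penalty step can be sketched in isolation: at each local minimum, compute utility = distance / (1 + penalty) for every edge in the tour, and penalise the edge scoring highest. This is a self-contained illustration of the idea (names and data are made up; the slide's version works on NumPy arrays and module-level state):

```python
import collections

def penalise_edge(tour, distances, penalties):
    # edges of the closed tour, including the edge back to the start
    edges = list(zip(tour, tour[1:] + tour[:1]))
    # utility: long, rarely-penalised edges score highest
    utility = [distances[a][b] / (1 + penalties[(a, b)]) for a, b in edges]
    a, b = edges[utility.index(max(utility))]
    # symmetric penalty, as in the slide's penalise()
    penalties[(a, b)] += 1
    penalties[(b, a)] += 1
    return (a, b)

D = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]

penalties = collections.defaultdict(int)
first = penalise_edge([0, 2, 1, 3], D, penalties)   # longest edge (0, 2)
second = penalise_edge([0, 2, 1, 3], D, penalties)  # (0, 2) now penalised, so (1, 3)
```

Repeated penalties make long edges look expensive to the modified cost function, pushing the local search out of the minimum.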

Slide 19

Slide 19 text

Utility Function and the Penalties

    def penalise(solution):
        sol_dists = distances[solution[0:-1], solution[1:]]
        sol_dists = numpy.append(sol_dists,
                                 (distances[solution[0], solution[-1]],))
        sol_pens = penalties[solution[0:-1], solution[1:]]
        sol_pens = numpy.append(sol_pens,
                                (penalties[solution[0], solution[-1]],))
        utility = sol_dists / (sol_pens + 1)
        index_a = numpy.argmax(utility)
        index_b = index_a + 1 if index_a != count - 1 else 0
        a = solution[index_a]
        b = solution[index_b]
        penalties[a, b] += 1.0
        penalties[b, a] += 1.0

Slide 20

Slide 20 text

Cost Function with Penalties

    def gls_calc_cost(solution, distances, penalties, best_cost):
        cost = calc_cost(solution, distances)
        sol_pens = penalties[solution[:-1], solution[1:]]
        gls_cost = (
            cost +
            LAMBDA *                   # lambda (generally 0.2 to 0.3 for TSP)
            (best_cost / count + 1) *  # alpha
            (sol_pens.sum() + penalties[solution[-1], solution[0]]))
        return gls_cost

Slide 21

Slide 21 text

Search function

    while True:
        solution, gls_cost = two_opt(
            solution, gls_cost, len(distances), gls_calc_cost, redraw_guess)
        cost = calc_cost(solution, distances)
        if cost < best_cost:
            best_cost = cost
            best_solution[:] = solution
        penalise(solution)

Slide 22

Slide 22 text

Demo

Slide 23

Slide 23 text

Guided Local Search Issues ● Still cannot prove a solution is optimal. ● Will never terminate on its own.

Slide 24

Slide 24 text

Resources

Demo source
https://github.com/hartror/gls_tsp

Christos Voudouris, Edward Tsang, "Guided local search and its application to the traveling salesman problem", European Journal of Operational Research 113 (1999) 469-499
http://www.ceet.niu.edu/faculty/ghrayeb/IENG576s04/papers/Local%20Search/local%20search%20for%20tsp.pdf

TSP Problem Library
http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/