(ευρίσκειν), which means "to find"
l Faster than mathematical optimization (branch & bound, simplex, etc.)
l "Meta" means "beyond, at a higher level"
l Meta-heuristics are strategies that "guide" the search process
l The goal is to explore the search space in order to find (near-)optimal solutions
l Meta-heuristics are not problem-specific
l The basic concepts of meta-heuristics permit an abstract-level description
l They may incorporate mechanisms to avoid getting trapped in confined areas of the search space
algorithms (hill-climbing…)
l Meta-heuristics
   l Trajectory methods: describe a trajectory in the search space during the search process
      l Variable Neighbourhood Search
      l Iterated Local Search
      l Simulated Annealing
      l Tabu Search
   l Population-based: perform search processes which describe the evolution of a set of points in the search space
      l Evolutionary computation
initially empty partial solution) components, until the solution is complete
l A greedy algorithm works in phases. At each phase:
   l You take the best you can get right now, without regard for future consequences
   l You hope that by choosing a local optimum at each step, you will end up at a global optimum
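A minimal sketch of this pattern, shown here on the classic coin-change problem (the problem, the function name, and the denominations are illustrative assumptions, not from the slides):

```python
def greedy_coin_change(amount, denominations):
    """Build a solution one component at a time: at each phase,
    take the largest coin that still fits, never reconsidering."""
    solution = []                                  # initially empty partial solution
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:                      # best component available right now
            solution.append(coin)
            amount -= coin
    return solution if amount == 0 else None       # greedy may fail or be sub-optimal

# Greedy happens to be optimal for these denominations...
print(greedy_coin_change(63, [1, 5, 10, 25]))      # [25, 25, 10, 1, 1, 1]
# ...but taking the local optimum at each step can miss the global optimum:
print(greedy_coin_change(6, [1, 3, 4]))            # [4, 1, 1] instead of [3, 3]
```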
solution
l Explore the neighbourhood of the current solution
l Replace the current solution by a better solution
l Neighbourhood: given a search space X, we define the neighbourhood system N on X as N : x → N(x), which maps each solution x ∈ X to a set of neighbouring solutions N(x) ⊆ X
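As an illustrative assumption (not from the slides), here is one concrete neighbourhood system: solutions encoded as binary tuples, with N(x) containing every tuple obtained by flipping exactly one bit of x:

```python
def bit_flip_neighbourhood(x):
    """N(x): every binary tuple that differs from x in exactly one position."""
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

print(bit_flip_neighbourhood((1, 0, 1)))
# [(0, 0, 1), (1, 1, 1), (1, 0, 0)]
```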
and termination criteria
l Stochastic: choose a neighbour at random
l Hill climbing: only permit moves to neighbours that improve the current solution
   l Greedy – move to the best neighbour
   l Anxious – move to the first improving neighbour found
   l Sideways moves – also allow moves to neighbours with the same fitness
state (solution) in memory
l Generate only the neighbours of that state
l Keep one of the neighbours and discard the others
l Key features:
   l Not systematic
   l Not incremental
l Key advantages:
   l Use very little memory (a constant amount)
   l Can find solutions in search spaces too large for systematic algorithms
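A minimal hill-climbing sketch tying the last two slides together; the fitness function, the one-bit-flip neighbourhood, and all names are illustrative assumptions. Only the current solution is kept in memory; the greedy strategy (best neighbour) is used, with the anxious (first-improvement) variant noted in a comment:

```python
def hill_climbing(initial, fitness, neighbourhood):
    """Greedy hill climbing: keep only the current solution in memory and
    move to the best improving neighbour until a local optimum is reached."""
    current = initial
    while True:
        best_neighbour = max(neighbourhood(current), key=fitness)   # greedy: best neighbour
        # anxious variant: take the first neighbour y with fitness(y) > fitness(current)
        if fitness(best_neighbour) <= fitness(current):
            return current                       # no improving neighbour: local optimum
        current = best_neighbour                 # all other neighbours are discarded

def one_bit_flip(x):
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

# Illustrative fitness: maximize the number of 1-bits.
print(hill_climbing((0, 1, 0, 0), fitness=sum, neighbourhood=one_bit_flip))
# (1, 1, 1, 1)
```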
of landscape
l The shape of the landscape depends on the problem formulation and the fitness function
l Landscapes for realistic problems often look like a worst-case scenario
l NP-hard problems typically have an exponential number of local minima
l Failings of neighbourhood search:
   l Propensity to deliver solutions which are only local optima
   l Solutions depend on the initial solution
solution space
l Uses memory techniques to avoid cycling
l "The overall approach is to avoid entrainment in cycles by forbidding or penalizing moves which take the solution, in the next iteration, to points in the solution space previously visited" (Fred Glover, Computers and Operations Research, 1986)
are marked as tabu
l Change the value of x from false to true: [x = false] is tabu
l Swap elements i and j: [change j, i] is tabu
l Drop i and add j: [add i] and [drop j] are tabu
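A minimal tabu-search sketch under illustrative assumptions: binary solutions, one-bit-flip moves whose flipped position becomes tabu for a fixed tenure, and a fitness function to maximize; aspiration criteria and the penalizing (rather than forbidding) treatment of tabu moves are omitted:

```python
from collections import deque

def tabu_search(initial, fitness, iterations=100, tenure=3):
    """Tabu search on binary tuples: take the best admissible (non-tabu)
    one-bit-flip move; recently flipped positions stay tabu for `tenure` moves."""
    current = best = initial
    tabu = deque(maxlen=tenure)                    # short-term memory of move attributes
    for _ in range(iterations):
        candidates = [(i, current[:i] + (1 - current[i],) + current[i + 1:])
                      for i in range(len(current)) if i not in tabu]
        if not candidates:
            break
        i, current = max(candidates, key=lambda move: fitness(move[1]))
        tabu.append(i)                             # reversing this move is forbidden for a while
        if fitness(current) > fitness(best):
            best = current
    return best

# Illustrative run: maximize the number of 1-bits.
print(tabu_search((0, 0, 1, 0, 1), fitness=sum))   # (1, 1, 1, 1, 1)
```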
of the links used is a minimum
l All the points are connected together
l Constraints:
   l x1 + x2 + x6 ≤ 1
   l x1 ≤ x3
l Penalty: 50
[Figure: example graph with candidate links x1–x7 and edge costs 6, 2, 0, 8, 12, 18, 9]
yields relatively good solutions to previously intractable problems
   § Tabu Search provides solutions comparable or superior to those of other optimization techniques
l Cons:
   § Tabu Search does not guarantee optimality
   § Tabu Search is awkward for problems with continuous variables
   § Tabu Search assumes fast evaluation of a move's performance (fitness)
   § The construction of the tabu list is itself heuristic
Most of the initial moves must be accepted (> 60%)
l Cooling schedule
l Temperature is reduced after every move
l Two main methods:
   l T ← αT, with α close to 1
   l T ← T / (1 + βT), with β close to 0
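A minimal simulated-annealing sketch; the cost function, move operator, parameter values, and the Metropolis acceptance rule exp(−Δ/T) are illustrative assumptions (the acceptance rule is not spelled out on this slide). Cooling uses the first method above, T ← αT after every move:

```python
import math
import random

def simulated_annealing(initial, cost, random_neighbour,
                        t_initial=10.0, alpha=0.95, iterations=1000):
    """Simulated annealing: always accept improving moves, accept worsening
    moves with probability exp(-delta/T), and cool geometrically, T <- alpha*T."""
    current, t = initial, t_initial
    best = current
    for _ in range(iterations):
        candidate = random_neighbour(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= alpha                                 # temperature reduced after every move
    return best

# Illustrative run: minimize (x - 3)^2 over integers, moving +/- 1 at a time.
random.seed(0)
result = simulated_annealing(
    initial=20,
    cost=lambda x: (x - 3) ** 2,
    random_neighbour=lambda x: x + random.choice((-1, 1)),
)
print(result)   # expected to end near x = 3
```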