Figure 3: Left to right: approximation of the function sinc x with precisions ε = 0.1, 0.2, and 0.5. The solid top and bottom
lines indicate the size of the ε-tube; the dotted line in between is the regression.
Figure 4: Left to right: regression (solid line), data points (small dots), and SVs (big dots) for an approximation with ε = 0.1, 0.2,
and 0.5. Note the decrease in the number of SVs.
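The effect shown in Figure 4 is easy to reproduce with off-the-shelf software. The following is a minimal sketch, assuming scikit-learn's SVR class and NumPy (neither appears in the original text; NumPy's sinc is the normalized variant sin(πx)/(πx), which does not change the qualitative picture): as ε grows, more points fall inside the ε-tube and the number of support vectors drops.

    import numpy as np
    from sklearn.svm import SVR

    X = np.linspace(-1.0, 1.0, 100).reshape(-1, 1)
    y = np.sinc(X).ravel()

    for eps in (0.1, 0.2, 0.5):
        model = SVR(kernel="rbf", C=1.0, epsilon=eps)
        model.fit(X, y)
        # support_ holds the indices of the support vectors (the "big dots")
        print(f"epsilon = {eps}: {len(model.support_)} support vectors")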
5 Optimization Algorithms
While a large number of implementations of
SV algorithms have appeared in the past years, we focus on a few algorithms
which will be presented in greater detail. This selection is
somewhat biased, as it contains those algorithms the authors
are most familiar with. However, we think that this overview
contains some of the most effective ones and will be useful for
practitioners who would like to actually code an SV machine
themselves. But before doing so we will briefly cover major
optimization packages and strategies.
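To see the bare bones before turning to the packages below, consider the following minimal sketch, assuming NumPy and SciPy (neither is used in this paper): it feeds the ε-SV regression dual directly to a general-purpose constrained solver. The specialized packages discussed next exploit the structure of this QP and scale far better, but the sketch shows all the moving parts.

    import numpy as np
    from scipy.optimize import minimize

    def svr_dual(K, y, C=1.0, eps=0.1):
        """Solve the eps-SVR dual QP; variables z = (alpha, alpha_star)."""
        n = len(y)

        def objective(z):
            d = z[:n] - z[n:]                    # d = alpha - alpha*
            return 0.5 * d @ K @ d + eps * z.sum() - y @ d

        cons = {"type": "eq", "fun": lambda z: np.sum(z[:n] - z[n:])}
        res = minimize(objective, np.zeros(2 * n),
                       bounds=[(0, C)] * (2 * n),
                       constraints=[cons], method="SLSQP")
        return res.x[:n] - res.x[n:]             # expansion coefficients

    # toy usage: fit sinc on a small sample with a Gaussian kernel
    X = np.linspace(-1, 1, 30)
    y = np.sinc(X)
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / 0.1)
    coef = svr_dual(K, y, C=1.0, eps=0.1)
    print("support vectors:", np.sum(np.abs(coef) > 1e-6))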
5.1 Implementations
OSL by IBM Corporation [1992] uses a two-phase algorithm.
The first step consists of solving a linear approximation
of the QP problem by the simplex algorithm. When successive
approximations are close enough together, the second sub-
algorithm, which permits a quadratic objective and con-
verges very rapidly from a good starting value, is used.
Recently an interior point algorithm was added to the
software suite.
CPLEX by CPLEX Optimization Inc. [1994] uses a primal-
dual logarithmic barrier algorithm [Megiddo, 1989] instead,
with a predictor-corrector step (see e.g. [Lustig et al.,
1992, Mehrotra and Sun, 1992]).
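To make the barrier idea concrete, here is a sketch of a plain primal log-barrier method for a box-constrained QP, min ½x'Qx + c'x subject to 0 ≤ x ≤ C. This is a simplified relative of what CPLEX implements, not its actual algorithm: the primal-dual predictor-corrector method additionally tracks dual variables and takes corrector steps. Assumes only NumPy.

    import numpy as np

    def barrier_qp(Q, c, C, t=1.0, mu=10.0, tol=1e-8, newton_steps=50):
        n = len(c)
        x = np.full(n, C / 2.0)                 # strictly feasible start
        m = 2 * n                               # number of inequality constraints
        while m / t > tol:                      # standard barrier gap bound m/t
            for _ in range(newton_steps):       # centering by damped Newton
                g = t * (Q @ x + c) - 1.0 / x + 1.0 / (C - x)
                H = t * Q + np.diag(1.0 / x**2 + 1.0 / (C - x)**2)
                dx = np.linalg.solve(H, -g)
                step = 1.0
                while np.any(x + step * dx <= 0) or np.any(x + step * dx >= C):
                    step *= 0.5                 # stay strictly inside the box
                x = x + step * dx
                if np.abs(g @ dx) < tol:
                    break
            t *= mu                             # sharpen the barrier
        return x

    # toy usage on a random positive definite QP
    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5))
    print(barrier_qp(A @ A.T + np.eye(5), rng.standard_normal(5), C=1.0))

The barrier terms -log x_i and -log(C - x_i) push iterates away from the bounds; increasing t shifts the weight toward the true objective, so the iterates follow the central path toward the constrained optimum.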
MINOS by the Stanford Optimization Laboratory [Murtagh
and Saunders, 1983] uses a reduced gradient algorithm
in conjunction with a quasi-Newton algorithm. The con-