Slide 1

Regret Ratio Minimization in Multi-objective Submodular Function Maximization
Tasuku Soma (U. Tokyo), with Yuichi Yoshida (NII & PFI)

Slide 2

Submodular Function Maximization
f : 2^E → R_+ is submodular: f(X + e) − f(X) ≥ f(Y + e) − f(Y) for all X ⊆ Y, e ∈ E \ Y (“diminishing return”; see the toy check below)
Problem: max f(X) s.t. X ∈ C
Applications
• Influence Maximization
• Data Summarization, etc.
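For intuition, here is a minimal Python sketch (my own toy example, not from the slides) of a submodular function and the diminishing-return inequality:

```python
# Toy check (illustrative only): set coverage is a classic submodular function,
# so we can spot-check f(X + e) - f(X) >= f(Y + e) - f(Y) for X ⊆ Y.

def coverage(sets, X):
    """f(X) = number of ground elements covered by the sets indexed by X."""
    covered = set()
    for i in X:
        covered |= sets[i]
    return len(covered)

sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}  # toy instance

X = {0}            # X ⊆ Y
Y = {0, 1, 2}
e = 3              # e ∈ E \ Y

gain_X = coverage(sets, X | {e}) - coverage(sets, X)
gain_Y = coverage(sets, Y | {e}) - coverage(sets, Y)
print(gain_X, gain_Y)      # 1 0
assert gain_X >= gain_Y    # diminishing return holds
```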

Slide 3

Multiple Criteria

Slide 4

Multiple Criteria

Slide 5

Multiple Criteria
1. coverage

Slide 6

Multiple Criteria
1. coverage
2. diversity

Slide 7

Multi-objective Optimization? × Exponentially Many Pareto Solutions!

Slide 8

Multi-objective Optimization? × Exponentially Many Pareto Solutions!
“Good” Subsets of Pareto Solutions
• k-representative skyline queries [Lin et al. 07, Tao et al. 09]
• top-k dominating queries [Yiu and Mamoulis 09]
• regret minimizing database [Nanongkai et al. 10]

Slide 9

Multi-objective Optimization? × Exponentially Many Pareto Solutions!
“Good” Subsets of Pareto Solutions
• k-representative skyline queries [Lin et al. 07, Tao et al. 09]
• top-k dominating queries [Yiu and Mamoulis 09]
• regret minimizing database [Nanongkai et al. 10]
Issue: these assume that the data points are explicitly given...

Slide 10

Our Results
Extend the regret ratio framework to submodular maximization

Slide 11

Our Results
Extend the regret ratio framework to submodular maximization
Upper Bound: given an α-approx algorithm for the (weighted) single-objective problem,
• regret ratio 1 − α/d for any d
• regret ratio 1 − α + O(1/k) for any k and d = 2
(d = # of objectives, k = # of solutions in the output family S, i.e. |S| ≤ k)

Slide 12

Our Results
Extend the regret ratio framework to submodular maximization
Upper Bound: given an α-approx algorithm for the (weighted) single-objective problem,
• regret ratio 1 − α/d for any d
• regret ratio 1 − α + O(1/k) for any k and d = 2
(d = # of objectives, k = # of solutions in the output family S, i.e. |S| ≤ k)
Lower Bound
• Even if α = 1 and d = 2, it is impossible to achieve regret ratio o(1/k²).

Slide 13

Regret Ratio
Single Objective: the regret ratio of S ⊆ C with respect to f is
rr(S) = 1 − (max_{X∈S} f(X)) / (max_{X∈C} f(X)).

Slide 14

Regret Ratio
Single Objective: the regret ratio of S ⊆ C with respect to f is
rr(S) = 1 − (max_{X∈S} f(X)) / (max_{X∈C} f(X)).
Multi Objective: the regret ratio of S ⊆ C with respect to f_1, …, f_d is
rr_{f_1,…,f_d,C}(S) = max_{a∈R^d_+} rr_{f_a,C}(S), where f_a := a_1 f_1 + ⋯ + a_d f_d (linear weighting).
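A numerical sketch of this definition for d = 2 (my own illustration, not the authors' code): because rr_{f_a,C}(S) is invariant to rescaling a, it suffices to sweep directions a = (cos t, sin t) over t ∈ [0, π/2]; a finite sweep over explicit S and C only estimates the maximum.

```python
import math

# Estimate rr(S) = max_{a >= 0} [ 1 - max_{X in S} f_a(X) / max_{X in C} f_a(X) ]
# for d = 2 by sweeping weight directions on the quarter circle.

def regret_ratio_2d(f1, f2, C, S, num_dirs=1000):
    worst = 0.0
    for i in range(num_dirs + 1):
        t = (math.pi / 2) * i / num_dirs
        a1, a2 = math.cos(t), math.sin(t)
        fa = lambda X, a1=a1, a2=a2: a1 * f1(X) + a2 * f2(X)
        best_S = max(fa(X) for X in S)
        best_C = max(fa(X) for X in C)
        if best_C > 0:
            worst = max(worst, 1.0 - best_S / best_C)
    return worst
```

Here S and C are explicit iterables of feasible sets and f1, f2 are nonnegative callables; on real instances the inner max_{X∈C} f_a(X) is itself intractable, which is why the guarantees above are stated relative to an α-approximation oracle.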

Slide 15

Geometry of Regret Ratio
[Figure: Pareto-optimal points in the (f1, f2) plane.]

Slide 16

Geometry of Regret Ratio
[Figure: (f1, f2) plane showing the Pareto-optimal points, the points of S, and the region P(S).]

Slide 17

Geometry of Regret Ratio
[Figure: (f1, f2) plane showing the Pareto-optimal points, the points of S, the region P(S), and its scaling ε^{-1} P(S).]
rr(S) ≤ 1 − ε ⟺ f(X) ∈ ε^{-1} P(S) for all X ∈ C.
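A short derivation of this equivalence, under the (assumed) reading that P(S) is the down-closure, within R^d_+, of the convex hull of {f(X) := (f_1(X), …, f_d(X)) : X ∈ S}:

```latex
\begin{align*}
\mathrm{rr}(S) \le 1-\varepsilon
&\iff \forall a \in \mathbb{R}^d_+ :\;
      \max_{X \in S} f_a(X) \ge \varepsilon \max_{X \in \mathcal{C}} f_a(X)\\
&\iff \forall a \in \mathbb{R}^d_+,\ \forall X \in \mathcal{C} :\;
      \langle a, \varepsilon f(X)\rangle \le \max_{Z \in S} \langle a, f(Z)\rangle\\
&\iff \forall X \in \mathcal{C} :\; \varepsilon f(X) \in P(S)
 \iff \forall X \in \mathcal{C} :\; f(X) \in \varepsilon^{-1} P(S).
\end{align*}
% The last equivalence on the third line uses that a down-closed convex subset
% of R^d_+ is cut out by its supporting halfspaces with nonnegative normals
% (separating hyperplane argument).
```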

Slide 18

Regret Ratio Minimization
Given: f_1, …, f_d: submodular, C ⊆ 2^E, k > 0
minimize rr(S) subject to S ⊆ C, |S| ≤ k.

Slide 19

Algorithm 1: Coordinate
[Figure: (f1, f2) plane; an approx solution to max_{X∈C} f1(X) and an approx solution to max_{X∈C} f2(X).]
S_coord: α-approx. solutions to max_{X∈C} f_i(X) for i = 1, …, d
⟹ rr(S_coord) ≤ 1 − α/d.
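A minimal sketch of the Coordinate idea as I read it from this slide (not the authors' code); `approx_max(f, C)` is an assumed α-approximate single-objective maximizer, e.g. greedy or double greedy depending on C and the objectives:

```python
# Coordinate (sketch): one α-approximate single-objective run per objective.

def coordinate(objectives, C, approx_max):
    """objectives: list of callables f_1, ..., f_d; returns S_coord of size d."""
    return [approx_max(f, C) for f in objectives]
```

Roughly, the 1 − α/d guarantee follows because for any weights a, some single objective already contributes at least a 1/d fraction of f_a at the weighted optimum, and the α-approximate solution for that objective recovers an α fraction of it.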

Slide 20

Algorithm 2: Polytope
[Figure: (f1, f2) plane; a: normal vector.]

Slide 21

Algorithm 2: Polytope
[Figure: (f1, f2) plane; an approx solution to max_{X∈C} f_a(X) is added.]

Slide 22

Algorithm 2: Polytope
[Figure: (f1, f2) plane, continued.]

Slide 23

Algorithm 2: Polytope
[Figure: (f1, f2) plane, continued.]

Slide 24

Algorithm 2: Polytope
[Figure: (f1, f2) plane, continued.]
S: output of Polytope with d = 2
⟹ rr(S) ≤ 1 − α + O(1/k).
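The figure sequence suggests the following bi-objective procedure: start from the two coordinate solutions, then repeatedly take the outward normal a of a facet of the current polytope and add an approximate maximizer of f_a. A hedged Python sketch of that reading (not necessarily the exact algorithm in the paper); `approx_max` is again the assumed α-approximation oracle:

```python
# Polytope-style refinement for d = 2 (sketch based on the figure sequence,
# not the authors' code). Solutions are assumed hashable (e.g. frozensets).

def polytope(f1, f2, C, approx_max, k):
    sols = [approx_max(f1, C), approx_max(f2, C)]   # coordinate solutions
    frontier = [(sols[0], sols[1])]                  # facets left to probe
    while len(sols) < k and frontier:
        X, Y = frontier.pop(0)
        p, q = (f1(X), f2(X)), (f1(Y), f2(Y))
        # Outward normal of the segment pq, with nonnegative weights.
        a1, a2 = abs(p[1] - q[1]), abs(p[0] - q[0])
        if a1 == 0 and a2 == 0:
            continue
        fa = lambda Z, a1=a1, a2=a2: a1 * f1(Z) + a2 * f2(Z)
        Z = approx_max(fa, C)
        if Z in sols:                                # no new point in this direction
            continue
        sols.append(Z)
        frontier += [(X, Z), (Z, Y)]                 # refine the two new facets
    return sols
```

The facet-processing order, duplicate handling, and stopping rule here are my guesses; only the normal-vector idea is taken from the slides.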

Slide 25

Lower Bound
f1(X) = cos(π|X| / (2n)), f2(X) = sin(π|X| / (2n))
[Figure: the points lie on a quarter circle in the (f1, f2) plane; annotations: angle > π/(2k), distance = O(1/k²).]
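An illustrative back-of-the-envelope calculation (my own, consistent with the figure's annotations) for why such a gap scales as 1/k²: the points (f1(X), f2(X)) lie on the unit quarter circle, and if an arc of angle θ contains none of the k chosen points, the chord across that arc is as close as P(S) gets to the arc's midpoint:

```latex
\[
  \underbrace{1 - \cos\tfrac{\theta}{2}}_{\text{circle-to-chord distance (sagitta)}}
  \;\approx\; \frac{\theta^{2}}{8},
  \qquad
  \theta > \frac{\pi}{2k}
  \;\Longrightarrow\;
  1 - \cos\tfrac{\theta}{2} \;\ge\; 1 - \cos\frac{\pi}{4k}
  \;=\; \Theta\!\left(\frac{1}{k^{2}}\right),
\]
```

which is the geometric reason a regret ratio of o(1/k²) is out of reach on this instance.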

Slide 26

Experiment
Algorithms
• Coordinate
• Polytope
• Random: pick k random directions a_1, …, a_k and output the family {X_1, …, X_k} of solutions, where X_i is an approx solution to max_{X∈C} f_{a_i}(X) (sketch below)
Machine
• Intel Xeon E5-2690 (2.90 GHz) CPU, 256 GB RAM
• implemented in C#
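A hedged sketch of the Random baseline for d = 2 (how the directions are sampled is not stated on the slide; uniform angles over the quarter circle are assumed here, and `approx_max` is again the single-objective oracle):

```python
import math
import random

# Random baseline (sketch, not the authors' C# code): sample k nonnegative
# weight directions and solve each weighted problem approximately.

def random_baseline(f1, f2, C, approx_max, k, seed=0):
    rng = random.Random(seed)
    sols = []
    for _ in range(k):
        t = rng.uniform(0.0, math.pi / 2)            # random direction a_i
        a1, a2 = math.cos(t), math.sin(t)
        fa = lambda X, a1=a1, a2=a2: a1 * f1(X) + a2 * f2(X)
        sols.append(approx_max(fa, C))
    return sols
```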

Slide 27

Data Summarization
Dataset: MovieLens. E: set of movies, s_{i,j}: similarity between movies i and j
f1(X) = Σ_{i∈E} Σ_{j∈X} s_{i,j}   (coverage)
f2(X) = λ Σ_{i∈E} Σ_{j∈E} s_{i,j} − λ Σ_{i∈X} Σ_{j∈X} s_{i,j}   (diversity; both sketched below)
C = 2^E (unconstrained), 1 ≤ k ≤ 20, λ > 0
Single-objective algorithm: double greedy (1/2-approx) [Buchbinder et al. 12]
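A sketch of these two objectives in Python, given a nonnegative similarity matrix s (an assumed stand-in for the MovieLens similarities; the experiments themselves were implemented in C#):

```python
import numpy as np

# Objectives from the slide, for a similarity matrix s of shape (n, n) over
# the movie set E = {0, ..., n-1}; lam corresponds to the slide's λ.

def make_objectives(s, lam):
    total = s.sum()                          # Σ_{i∈E} Σ_{j∈E} s_{i,j}

    def f1(X):                               # coverage
        idx = list(X)
        return s[:, idx].sum()               # Σ_{i∈E} Σ_{j∈X} s_{i,j}

    def f2(X):                               # diversity
        idx = list(X)
        return lam * total - lam * s[np.ix_(idx, idx)].sum()

    return f1, f2
```

Note that f2 is non-monotone, so the weighted combinations f_a are non-monotone as well; the randomized double-greedy algorithm of [Buchbinder et al. 12] handles exactly this unconstrained, non-monotone setting with a 1/2 approximation guarantee.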

Slide 28

Result
[Plot: estimated regret ratio (log scale, 10^-3 to 10^1) vs. k (0 to 20) for Polytope, Random, and Coordinate.]

Slide 29

Result
[Plot: estimated regret ratio (log scale, 10^-3 to 10^1) vs. k (0 to 20) for Polytope, Random, and Coordinate.]
The regret ratio decreases dramatically.

Slide 30

Our Results
Extend the regret ratio framework to submodular maximization
Upper Bound: given an α-approx algorithm for the (weighted) single-objective problem,
• regret ratio 1 − α/d for any d
• regret ratio 1 − α + O(1/k) for any k and d = 2
(d = # of objectives, k = # of solutions in the output family S, i.e. |S| ≤ k)
Lower Bound
• Even if α = 1 and d = 2, it is impossible to achieve regret ratio o(1/k²).