Slide 1

Learning to Remove Soft Shadows
Maciej Gryka, RainforestQA · Michael Terry, University of Waterloo · Gabriel Brostow, University College London

Slides 2-8

(no extracted text)

Slide 9

Now we can remove and edit shadows. Exciting!
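Shadow removal and editing of this kind rest on the standard multiplicative model: a shadowed image is the shadow-free image times a per-pixel shadow matte, so removal is division and editing is re-applying a modified matte. A minimal sketch under that model (function names are ours, not the paper's):

```python
import numpy as np

def remove_shadow(image, matte, eps=1e-6):
    """Recover the unshadowed image under the multiplicative model
    shadowed = unshadowed * matte, with matte in (0, 1] (1 = no shadow)."""
    return image / np.clip(matte, eps, 1.0)

def edit_shadow(unshadowed, matte, strength=0.5):
    """Re-apply a weakened version of the matte to soften the shadow."""
    return unshadowed * (1.0 - strength * (1.0 - matte))

# toy example: a flat grey image under a soft shadow ramp
img_free = np.full((4, 4), 0.8)
matte = np.linspace(0.4, 1.0, 16).reshape(4, 4)
recovered = remove_shadow(img_free * matte, matte)
print(np.allclose(recovered, img_free))  # True
```

With `strength=1.0`, `edit_shadow` re-creates the original shadow exactly; smaller values leave a fainter one.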

Slide 10

Previous Work

- Intrinsic images: [Barrow & Tenenbaum 1978], [Tappen et al. 2005], [Bousseau et al. 2009]
- Shadow rendering: [Parker et al. 1998], [Chan and Durand 2003]
- Hard shadow removal: [Finlayson et al. 2009] (illumination-invariant images), [Shor and Lischinski 2008]
- Soft shadow removal: [Mohan et al. 2007], [Arbel and Hel-Or 2011]; matting methods: [Wu et al. 2007], [Guo et al. 2012]

Slide 11

Soft shadows: what do penumbrae look like?


Slide 19

Instead, we looked at many examples.


Slide 21

The algorithm taught itself how penumbrae behave, while ignoring texture variation.

Slide 22

System Overview

- Input: an RGB image and a binary mask.
- Inpaint the masked region to get an initial unshadowed guess.
- Divide the region into patches and align them.
- Compute a feature vector f( ) for each masked intensity patch.
- Use a trained Random Forest to obtain several matte suggestions per patch.
- Re-align and regularize to get a single matte patch per site.
- Optimize for color.
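The pipeline can be sketched end-to-end as toy code (single-channel only, and with heavy stand-ins: a mean-fill instead of guided inpainting, a caller-supplied `suggest_mattes` in place of the trained Random Forest, and a per-pixel median in place of the regularization step):

```python
import numpy as np

def remove_soft_shadow(image, mask, suggest_mattes, patch=4):
    """Toy pipeline for a single-channel image. `suggest_mattes` stands
    in for the trained Random Forest: given an initial-guess matte
    patch, it returns a list of candidate matte patches."""
    # 1. initial unshadowed guess: fill the masked region with the
    #    mean of the unmasked pixels (stand-in for inpainting)
    guess = image.copy()
    guess[mask] = image[~mask].mean()
    init_matte = image / np.maximum(guess, 1e-6)

    # 2. per patch: query candidates, then collapse them with a
    #    per-pixel median (stand-in for regularization)
    matte = np.ones_like(image)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            if mask[y:y+patch, x:x+patch].any():
                cands = suggest_mattes(init_matte[y:y+patch, x:x+patch])
                matte[y:y+patch, x:x+patch] = np.median(cands, axis=0)

    # 3. unshadow by dividing out the matte
    return image / np.clip(matte, 1e-6, 1.0), matte
```

On a synthetic image whose shadow matches the initial guess, this recovers a flat shadow-free result; the real system differs at every stage but follows the same shape.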


Slide 27

Guided inpainting

[figure: input image → guided inpainting result]

Slide 28

Guided inpainting

[figure: similar pixels outside the mask guide the fill]
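The idea that similar unmasked pixels should drive the fill can be illustrated with a toy nearest-neighbor fill (this is neither PatchMatch nor the paper's guided inpainting; `guided_fill` and its 5-neighbor choice are our own illustration):

```python
import numpy as np

def guided_fill(image, mask, guide):
    """Fill each masked pixel with the average of the unmasked pixels
    whose guide values are most similar to its own."""
    out = image.copy()
    src_vals = image[~mask]
    src_guide = guide[~mask]
    for y, x in zip(*np.nonzero(mask)):
        d = np.abs(src_guide - guide[y, x])
        nearest = np.argsort(d)[:5]   # 5 most similar source pixels
        out[y, x] = src_vals[nearest].mean()
    return out
```

A masked pixel whose guide value says "bright texture" is filled from bright-texture sources rather than from its spatial neighbors, which is the point of guiding the fill.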

Slide 29

Off-the-shelf inpainting

Barnes et al., PatchMatch: a Randomized Correspondence Algorithm for Structural Image Editing, SIGGRAPH 2009

Slide 30

Preprocessing: patch alignment


Slide 34

Preprocessing: patch alignment

- Minimize the amount of training data needed.
- Only learn what we cannot parameterize: the penumbra fall-off.
- Find a Euclidean transform for each patch that brings it as close as possible to the template patch.
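A discrete stand-in makes the alignment step concrete: the paper fits a continuous Euclidean transform per patch, but already searching the 8 axis-aligned isometries (4 rotations × optional flip) shows the idea (`align_patch` is our name):

```python
import numpy as np

def align_patch(patch, template):
    """Search the 8 axis-aligned isometries for the one bringing `patch`
    closest (in L2) to `template`. Returns the aligned patch and the
    (rotation, flip) that achieved it. Illustrative only: the paper
    uses continuous Euclidean transforms."""
    best = None
    for flip in (False, True):
        p = np.fliplr(patch) if flip else patch
        for rot in range(4):
            q = np.rot90(p, rot)
            err = np.sum((q - template) ** 2)
            if best is None or err < best[0]:
                best = (err, q, rot, flip)
    return best[1], (best[2], best[3])
```

Because every patch is brought into a canonical orientation before learning, the forest never has to see (or be trained on) rotated copies of the same penumbra profile, which is what keeps the training-data requirement down.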

Slide 35

Learning: customized Regression Random Forests

Effectively a nearest-neighbor search in a non-Euclidean space, guided by the training data.
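The "nearest-neighbor search guided by training data" view can be sketched with a drastically simplified forest: here splits are random median thresholds rather than learned split functions, trees are grown at query time for brevity, and leaves store training indices so each tree proposes the mattes of the training patches it routes the query to:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_tree(X, idx, depth):
    """Internal nodes split on a random feature at its median (a toy
    stand-in for learned splits); leaves store training-example indices."""
    if depth == 0 or len(idx) <= 2:
        return ('leaf', idx)
    f = rng.integers(X.shape[1])
    t = np.median(X[idx, f])
    left = idx[X[idx, f] <= t]
    right = idx[X[idx, f] > t]
    if len(left) == 0 or len(right) == 0:
        return ('leaf', idx)
    return ('node', f, t, build_tree(X, left, depth - 1),
                          build_tree(X, right, depth - 1))

def query(tree, x):
    while tree[0] == 'node':
        _, f, t, l, r = tree
        tree = l if x[f] <= t else r
    return tree[1]

def forest_suggestions(X_train, mattes, x, n_trees=5, depth=4):
    """Each tree routes the query feature vector to a leaf and proposes
    the mattes of the training patches stored there, so the forest as a
    whole returns several matte suggestions."""
    out = []
    for _ in range(n_trees):
        tree = build_tree(X_train, np.arange(len(X_train)), depth)
        out.extend(mattes[i] for i in query(tree, x))
    return out
```

Returning several suggestions, rather than averaging leaf contents into one regression value, is what lets the later regularization step choose among plausible penumbra shapes.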

Slide 36

Learning


Slide 42

[figure: decision tree with true/false branches] [Criminisi et al. 2013]


Slide 46

Feature vector

Our feature vector contains:
- normalized pixel intensity values (shifted in the intensity domain so that their mean falls at 0.5),
- x- and y-gradients (finite differences),
- normalized distance from the edge of the user-masked region,
- the predicted matte for this patch (initial guess).
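Assembling such a per-patch feature vector might look as follows (names are ours; `np.gradient`'s central differences stand in for the finite differences mentioned above, and the normalizations are illustrative):

```python
import numpy as np

def patch_features(intensity, dist_from_edge, initial_guess):
    """Concatenate the four feature groups for one (aligned) patch.
    All inputs are 2-D arrays over the patch."""
    norm = intensity - intensity.mean() + 0.5   # shift mean to 0.5
    gy, gx = np.gradient(intensity)             # finite differences
    dist = dist_from_edge / max(dist_from_edge.max(), 1e-6)
    return np.concatenate([norm.ravel(), gx.ravel(), gy.ravel(),
                           dist.ravel(), initial_guess.ravel()])
```

Shifting intensities so their mean sits at 0.5 removes dependence on absolute brightness, so a dark and a bright patch with the same penumbra profile map to nearby feature vectors.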

Slide 47

Feature frequency

[bar chart: how often each feature type is selected by the forest — distance from the edge, initial guess, x-gradient, y-gradient, normalized intensity]

Slide 48

Regularization

Slide 49

Regularization

Kolmogorov, V., Convergent Tree-reweighted Message Passing for Energy Minimization, PAMI 2006
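The paper regularizes over a 2-D grid of patch sites with TRW-S [Kolmogorov 2006]; an exact dynamic-programming solver on a 1-D chain illustrates the same energy (a data term per site plus a smoothness term between neighboring matte candidates). `choose_labels` and its inputs are our own illustration:

```python
import numpy as np

def choose_labels(candidates, unary, lam=1.0):
    """Pick one matte candidate per site on a 1-D chain, trading data
    fit (`unary`) against smoothness between neighbours, by Viterbi-style
    dynamic programming.

    candidates: (n_sites, n_labels) candidate matte values per site
    unary:      (n_sites, n_labels) data costs
    """
    n, k = candidates.shape
    cost = unary[0].copy()
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        # pairwise cost between every (previous, current) label pair
        pair = lam * (candidates[i][None, :] - candidates[i - 1][:, None]) ** 2
        total = cost[:, None] + pair
        back[i] = total.argmin(axis=0)
        cost = total.min(axis=0) + unary[i]
    labels = np.zeros(n, dtype=int)
    labels[-1] = cost.argmin()
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i, labels[i]]
    return labels
```

With a small smoothness weight the solver follows the data term site by site; with a large one it prefers a constant matte even where individual sites disagree, which is exactly the trade-off the grid regularization makes.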

Slide 50

Post-processing

- Before “putting the patches down” on the graph, we re-align them to their original orientation.
- After regularization we have a single-channel shadow matte.
- A final optimization recovers color.

Slide 51

Post-processing

[figure: naive solution]

Slide 52

Post-processing: color optimization

[figure: matte values in the 0.0-1.0 range before and after color optimization]
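One simple way to turn the single-channel matte into a per-channel one is to fit, for each color channel, a scale on the shadow depth so that re-shadowing the unshadowed estimate best matches the input. This closed-form least-squares fit is our illustrative stand-in for the paper's color optimization, not its actual formulation:

```python
import numpy as np

def colorize_matte(gray_matte, shadowed, unshadowed_est, mask):
    """Fit, per colour channel c, a scale k_c in
        matte_c = 1 - k_c * (1 - gray_matte)
    minimizing sum((unshadowed_est * matte_c - shadowed)^2) inside the
    mask, and return the stacked per-channel matte."""
    d = (1.0 - gray_matte)[mask]          # shadow "depth" per pixel
    mattes = []
    for c in range(shadowed.shape[-1]):
        u = unshadowed_est[..., c][mask]
        s = shadowed[..., c][mask]
        # d/dk of sum((u - u*k*d - s)^2) = 0  ->  closed-form k
        num = np.sum(u * d * (u - s))
        den = np.sum((u * d) ** 2) + 1e-12
        k = num / den
        mattes.append(1.0 - k * (1.0 - gray_matte))
    return np.stack(mattes, axis=-1)
```

Because real shadows attenuate blue light less than red under a blue sky, the fitted k values typically differ per channel, which is what the naive single-channel division misses.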

Slide 53

Post-processing

[figure: naive solution vs. our solution]

Slide 54

Results!

Slides 55-56

(no extracted text)

Slide 57

[figure comparison: input, our matte, ours, Arbel & Hel-Or 2011, Guo et al. 2012]



Slide 61

[figure: input, initial guess, output]

Slide 62

[figure: input, input mask, output]

Slide 63

Summary of results

- The first perceptual user study of soft shadow removal methods.
- 2x more soft shadow images than previously available.
- Our results were chosen as the most convincing overall.
- Our method also wins on synthetic measures (distance to ground truth), but this is not a good success criterion.


Slide 65

Two-phase perceptual study

- Ranking
- Scoring

Slide 66

Probability of winning the ranking

[plot: probability density over the probability of winning]
Kruschke, J., Doing Bayesian Data Analysis, 2011
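A density over "probability of winning" like the one plotted here can come from a standard Beta-Binomial model in the spirit of Kruschke's book (the study's exact model may differ; this is only a sketch):

```python
import numpy as np

def win_probability_posterior(wins, trials, samples=100_000, seed=0):
    """Posterior over the probability that a method wins a pairwise
    ranking, under a uniform Beta(1, 1) prior: posterior is
    Beta(1 + wins, 1 + losses). Returns the posterior mean and a 95%
    credible interval estimated from Monte Carlo samples."""
    rng = np.random.default_rng(seed)
    post = rng.beta(1 + wins, 1 + trials - wins, size=samples)
    lo, hi = np.percentile(post, [2.5, 97.5])
    return post.mean(), (lo, hi)
```

Reporting a full posterior rather than a single win rate makes it visible whether two methods' winning probabilities genuinely separate or merely differ by noise.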


Slide 68

Limitations

Slide 69

Limitations

[figure: input image vs. unshadowed result]

Slide 70

Limitations

[figure: initial guess (inpainting) vs. unshadowed result]

Slide 71

Conclusions

Our method learns about physical phenomena from synthetic data. It uses the fruits of graphics research together with machine learning to create perceptually superior results.

See our website for:
- paper + video + results
- data generation / rendering scripts
- user study code
- algorithm code

http://visual.cs.ucl.ac.uk/pubs/softshadows/

Slide 72

Thanks!