Learning to Remove Soft Shadows
Maciej Gryka, RainforestQA
Michael Terry, University of Waterloo
Gabriel Brostow, University College London

Now we can remove and edit shadows. Exciting!

Previous Work
Intrinsic images
- [Barrow & Tenenbaum 1978]
- [Tappen et al. 2005]
- [Bousseau et al. 2009]
Shadow rendering
- [Parker et al. 1998]
- [Chan and Durand 2003]
Hard shadow removal
- [Finlayson et al. 2009] illumination-invariant images
- [Shor and Lischinski 2008]
Soft shadow removal
- [Mohan et al. 2007]
- [Arbel and Hel-Or 2011]
- [Wu et al. 2007], [Guo et al. 2012] matting methods

Soft shadows: what do penumbrae look like?
[sequence of example images]

Instead, we looked at many examples.

The algorithm taught itself how penumbrae behave, while ignoring texture variation.

System Overview
- Input: an RGB image and a binary mask.
- Inpaint the masked region to obtain an initial guess.
- Divide the masked region into patches and align them.
- Compute a feature vector f( ) for each masked intensity patch.
- Use a trained Random Forest to obtain several matte suggestions per patch.
- Re-align and regularize to get a single patch per site.
- Optimize for color.


Guided inpainting
[input image → guided inpainting result; similar pixels outside the mask guide the fill]

Off-the-shelf inpainting
Barnes et al., PatchMatch: a Randomized Correspondence Algorithm for Structural Image Editing, SIGGRAPH 2009
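As a toy stand-in for PatchMatch, the role of the initial fill can be sketched with simple diffusion inpainting (illustrative only; the method itself uses guided PatchMatch):

```python
import numpy as np

def diffuse_inpaint(img, mask, iters=200):
    """Toy diffusion inpainting: repeatedly replace masked pixels with the
    average of their 4-neighbors. A crude stand-in for PatchMatch, just to
    show where the in-shadow intensity estimate comes from."""
    out = img.copy()
    out[mask] = img[~mask].mean()  # crude initialization from the lit region
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]      # only masked pixels are updated
    return out
```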

Preprocessing: patch alignment
- Minimize the amount of training data needed.
- Only learn what we cannot parameterize: the penumbra fall-off.
- We find a Euclidean transform for each patch to bring it as close as possible to the template patch.
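The alignment step can be sketched as a brute-force search over candidate rotations (the method fits a full Euclidean transform; the rotation-only search and the helper names here are simplifications):

```python
import numpy as np
from scipy import ndimage

def align_to_template(patch, template, angles=range(0, 360, 15)):
    """Rotate `patch` by each candidate angle and keep the rotation that
    brings it closest (in SSD) to the template. A coarse, rotation-only
    stand-in for the Euclidean-transform fit described above."""
    best_angle, best_err, best_patch = 0, np.inf, patch
    for a in angles:
        rotated = ndimage.rotate(patch, a, reshape=False, mode='nearest')
        err = float(((rotated - template) ** 2).sum())
        if err < best_err:
            best_angle, best_err, best_patch = a, err, rotated
    return best_patch, best_angle
```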

Learning: customized Regression Random Forests
A nearest-neighbor search in a non-Euclidean space, guided by training data.

Learning
[decision-tree illustration: each split tests a feature and routes a sample left (false) or right (true) until it reaches a leaf]
[Criminisi et al. 2013]
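A minimal sketch of the regression-forest idea, using scikit-learn's generic RandomForestRegressor as a stand-in for the customized forest (the real version stores aligned matte patches at the leaves; the data shapes below are made up):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Made-up training set: 16-D feature vectors -> flattened 4x4 matte patches.
X = rng.random((200, 16))
Y = np.clip(X + 0.05 * rng.standard_normal((200, 16)), 0.0, 1.0)

forest = RandomForestRegressor(n_estimators=20, random_state=0)
forest.fit(X, Y)

# Each tree's leaf prediction acts as one "matte suggestion" for a query
# patch; the regularization step later picks among such suggestions.
query = rng.random((1, 16))
suggestions = np.stack([tree.predict(query)[0] for tree in forest.estimators_])
print(suggestions.shape)  # 20 suggestions of length 16, one per tree
```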

Feature vector
Our feature vector contains:
- normalized pixel intensity values (shifted in the intensity domain so that their mean falls at 0.5),
- x- and y-gradients (finite differences),
- normalized distance from the edge of the user-masked region,
- predicted matte for this patch (initial guess).
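The list above maps directly onto a small feature-assembly routine (function and argument names are illustrative, not from the released code):

```python
import numpy as np

def patch_features(patch, dist_map, init_guess):
    """Build the per-patch feature vector listed above.
    patch: grayscale intensity patch; dist_map: normalized distance of each
    pixel from the mask edge; init_guess: inpainting-based matte estimate."""
    norm = patch - patch.mean() + 0.5   # shift so the mean intensity is 0.5
    gy, gx = np.gradient(patch)         # finite-difference gradients
    return np.concatenate([norm.ravel(), gx.ravel(), gy.ravel(),
                           dist_map.ravel(), init_guess.ravel()])
```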

Feature frequency
[histogram of how often each feature type is chosen at split nodes: distance from the edge, initial guess, x-gradient, y-gradient, normalized intensity]

Regularization
Kolmogorov, V., Convergent Tree-reweighted Message Passing for Energy Minimization, PAMI 2006
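On a 1-D chain of patch sites, the selection problem that TRW solves on the full 2-D grid can be solved exactly with dynamic programming; this toy version (costs are made up) shows the unary + pairwise structure of picking one matte suggestion per site:

```python
import numpy as np

def choose_suggestions(unary, pair_cost):
    """Pick one suggestion per site on a 1-D chain by exact dynamic
    programming -- a simplified stand-in for tree-reweighted message
    passing on the 2-D patch grid.
    unary[i, k]: cost of using suggestion k at site i.
    pair_cost[k, l]: disagreement cost between neighboring suggestions."""
    n, K = unary.shape
    cost = unary[0].copy()
    back = np.zeros((n, K), dtype=int)
    for i in range(1, n):
        total = cost[:, None] + pair_cost   # (K, K) cumulative costs
        back[i] = total.argmin(axis=0)      # best predecessor per label
        cost = total.min(axis=0) + unary[i]
    labels = np.empty(n, dtype=int)
    labels[-1] = int(cost.argmin())
    for i in range(n - 1, 0, -1):           # trace the optimal path back
        labels[i - 1] = back[i, labels[i]]
    return labels
```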

Post-processing
- Before "putting the patches down" on the graph, we re-align them to their original orientations.
- After regularization we have a single-channel shadow matte.
- A final optimization recovers color.

Post-processing: color optimization
[comparison: naive solution (single-channel matte applied to all channels) vs. our color-optimized solution]
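One way the color step could work, shown as a guessed simplification (not the paper's actual formulation): fit one least-squares gain per channel relating the RGB matte to the grayscale one, so that dividing the shadowed pixels by the colored matte best matches nearby lit pixels.

```python
import numpy as np

def colorize_matte(m, shadowed, lit):
    """Turn a single-channel matte `m` into an RGB matte under the
    illustrative model matte_c = 1 - g_c * (1 - m), choosing each gain g_c
    by least squares so that shadowed ≈ matte_c * lit per channel."""
    d = 1.0 - m                                      # shadow "depth"
    rgb_matte = np.empty(shadowed.shape)
    for c in range(3):
        target = 1.0 - shadowed[..., c] / lit[..., c]  # observed attenuation
        g = (d * target).sum() / (d * d).sum()         # least-squares gain
        rgb_matte[..., c] = 1.0 - g * d
    return rgb_matte
```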

Results!
[side-by-side comparisons: input, our result, our matte, Arbel & Hel-Or 2011, Guo et al. 2012]
[further examples: input → initial guess → output; input + input mask → output]

Summary of results
- The first perceptual user study of soft shadow removal methods.
- 2x more soft shadow images than previously available.
- Our results were chosen as the most convincing overall.
- Our method also wins on synthetic measures (distance to ground truth), but that is not a good success criterion.

Two-phase perceptual study
- Phase 1: Ranking
- Phase 2: Scoring
[posterior plots: probability of winning the ranking, per method]
Kruschke, J., Doing Bayesian Data Analysis, 2011

Limitations
[input image → unshadowed result]
[initial guess (inpainting) → unshadowed result]

Conclusions
Our method learns about physical phenomena from synthetic data. It combines the fruits of graphics research with machine learning to produce perceptually superior results.
http://visual.cs.ucl.ac.uk/pubs/softshadows/
See our website for:
- paper + video + results
- data generation / rendering scripts
- user study code
- algorithm code

Thanks!