Slide 1

Point2color: 3D Point Cloud Colorization Using a Conditional Generative Network and Differentiable Rendering for Airborne LiDAR
Takayuki Shinohara, Haoyi Xiu, and Masashi Matsuoka
Tokyo Institute of Technology
19 June 2021, Online, EarthVision 2021

Slide 2

Point2color: the 3D point cloud colorization task
- Goal: estimate the color of each point in a geometric 3D point cloud observed by airborne LiDAR.
- Input: point cloud (x, y, z), which has low visual readability.
- Output: colored point cloud (x, y, z, R, G, B), which has high visual readability.

Slide 3

1. Background and Objective

Slide 4

Public open 3D point clouds
- 3D point clouds are now easy to access, e.g. from OpenTopography and the Association for Promotion of Infrastructure Geospatial Information Distribution (AIGID).
- However, many open datasets contain only geometric information, which makes the visual readability of the point clouds a problem.
- Objective: improve the visual readability of point clouds when only geometric data are available, by developing a colorization method.

Slide 5

Related Studies
- Conditional GAN-based colorization
  - Image colorization methods produce realistic colorization results from real images.
  - A point cloud colorization method exists [Liu et al., 2019], but only for simple CAD data.
- Differentiable rendering: projecting a point cloud onto a 2D image in a differentiable way.
- We propose Point2color, a cGAN-based colorization model that uses both the raw point cloud and an image obtained by differentiable rendering.

Slide 6

2. Proposed Method

Slide 7

Overall Colorization Strategy
- cGAN-based pipeline:
  - Generator (PointNet++): maps the input point cloud P to colorized fake points C_fake.
  - Point cloud discriminator (PointNet++): judges real points C_real vs. fake points C_fake.
  - Differentiable rendering: projects the colored points to a colorized fake image I_fake.
  - Image discriminator (CNN): judges the real image I_real vs. the fake image I_fake.
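
As a rough illustration, the sketch below wires these four pieces together for one batch, assuming PyTorch. The names G, D_point, D_image, and render_points are hypothetical stand-ins for the networks sketched on the following slides, not the authors' code; only the tensor shapes and the data flow follow this slide.

```python
import torch

def forward_pass(xyz, rgb_real, img_real, G, D_point, D_image, render_points):
    """xyz: (B, N, 3) geometry; rgb_real: (B, N, 3) GT color from the
    aerial photo; img_real: (B, 3, H, W) real image patch."""
    rgb_fake = G(xyz)                                    # (B, N, 3) fake color
    c_fake = torch.cat([xyz, rgb_fake], dim=-1)          # (B, N, 6) fake points
    c_real = torch.cat([xyz, rgb_real], dim=-1)          # (B, N, 6) real points
    img_fake = render_points(xyz, rgb_fake)              # (B, 3, H, W) fake image
    point_scores = D_point(c_fake), D_point(c_real)      # point-cloud branch
    image_scores = D_image(img_fake), D_image(img_real)  # image branch
    return rgb_fake, c_fake, c_real, img_fake, point_scores, image_scores
```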

Slide 8

Network: Generator
- PointNet++ [Qi et al., 2017]-based encoder-decoder.
- Input: a patch of N points (x, y, z); output: a fake color (R, G, B) for each point.
- Downsampling convolutions reduce the points (8,192 → 4,096 → 2,048) and upsampling convolutions restore them (2,048 → 4,096 → 8,192), with skip connections (concatenation) between matching resolutions.
- The generator estimates the color of each point using PointNet++ with an encoder-decoder structure.
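
A drastically simplified stand-in for this generator is sketched below: shared per-point MLPs (1x1 1D convolutions) with one skip connection by concatenation. The real network uses PointNet++ set abstraction (farthest-point sampling and grouping) to change the point counts as listed above; that neighborhood structure is omitted here for brevity, so treat this only as a shape-compatible sketch.

```python
import torch
import torch.nn as nn

class ColorGenerator(nn.Module):
    """Toy encoder-decoder: per-point MLPs with a concatenation skip."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(3, 64, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv1d(64, 128, 1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.Conv1d(128 + 64, 64, 1), nn.ReLU(),  # skip: concat enc1 features
            nn.Conv1d(64, 3, 1), nn.Sigmoid(),      # RGB in [0, 1]
        )

    def forward(self, xyz):                  # xyz: (B, N, 3)
        x = xyz.transpose(1, 2)              # (B, 3, N) for Conv1d
        f1 = self.enc1(x)                    # (B, 64, N)
        f2 = self.enc2(f1)                   # (B, 128, N)
        rgb = self.dec(torch.cat([f2, f1], dim=1))
        return rgb.transpose(1, 2)           # (B, N, 3) fake color
```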

Slide 9

Network: Point Discriminator
- PointNet++ [Qi et al., 2017]-based.
- Input: a patch of N colored points (x, y, z, R, G, B), either fake colored points or real colored points.
- Downsampling (sampling and grouping: 8,192 → 4,096 → 2,048) followed by a 1D CNN that outputs the probability that the patch is real.
- The discriminator judges whether the colored point cloud is fake or real.
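
Below is a simplified stand-in for this discriminator: shared per-point MLPs, a global max-pool, and a linear head. The real network downsamples with PointNet++ sampling and grouping; this plain PointNet-style critic keeps the sketch short, and it returns a raw realness score (no sigmoid), as the Wasserstein loss on slide 11 expects.

```python
import torch
import torch.nn as nn

class PointDiscriminator(nn.Module):
    """Toy critic over colored points (xyz + rgb)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(6, 64, 1), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 256, 1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Linear(256, 1)

    def forward(self, pts):                  # pts: (B, N, 6)
        f = self.mlp(pts.transpose(1, 2))    # (B, 256, N) per-point features
        g = f.max(dim=2).values              # (B, 256) global feature
        return self.head(g)                  # (B, 1) realness score
```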

Slide 10

Network: Image Discriminator
- Pix2Pix [Isola et al., 2017]-based CNN.
- Fake and real colored points are projected to H × W images by differentiable rendering.
- Convolutional layers output the probability that the input image patch is real.
- The discriminator judges whether the rendered image is fake or real.
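
The sketch below covers both pieces of this slide, assuming PyTorch. The renderer is a naive top-down splatter that buckets each point into a pixel and averages colors; it is differentiable with respect to the predicted RGB (which is what colorization needs), but it is only a stand-in, since the slide does not detail the actual renderer. The discriminator is a Pix2Pix-style PatchGAN critic.

```python
import torch
import torch.nn as nn

def render_points(xyz, rgb, size=64):
    """(B, N, 3) points + (B, N, 3) colors -> (B, 3, size, size) image."""
    B, N, _ = xyz.shape
    xy = xyz[..., :2] - xyz[..., :2].amin(dim=1, keepdim=True)
    xy = xy / (xy.amax(dim=1, keepdim=True) + 1e-8)    # normalize to [0, 1]
    idx = (xy * (size - 1)).long()                     # pixel indices (B, N, 2)
    flat = (idx[..., 1] * size + idx[..., 0]).unsqueeze(-1)   # (B, N, 1)
    # Sum colors per pixel, then average; gradients flow back to rgb.
    img = torch.zeros(B, size * size, 3, device=rgb.device)
    img = img.scatter_add(1, flat.expand(-1, -1, 3), rgb)
    cnt = torch.zeros(B, size * size, 1, device=rgb.device)
    cnt = cnt.scatter_add(1, flat, torch.ones(B, N, 1, device=rgb.device))
    img = img / cnt.clamp(min=1.0)                     # empty pixels stay 0
    return img.view(B, size, size, 3).permute(0, 3, 1, 2)

class ImageDiscriminator(nn.Module):
    """PatchGAN-style critic; per-patch scores averaged to one score."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),
        )

    def forward(self, img):                  # img: (B, 3, H, W)
        return self.net(img).mean(dim=(1, 2, 3))  # (B,) realness score
```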

Slide 11

Optimization
- Regression loss: L1 distance of RGB
  $L_{L1}^{point} = \mathbb{E}[\lVert C_{fake} - C_{real} \rVert_1]$,  $L_{L1}^{image} = \mathbb{E}[\lVert I_{fake} - I_{real} \rVert_1]$
- GAN loss: Wasserstein distance
  $L_{G}^{point} = -\mathbb{E}[D_{P}(C_{fake})]$,  $L_{G}^{image} = -\mathbb{E}[D_{I}(I_{fake})]$
  $L_{D}^{point} = \mathbb{E}[D_{P}(C_{fake})] - \mathbb{E}[D_{P}(C_{real})]$,  $L_{D}^{image} = \mathbb{E}[D_{I}(I_{fake})] - \mathbb{E}[D_{I}(I_{real})]$
- Total loss
  $L_{G} = L_{G}^{point} + L_{G}^{image} + \lambda L_{L1}^{point} + \lambda L_{L1}^{image}$
  $L_{D} = L_{D}^{point} + L_{D}^{image}$
($D_{P}$ and $D_{I}$ denote the point cloud and image discriminators.)
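
Written out in code for one batch (PyTorch assumed), the losses above look as follows. d_point and d_image are the two critics; the value of λ is not given on the slide, so lam=100.0 here is an assumption borrowed from common Pix2Pix practice. A Lipschitz constraint on the critics (gradient penalty or weight clipping), usual for Wasserstein GANs, is omitted.

```python
import torch

def generator_loss(d_point, d_image, c_fake, c_real, i_fake, i_real, lam=100.0):
    l1_point = (c_fake - c_real).abs().mean()   # L_L1^point: L1 on point colors
    l1_image = (i_fake - i_real).abs().mean()   # L_L1^image: L1 on rendered image
    g_point = -d_point(c_fake).mean()           # -E[D_P(C_fake)]
    g_image = -d_image(i_fake).mean()           # -E[D_I(I_fake)]
    return g_point + g_image + lam * l1_point + lam * l1_image

def discriminator_loss(d_point, d_image, c_fake, c_real, i_fake, i_real):
    # In practice, detach c_fake and i_fake before this step so the
    # generator is not updated through the critics.
    d_p = d_point(c_fake).mean() - d_point(c_real).mean()
    d_i = d_image(i_fake).mean() - d_image(i_real).mean()
    return d_p + d_i
```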

Slide 12

3. Experimental Results

Slide 13

Experimental Data
- GRSS Data Fusion Contest 2018: airborne LiDAR and aerial photo data.
- Target area: an urban area.
- Ground-truth color: taken from the aerial photos.
- Preprocessing: removal of isolated points.
- Training patches: 25 m × 25 m tiles with a 5 m buffer (30 m × 30 m in total), 4,096 points each, 1,000 patches.
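
A rough sketch of this patch preparation is shown below, assuming the point cloud is a NumPy array of shape (M, 6) with columns x, y, z, R, G, B. The tile size, buffer, and point count follow the slide; the tile anchoring, the random sampling, and the extract_patch helper itself are illustrative assumptions, and the isolated-point removal step is omitted.

```python
import numpy as np

def extract_patch(points, x0, y0, tile=25.0, buffer=5.0, n_points=4096, rng=None):
    """Cut a square patch anchored at (x0, y0) and sample a fixed
    number of points from it."""
    if rng is None:
        rng = np.random.default_rng(0)
    size = tile + buffer                        # 30 m x 30 m including buffer
    mask = ((points[:, 0] >= x0) & (points[:, 0] < x0 + size) &
            (points[:, 1] >= y0) & (points[:, 1] < y0 + size))
    patch = points[mask]
    if len(patch) == 0:
        return None                             # empty tile, skip it
    # Sample with replacement only when the tile has too few points.
    idx = rng.choice(len(patch), n_points, replace=len(patch) < n_points)
    return patch[idx]                           # (4096, 6) training patch
```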

Slide 14

Colorized Point Cloud
- The proposed colorization method generated better results than the previous model: MAE = 0.22 for Point2color vs. MAE = 0.25 for the previous method (MAE = 0.1 for the rendered image).
- Point2color produces vivid colors where the previous method's colors are non-vivid; however, the colors of small objects were ignored.
- Figure: input, previous method, Point2color, and ground-truth point clouds, together with the rendered images.
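
The MAE reported above is, presumably, the mean absolute error between predicted and ground-truth RGB with colors scaled to [0, 1]; a minimal version of that metric:

```python
import numpy as np

def color_mae(rgb_pred, rgb_gt):
    """rgb_pred, rgb_gt: (N, 3) arrays of colors in [0, 1]."""
    return np.abs(rgb_pred - rgb_gt).mean()
```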

Slide 15

4. Conclusion and Future Work

Slide 16

Conclusion and Future Work
- Conclusion
  - We proposed Point2color, a cGAN-based colorization model for point clouds observed by airborne LiDAR.
  - We combined two discriminators, one for the point cloud and one for the 2D image, via differentiable rendering.
  - The generated colors are more realistic and achieve a lower MAE than the previous model.
- Future work
  - Limited test data: evaluate generalization performance on various test datasets.
  - Only MAE evaluation: additionally evaluate segmentation performance on the colorized point clouds.