
Low-rank plus sparse decompositions for exoplanet detection on direct imaging ADI sequences


Presentation on "Low-rank plus sparse decompositions for high-contrast imaging" given during my PhD (ULiege, Belgium).

Carlos Alberto Gomez Gonzalez

September 04, 2015

Transcript

  1. Carlos A. Gomez Gonzalez, 4 / 9 / 2015, Université de Liège. Low-rank plus sparse decompositions for exoplanet detection on direct imaging ADI sequences
  2. PCA and low-rank approximations
     • PCA (Schmidt 1907, Hotelling 1933) gives the optimal low-rank approximation of M in the least-squares sense (Eckart and Young 1936), but it is sensitive to non-Gaussian noise.
     • Let's consider a rectangular matrix M, here the ADI cube with each frame vectorized as a row (see the sketch below).
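A minimal NumPy sketch of the rank-k least-squares approximation via truncated SVD; the matrix shape, the rank k and the frames-as-rows layout are illustrative assumptions, not values from the talk.

```python
import numpy as np

def low_rank_pca(M, k):
    """Rank-k least-squares approximation of M via truncated SVD
    (Eckart-Young): keep only the k largest singular values/vectors."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

# Example: a noisy low-rank "cube" of 30 frames of 64x64 pixels,
# flattened so that each row of M is one vectorized frame.
rng = np.random.default_rng(0)
M = rng.normal(size=(30, 5)) @ rng.normal(size=(5, 64 * 64)) \
    + 0.01 * rng.normal(size=(30, 64 * 64))
L = low_rank_pca(M, k=5)   # low-rank model (star PSF + speckles)
residuals = M - L          # what the PCA model leaves behind
```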
  3. R-PCA through PCP
     • Let's consider this problem: M = L + S, with L low-rank and S sparse.
     • Principal component pursuit (Candès et al. 2011) recovers both components as the solution of the convex program min_{L,S} ||L||_* + λ||S||_1 subject to L + S = M, where ||L||_* is the nuclear norm (the sum of the singular values) of L.
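A rough sketch of PCP solved with an inexact augmented-Lagrangian / alternating scheme; the default λ = 1/sqrt(max(m, n)), the choice of μ and the stopping rule follow the usual conventions and are assumptions, not the talk's exact solver.

```python
import numpy as np

def shrink(X, tau):
    """Entry-wise soft thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: soft threshold on the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def pcp(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal Component Pursuit sketch:
    min ||L||_* + lam * ||S||_1  s.t.  L + S = M."""
    M = np.asarray(M, dtype=float)
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # common default weight
    if mu is None:
        mu = m * n / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)    # low-rank update
        S = shrink(M - L + Y / mu, lam / mu) # sparse update
        Y = Y + mu * (M - L - S)             # dual update
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```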
  4. [Figure: an example SPHERE/IRDIS frame split into a low-rank component (star PSF + speckles, background) and a sparse component (foreground), with the reconstructed frame and residuals of the Lr+Sp decomposition compared to PCA.]
  5. Low-rank plus sparse (ideal case). Ideally the quasi-static structure (star PSF + speckles, background) goes to the low-rank component and the moving planet to the sparse one. In practice PCP does not work that well on real-life data (not only astronomical ADI cubes), but the problem can be formulated in several different ways.
  6. GoDec
     • Go Decomposition (GoDec), the fastest implementation of a low-rank plus sparse decomposition (Zhou & Tao 2011). It uses a different formulation than the PCP-RPCA one, a three-member decomposition: M = L + S + G, with rank(L) ≤ k and card(S) ≤ c, where G is a dense noise component. Restricting rank(L) and the cardinality of S controls the model complexity; because of the additive noise G, GoDec produces an approximated decomposition of M, whose exact low-rank plus sparse decomposition does not exist.
     • This can be expressed as the minimization of the decomposition error: min_{L,S} ||M − L − S||_F^2, s.t. rank(L) ≤ k, card(S) ≤ c.
     • Which can be solved by iteratively alternating the two subproblems until convergence: L_t = argmin_{rank(L) ≤ k} ||M − L − S_{t−1}||_F^2 and S_t = argmin_{card(S) ≤ c} ||M − L_t − S||_F^2 (see the sketch below).
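A minimal NumPy sketch of that alternating scheme, using an exact SVD for the L-step (the randomized BRP update discussed on the next slide could be swapped in); the fixed iteration count and parameter names are illustrative assumptions.

```python
import numpy as np

def godec(M, rank, card, n_iter=20):
    """GoDec-style sketch: alternate a rank-k update of L (best rank-k
    approximation of M - S) and a cardinality-c update of S (keep the
    c largest-magnitude entries of M - L)."""
    M = np.asarray(M, dtype=float)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # L-step: best rank-k approximation of M - S (Eckart-Young)
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
        # S-step: entry-wise hard thresholding of M - L, keeping card entries
        R = M - L
        S = np.zeros_like(M)
        idx = np.argsort(np.abs(R), axis=None)[-card:]
        S.flat[idx] = R.flat[idx]
    G = M - L - S   # dense noise term
    return L, S, G
```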
  7. GoDec - SSGoDec
     • L_t can be updated via singular value hard thresholding of M − S_{t−1} (via an SVD in each iteration) and S_t via entry-wise hard thresholding of M − L_t.
     • Instead of the SVD we can use M's Bilateral Random Projections (BRP, Zhou & Tao 2011a), a fast randomized low-rank approximation: with random matrices A_1 ∈ R^{n×k} and A_2 ∈ R^{m×k}, compute Y_1 = M A_1 and Y_2 = M^T A_2; the rank-k approximation is L = Y_1 (A_2^T Y_1)^{-1} Y_2^T. This requires fewer floating-point operations than an SVD-based approximation, and under mild conditions the bounds on the BRP approximation error are close to the SVD error.
     • Instead of hard thresholding we use soft thresholding (l1 regularization), applied element-wise with threshold γ: S_γ(X)_{ij} = sgn(X_{ij}) max(|X_{ij}| − γ, 0).
     • This way we push the sparse-component entries toward zero (see the sketch below).
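A minimal sketch of both building blocks, using plain Gaussian random matrices; the function names are illustrative, and refinements of BRP beyond the basic projection step are omitted.

```python
import numpy as np

def brp_low_rank(M, k, rng=None):
    """Rank-k approximation via Bilateral Random Projections:
    L = Y1 (A2^T Y1)^{-1} Y2^T, with Y1 = M A1 and Y2 = M^T A2."""
    rng = rng or np.random.default_rng()
    m, n = M.shape
    A1 = rng.standard_normal((n, k))
    A2 = rng.standard_normal((m, k))
    Y1 = M @ A1
    Y2 = M.T @ A2
    return Y1 @ np.linalg.solve(A2.T @ Y1, Y2.T)

def soft_threshold(X, gamma):
    """Element-wise soft thresholding: sgn(X) * max(|X| - gamma, 0)."""
    return np.sign(X) * np.maximum(np.abs(X) - gamma, 0.0)
```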
  8. Novel algorithm based on an L+S+G decomposition. Per-segment L+S+G decomposition: local low-rank approximation plus local thresholding, applied locally to exploit the geometrical structure of ADI datasets.
     1. The images of the cube are broken into quadrants of annuli of width 2λ/D.
     2. Each of these quadrants is decomposed separately for a fixed number of iterations; the threshold γ is calculated as MAD(S_t).
     3. For each patch we keep the S component of its decomposition.
     4. We rotate each frame to a common north and median-combine (a rough end-to-end sketch follows below).
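A rough, self-contained sketch of these four steps, using a truncated SVD for the local low-rank step (the BRP version from the previous slide could be swapped in). The segment count, the MAD rule computed on the residuals, the derotation sign and all parameter defaults are assumptions for illustration, not the talk's exact implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def local_lsg(cube, angles, fwhm, rank=2, n_iter=10, n_segments=4):
    """Per-segment low-rank plus sparse sketch of the steps above.
    cube   : (n_frames, ny, nx) ADI sequence
    angles : parallactic angle of each frame, in degrees
    fwhm   : lambda/D in pixels, so each annulus is 2*fwhm wide"""
    n, ny, nx = cube.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    y, x = np.indices((ny, nx))
    r = np.hypot(y - cy, x - cx)
    theta = np.arctan2(y - cy, x - cx)           # pixel azimuth in (-pi, pi]
    seg_edges = np.linspace(-np.pi, np.pi, n_segments + 1)
    ann_width = 2.0 * fwhm
    sparse_cube = np.zeros_like(cube, dtype=float)

    for i in range(int(r.max() // ann_width)):   # annuli of width 2 lambda/D
        for j in range(n_segments):              # quadrants of each annulus
            mask = ((r >= i * ann_width) & (r < (i + 1) * ann_width) &
                    (theta >= seg_edges[j]) & (theta < seg_edges[j + 1]))
            if mask.sum() <= rank:
                continue
            M = cube[:, mask].astype(float)      # patch matrix: frames x pixels
            S = np.zeros_like(M)
            for _ in range(n_iter):              # fixed number of iterations
                U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
                L = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]    # local low rank
                R = M - L
                gamma = np.median(np.abs(R - np.median(R)))        # MAD-style threshold
                S = np.sign(R) * np.maximum(np.abs(R) - gamma, 0)  # local soft threshold
            sparse_cube[:, mask] = S             # keep only the S component

    # Derotate each sparse frame to a common north (sign convention depends
    # on how the parallactic angles are defined) and median-combine.
    derot = [rotate(f, -a, reshape=False, order=1)
             for f, a in zip(sparse_cube, angles)]
    return np.median(derot, axis=0)
```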
  9. Performance evaluation
     • 100 random injections per annulus (1 per cube), with a 50% chance of injection.
     • Post-processing with PCA and with our algorithm.
     • The detection threshold on the S/N is varied (2, 4, 6, 8, 10, 14), which changes the decision of the classifier.
     • We count true positives and false positives to get the TPR and FPR at each threshold.
     • Tools available in signal detection theory and machine learning: receiver operating characteristic (ROC) curves (a small sketch of the bookkeeping follows below).
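A minimal sketch of the TPR/FPR bookkeeping behind such a ROC curve; the input arrays and the simple per-experiment counting rule are assumptions for illustration (in the talk, detections are evaluated on the processed frames per annulus).

```python
import numpy as np

def roc_points(snr, injected, thresholds=(2, 4, 6, 8, 10, 14)):
    """TPR/FPR per S/N threshold from a set of injection experiments.
    snr      : measured S/N of the candidate in each experiment
    injected : boolean, whether a fake companion was actually injected"""
    snr = np.asarray(snr, dtype=float)
    injected = np.asarray(injected, dtype=bool)
    points = []
    for t in thresholds:
        detected = snr >= t                  # classifier decision at this threshold
        tp = np.sum(detected & injected)     # true positives
        fp = np.sum(detected & ~injected)    # false positives
        tpr = tp / max(injected.sum(), 1)    # true positive rate
        fpr = fp / max((~injected).sum(), 1) # false positive rate
        points.append((t, tpr, fpr))
    return points
```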