• Conic Particle Gradient Descent: optimizes a function of $m$ parameters $(x_k, a_k)$ (called particles) that can be written as $F_n(Y_n, \sum_{k=1}^m a_k \delta_{x_k})$; one can then study the limit $m \to \infty$ (over-parametrized regime) by considering the objective $\mu \mapsto F_n(Y_n, \mu)$ on the space of measures, as in the works of Bach and Chizat, and Chizat.
• Sliding Frank-Wolfe: solves convex programs on weakly compact sets (e.g., closed balls of the TV-norm for the weak-$*$ topology). This algorithm is a conditional gradient descent that may converge in a finite number of steps under some conditions (Denoyelle, Duval, Peyré, and Soubies).
• Kernel SoS: based on a representation of nonnegative functions and a subsampling strategy; see Bach, Rudi, and Marteau-Ferey, and Lasserre, Magron, et al.
• Other popular methods: Prony-type spectral methods such as MUSIC and ESPRIT, and non-convex approaches based on greedy minimization (e.g., (COMP) and "Continuous" LARS); see Elvira, Gribonval, Soussen, and Herzet.
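To make the first bullet concrete, here is a minimal sketch of a conic particle scheme on a toy 1D sparse deconvolution problem. All specifics (Gaussian kernel, its width, the step sizes, the particle count) are illustrative assumptions, not taken from the references above; the weights receive a multiplicative (Fisher–Rao-like) update that keeps them positive, while the positions take an additive gradient step.

```python
import numpy as np

# Toy setup (assumed): observe y(t_i) = sum_j a_j * phi(t_i - x_j)
# for a Gaussian kernel phi, and fit an over-parametrized particle measure.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)        # sampling grid
sigma = 0.05                          # kernel width (illustrative)

def phi(x):
    """Kernel matrix: entry (i, k) = exp(-(t_i - x_k)^2 / (2 sigma^2))."""
    return np.exp(-(t[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))

# Ground-truth measure: two spikes at 0.3 and 0.7 with weights 1.0 and 0.5
y = phi(np.array([0.3, 0.7])) @ np.array([1.0, 0.5])

# Over-parametrize with m particles (x_k, a_k), a_k > 0
m = 20
x = rng.uniform(0.0, 1.0, m)
a = np.full(m, 1.0 / m)

def loss(a, x):
    return 0.5 * np.sum((phi(x) @ a - y) ** 2)

loss0 = loss(a, x)
eta_a, eta_x = 0.01, 1e-3             # step sizes (illustrative)
for _ in range(5000):
    P = phi(x)
    r = P @ a - y                     # residual
    grad_a = P.T @ r                  # dF/da_k
    # Per-particle position gradient of the "potential" g (dF/dx_k = a_k * g_k)
    dphi = P * (t[:, None] - x[None, :]) / sigma ** 2
    g = dphi.T @ r
    a = a * np.exp(-eta_a * grad_a)   # multiplicative step: weights stay positive
    x = x - eta_x * g                 # additive step on positions
```

After the loop, most of the mass typically concentrates near the true spike locations; the multiplicative weight update is what makes the dynamics "conic" (weights live on the positive half-line and never change sign).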