
Midterm Presentation

にまび
October 25, 2016


Transcript

  1–5. My Research: Nearest Neighbor Search …… 1000 1011 0110 0100 1100 …… in Parallel / Distributed Computation, in High Dimensions
  6. Nearest Neighbor Search: an important and foundational problem in numerous fields of CS, including Pattern Recognition, Statistical Classification, Computer Vision, Computational Geometry, Databases, etc. A practical example: similar picture search. Theoretically, it is a representative of high-dimensional problems.
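
    As a minimal sketch of the problem itself (illustrative Python, assuming binary vectors under Hamming distance; not part of the talk), a brute-force search scans all n points and returns the one closest to the query:

      # Brute-force nearest neighbor search over binary vectors in Hamming space.
      def hamming(x: int, y: int) -> int:
          """Number of coordinates on which two d-bit vectors differ."""
          return bin(x ^ y).count("1")

      def nearest_neighbor(points: list[int], query: int) -> int:
          """Return the point with the smallest Hamming distance to the query (O(n*d) time)."""
          return min(points, key=lambda p: hamming(p, query))

      # The binary strings from the slide, queried with 0b1101:
      print(bin(nearest_neighbor([0b1000, 0b1011, 0b0110, 0b0100, 0b1100], 0b1101)))  # 0b1100
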
  7. Publication and Award: Mingmou Liu, Xiaoyin Pan, and Yitong Yin. 2016. Randomized Approximate Nearest Neighbor Search with Limited Adaptivity. In Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA '16). Best Paper Finalist of SPAA '16.
  8. Complexity Model …… 1000 1011 0110 0100 1100 …… Classical Cell-Probe Model: s memory cells are used; each cell holds w bits; the query algorithm has to read t cells.
  9. Complexity Model …… 1000 1011 0110 0100 1100 …… Classical Cell-Probe Model: s memory cells are used (space complexity); each cell holds w bits (limited by the machine); the query algorithm has to read t cells (time complexity).
  10–14. Complexity Model …… 1000 1011 0110 0100 1100 …… Classical Cell-Probe Model: s memory cells are used; each cell holds w bits; the query algorithm has to read t cells, accessing memory one cell at a time.
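
    A toy sketch of this cost accounting (class and method names are assumed for illustration, not the paper's data structure): a table of s cells of w bits each, where only the cells read during a query are charged.

      # Toy accounting for the classical cell-probe model: only memory reads are charged.
      class CellProbeTable:
          def __init__(self, s: int, w: int):
              self.s, self.w = s, w        # s memory cells, each holding a w-bit word
              self.cells = [0] * s         # the precomputed data structure is stored here
              self.probes = 0              # t = number of cells read while answering a query

          def write(self, i: int, value: int) -> None:
              # Preprocessing is not charged in this model; only s and w constrain the structure.
              self.cells[i] = value & ((1 << self.w) - 1)

          def probe(self, i: int) -> int:
              # Each read of a single cell costs one probe; cells are accessed one by one.
              self.probes += 1
              return self.cells[i]
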
  15. Complexity Model …… 1000 1011 0110 0100 1100 …… From accessing memory one cell at a time to the Parallelized Cell-Probe Model: s memory cells are used; each cell holds w bits; k is the time cost in the real world; t cells have to be read in total.
  16–17. Complexity Model …… 1000 1011 0110 0100 1100 …… Parallelized Cell-Probe Model: s memory cells are used; each cell holds w bits; k is the time cost in the real world; t cells have to be read in total; multiple cells can be accessed at once.
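
    A toy extension of the previous sketch for the parallelized model (again with assumed names): probes are issued in batches, each batch counting as one of the k rounds of real-world time, while t still counts the total number of cells read.

      # Toy accounting for the parallelized cell-probe model: probes are issued in batches.
      class ParallelCellProbeTable:
          def __init__(self, s: int, w: int):
              self.cells = [0] * s         # s memory cells of w bits each
              self.w = w
              self.total_probes = 0        # t = total number of cells read
              self.rounds = 0              # k = number of batches = real-world time cost

          def probe_batch(self, indices: list[int]) -> list[int]:
              # Reading several cells at once costs one round but len(indices) probes.
              self.rounds += 1
              self.total_probes += len(indices)
              return [self.cells[i] for i in indices]
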
  18–20. Hardness of NNS: People believe that NNS is hard if we have to find the exact nearest neighbor, or if we have to find the answer deterministically. This is the so-called Curse of Dimensionality. Toward removing the curse, we deal with NNS approximately and with randomization.
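
    One standard, generic way to deal with NNS approximately and with randomization is bit-sampling locality-sensitive hashing; the sketch below is only a textbook-style illustration of that idea (helper names assumed), not the data structure from the SPAA '16 paper.

      # Bit-sampling LSH for Hamming space: a generic example of approximate, randomized NNS.
      import random
      from collections import defaultdict

      def build_tables(points: list[int], d: int, num_tables: int, bits_per_hash: int):
          """Hash every point, in each table, by a random subset of its d coordinates."""
          tables = []
          for _ in range(num_tables):
              coords = random.sample(range(d), bits_per_hash)
              buckets = defaultdict(list)
              for p in points:
                  buckets[tuple((p >> c) & 1 for c in coords)].append(p)
              tables.append((coords, buckets))
          return tables

      def approx_nearest_neighbor(tables, query: int):
          """Examine only the points that collide with the query in some table."""
          candidates = set()
          for coords, buckets in tables:
              candidates.update(buckets.get(tuple((query >> c) & 1 for c in coords), ()))
          return min(candidates, key=lambda p: bin(p ^ query).count("1")) if candidates else None
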
  21–26. Our Result: Toward removing the curse, we deal with NNS approximately and with randomization. If s = n^{O(1)} and w = d^{O(1)}, then t = Ω((1/k)(log d)^{1/k}) (lower bound) and t = O(k(log d)^{1/k}) (upper bound). If s = n^{O(1)} and w = O(d), then t = O(k + ((1/k) log d)^{O(1/k)}). Tight if k is constant or t = k (the latter giving t = Θ(log log d / log log log d)); asymptotically tight in the other cases.
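
    A short worked step, under my reading of the slides: plugging t = k into the lower bound t = Ω((1/k)(log d)^{1/k}) recovers the stated Θ(log log d / log log log d) bound, which matches the known upper bound for the fully adaptive case.

      % Setting t = k in the lower bound t = \Omega\big(\tfrac{1}{k}(\log d)^{1/k}\big):
      \[
        t \;\ge\; \frac{c}{t}\,(\log d)^{1/t}
        \;\Longrightarrow\; t^{2} \;\ge\; c\,(\log d)^{1/t}
        \;\Longrightarrow\; 2\log t + O(1) \;\ge\; \frac{\log\log d}{t}
        \;\Longrightarrow\; t\log t \;=\; \Omega(\log\log d)
        \;\Longrightarrow\; t \;=\; \Omega\!\left(\frac{\log\log d}{\log\log\log d}\right).
      \]
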
  27. Publication: Mingmou Liu, Xiaoyin Pan, and Yitong Yin. 2016. Randomized Approximate Nearest Neighbor Search with Limited Adaptivity. In Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA '16). Best Paper Finalist of SPAA '16.
  28. Furthermore: - Seminar speaker on the Communication Complexity of Gap Hamming Distance. - Paper study on data structures and communication complexity. - Teaching Assistant for Randomized Algorithms in Fall 2015.
  29. Grade
      Course | Type | Grade | Teacher
      Ꮧॊኞ᝕᧍ | A | عץ | ሴَ࿆ᒵ
      ӾࢵᇙᜋᐒտԆԎቘᦞӨਫ᪢ | A | 91 | ਃԔ୩
      Ḙظ௏ԆԎӨᐒտᑀ਍ොဩᦞ | A | 76 | ᠗࿯
      හഝ೵യ | B | 74 | לಛ̵ୟڥ٠
      ړ૲ୗᔮᕹ | B | 62 | ᝞̵ً᰸ຶӾ
      ᦇᓒቘᦞ੕୚ | B | 76 | ਟොභ
      ᕟݳහ਍ | D | 97 | ੭Ӟ᭗
      ᵋ๢ᓒဩ | D | 97 | ੭Ӟ᭗
      ړ૲ୗᓒဩفᳪ | D | 85 | Ἆਜ
      Ꭵᴣቘᦞٌ݊ଫአ | D | 90 | ᩶ᰂᆦ
      ཛྷୗᦩڦ | D | 88 | ޓୌᰎ
      ݢᦇᓒ௔Өݢڣෙ௔ | D | 84 | ࡎᜉ
  30. Currently: - Studying the λ-Near-Neighbor problem. - The best known lower bound for λNN was given in 2006. - Trying to prove a better lower bound. - λNN is a decision problem, so any better result would be the highest known lower bound for any decision problem.
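
    As a minimal sketch of λNN as a decision problem (assuming Hamming distance as the metric; the helper name is illustrative), the data structure only has to answer whether some point lies within distance λ of the query:

      # λ-Near-Neighbor as a decision problem: is any data point within distance lam of the query?
      def lambda_near_neighbor(points: list[int], query: int, lam: int) -> bool:
          """Return True iff some point is at Hamming distance at most lam from the query."""
          return any(bin(p ^ query).count("1") <= lam for p in points)
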
  31. Q&A