
From NumPy to PyTorch

PyCon 2021 slides

Mike Ruberry

May 03, 2021

Transcript

  1. Mike Ruberry, Software Engineer @Facebook. From NumPy to PyTorch

  2. Agenda
     0. PyTorch & NumPy
     1. NumPy Ops are Important
     2. PyTorch + NumPy Ops
     3. Conclusions & Future Work
  3. PyTorch & NumPy
  3. PyTorch & NumPy

  4. PyTorch vs. NumPy

     PyTorch:

     ```python
     import torch

     a = torch.tensor((1, 2, 3))
     b = torch.tensor((4, 5, 6))
     torch.add(a, b)
     >> tensor([5, 7, 9])

     t = torch.randn(2, 2, dtype=torch.complex128)
     pd = torch.matmul(t, t.T.conj())
     l = torch.linalg.cholesky(pd)
     >> tensor([[0.4893+0.0000j, 0.0000+0.0000j],
                [0.4194+0.0995j, 1.3453+0.0000j]],
               dtype=torch.complex128)
     ```

     NumPy:

     ```python
     import numpy as np

     a = np.array((1, 2, 3))
     b = np.array((4, 5, 6))
     np.add(a, b)
     >> array([5, 7, 9])

     a = t.numpy()
     pd = np.matmul(a, a.T.conj())
     l = np.linalg.cholesky(pd)
     >> array([[0.48927494+0.j, 0.+0.j],
               [0.41939711+0.0994527j, 1.34532112+0.j]])
     ```
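The bridge between the two libraries on CPU relies on sharing memory: `Tensor.numpy()` and `torch.from_numpy()` wrap the same buffer rather than copying. A minimal sketch of that zero-copy behavior:

```python
import numpy as np
import torch

# A CPU tensor and the array returned by .numpy() share storage:
# a write through one is visible through the other.
t = torch.tensor((1., 2., 3.))
a = t.numpy()
a[0] = 100.0
print(t[0].item())  # 100.0

# torch.from_numpy() wraps an existing array the same way.
b = np.array((4., 5., 6.))
u = torch.from_numpy(b)
b[1] = -1.0
print(u[1].item())  # -1.0
```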
  5. PyTorch vs. NumPy (on CUDA)

     PyTorch:

     ```python
     import torch

     a = torch.tensor((1, 2, 3), device='cuda')
     b = torch.tensor((4, 5, 6), device='cuda')
     torch.add(a, b)
     >> tensor([5, 7, 9])

     t = torch.randn(2, 2, dtype=torch.complex128, device='cuda')
     pd = torch.matmul(t, t.T.conj())
     l = torch.linalg.cholesky(pd)
     >> tensor([[0.4893+0.0000j, 0.0000+0.0000j],
                [0.4194+0.0995j, 1.3453+0.0000j]],
               device='cuda:0', dtype=torch.complex128)
     ```

     NumPy:

     ```python
     import numpy as np

     a = np.array((1, 2, 3))
     b = np.array((4, 5, 6))
     np.add(a, b)
     >> array([5, 7, 9])

     a = t.cpu().numpy()
     pd = np.matmul(a, a.T.conj())
     l = np.linalg.cholesky(pd)
     >> array([[0.48927494+0.j, 0.+0.j],
               [0.41939711+0.0994527j, 1.34532112+0.j]])
     ```
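Because `.numpy()` only works on CPU tensors, CUDA code pulls results back with `.cpu()` first, as on the slide above. A minimal device-agnostic sketch that falls back to CPU when no GPU is available:

```python
import torch

# Pick CUDA when present so the same code also runs on CPU-only machines.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

a = torch.tensor((1, 2, 3), device=device)
b = torch.tensor((4, 5, 6), device=device)
c = torch.add(a, b)

# .cpu() is a no-op for a tensor already on the CPU,
# so this line is safe on either device.
print(c.cpu().numpy())  # [5 7 9]
```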
  6. NumPy Ops are Important

  7. The Importance of NumPy Ops
     • Community Requests & Engagement
     • User expectations
  8. • Add a function that behaves like NumPy’s moveaxis
     • It would be good to add a torch.percentile function that has a similar API to NumPy’s np.percentile, including interpolation schemes
     • Add aliases for functions which are present in NumPy and perform similar (same?) functionality in PyTorch
     • I wonder if there is a possibility that PyTorch would implement a cumulative integration function, similar to cumulative_trapezoid in SciPy
     • Add an entropy function in PyTorch analogous to scipy.stats.entropy
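The first two requests correspond to operators PyTorch now ships: `torch.movedim` mirrors `np.moveaxis`, and `torch.quantile` covers the `np.percentile` use case. A small sketch, assuming a PyTorch recent enough to include both:

```python
import numpy as np
import torch

# torch.movedim mirrors np.moveaxis.
t = torch.zeros(2, 3, 4)
print(torch.movedim(t, 0, -1).shape)   # torch.Size([3, 4, 2])
print(np.moveaxis(t.numpy(), 0, -1).shape)  # (3, 4, 2)

# torch.quantile mirrors np.percentile/np.quantile,
# with q in [0, 1] rather than [0, 100].
v = torch.tensor((1., 2., 3., 4.))
print(torch.quantile(v, 0.5).item())   # 2.5
print(np.percentile(v.numpy(), 50))    # 2.5
```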
  9. None
  10. None
  11. PyTorch + NumPy Ops

  12. torch.numpy vs. torch + numpy

      torch.numpy:

      ```python
      import torch.numpy as tnp

      a = tnp.array((1, 2, 3))
      b = tnp.array((4, 5, 6))
      tnp.add(a, b)
      >> tensor([5, 7, 9])
      ```

      torch + numpy:

      ```python
      import torch

      a = torch.tensor((1, 2, 3))
      b = torch.tensor((4, 5, 6))
      torch.add(a, b)
      >> tensor([5, 7, 9])
      ```
  13. The same torch.numpy vs. torch + numpy comparison, with the separate torch.numpy namespace crossed out with an X: that design was rejected.
  14. None
  15. Interpreter / CUDA Device

      ```python
      import torch

      a = torch.tensor((1, 2, 3), device='cuda')
      b = torch.tensor((4, 5, 6), device='cuda')
      torch.add(a, b)
      t = torch.randn(2, 2, dtype=torch.complex128, device='cuda')
      pd = torch.matmul(t, t.T.conj())
      l = torch.linalg.cholesky(pd)
      ```

      Timeline: each call returns a device tensor immediately while the work (the add; the transpose, conjugate, and matmul; the Cholesky) is enqueued on the device, but then the interpreter stalls, waiting for the Cholesky diagnostics to be gathered.
  16. Interpreter / CUDA Device

      ```python
      import torch

      a = torch.tensor((1, 2, 3), device='cuda')
      b = torch.tensor((4, 5, 6), device='cuda')
      torch.add(a, b)
      t = torch.randn(2, 2, dtype=torch.complex128, device='cuda')
      pd = torch.matmul(t, t.T.conj())
      l, errors = torch.linalg.cholesky_ex(pd)
      ```

      Timeline: as before, each call returns a device tensor while the add, the transpose/conjugate/matmul, and the Cholesky are enqueued, but with cholesky_ex no diagnostics are gathered and the interpreter never waits.
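`torch.linalg.cholesky_ex` reports failures through an extra integer tensor (named `errors` on the slide, `info` in the documented return type) instead of raising, so checking for failure needs no device synchronization. A minimal CPU sketch:

```python
import torch

# A Hermitian positive-definite input: matmul(t, t^H) plus the identity.
t = torch.randn(2, 2, dtype=torch.complex128)
pd = torch.matmul(t, t.T.conj()) + torch.eye(2)

# cholesky_ex returns the factor plus an info tensor; 0 means success.
L, info = torch.linalg.cholesky_ex(pd)
print(info.item())  # 0

# A non-positive-definite input reports failure through info
# rather than an exception (unless check_errors=True is passed).
bad = torch.zeros(2, 2, dtype=torch.complex128)
_, info_bad = torch.linalg.cholesky_ex(bad)
print(info_bad.item() != 0)  # True
```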
  17. PyTorch vs. NumPy

      PyTorch:

      ```python
      import torch

      t = torch.diag(torch.tensor((1., 2, 3)))
      w, v = torch.linalg.eig(t)
      >> torch.return_types.linalg_eig(
         eigenvalues=tensor([1.+0.j, 2.+0.j, 3.+0.j]),
         eigenvectors=tensor([[1.+0.j, 0.+0.j, 0.+0.j],
                              [0.+0.j, 1.+0.j, 0.+0.j],
                              [0.+0.j, 0.+0.j, 1.+0.j]]))
      ```

      NumPy:

      ```python
      import numpy as np

      a = np.diag(np.array((1., 2, 3)))
      w, v = np.linalg.eig(a)
      >> array([1., 2., 3.]), array([[1., 0., 0.],
                                     [0., 1., 0.],
                                     [0., 0., 1.]])
      ```
  18. Computational Graph

      ```python
      import torch

      t = torch.randn(32, 16)
      w = torch.randn(8, 16)
      b = torch.randn(8)
      out = t @ w.T + b
      out.relu()
      ```

      Graph (all float32): w (8x16) → transpose → (16x8); with t (32x16) → matmul → (32x8); with b (8,) → add → (32x8) → relu → (32x8).
  19. The same computational graph, with the matmul and add nodes grouped for fusion.
  20. Computational Graph

      ```python
      import torch

      t = torch.randn(32, 16)
      w = torch.randn(8, 16)
      b = torch.randn(8)
      out = torch.addmm(b, t, w.T)
      out.relu()
      ```

      Graph (all float32): w (8x16) → transpose → (16x8); with t (32x16) and b (8,) → addmm → (32x8) → relu → (32x8).
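The fused `addmm` computes the same values as the separate matmul and add; a small equivalence check:

```python
import torch

torch.manual_seed(0)
t = torch.randn(32, 16)
w = torch.randn(8, 16)
b = torch.randn(8)

# addmm(input, mat1, mat2) computes input + mat1 @ mat2 in one call.
fused = torch.addmm(b, t, w.T)
unfused = t @ w.T + b
print(torch.allclose(fused, unfused))  # True
```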
  21. Computational Graph

      ```python
      import torch

      t = torch.diag(torch.tensor((1., 2, 3)))
      w, v = torch.linalg.eig(t)
      >> torch.return_types.linalg_eig(
         eigenvalues=tensor([1.+0.j, 2.+0.j, 3.+0.j]),
         eigenvectors=tensor([[1.+0.j, 0.+0.j, 0.+0.j],
                              [0.+0.j, 1.+0.j, 0.+0.j],
                              [0.+0.j, 0.+0.j, 1.+0.j]]))
      ```

      Graph: (3, float32) → diag → (3x3, float32) → eig → eigenvalues (3, complex64) and eigenvectors (3x3, complex64).
  22. Computational Graph

      Graph: (3, float32) → diag → (3x3, float32) → eig → (3, ???) and (3x3, ???) → add with a (3x3, float32) input. If eig's output dtype depended on its values, as NumPy's does, the dtypes flowing into the add could not be inferred while building the graph.
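This is the dtype rule the graph motivates: `torch.linalg.eig` always returns complex outputs, so the output dtype follows from the input dtype alone, while NumPy's `eig` returns a real array when every eigenvalue happens to be real. A small sketch of the difference:

```python
import numpy as np
import torch

a = np.diag(np.array((1., 2, 3)))
t = torch.diag(torch.tensor((1., 2, 3)))

# NumPy returns a real array here because all eigenvalues are real...
w_np, _ = np.linalg.eig(a)
print(w_np.dtype)  # float64

# ...while PyTorch always returns complex, keeping dtypes predictable.
w_t, _ = torch.linalg.eig(t)
print(w_t.dtype)  # torch.complex64
```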
  23. PyTorch vs. NumPy

      PyTorch:

      ```python
      import torch

      t = torch.tensor((complex(1, 2), complex(2, 1)))
      torch.amax(t)
      >> RUNTIME ERROR
      torch.sort(t)
      >> RUNTIME ERROR
      ```

      NumPy:

      ```python
      import numpy as np

      a = t.numpy()
      np.amax(a)
      >> (2+1j)
      np.sort(a)
      >> array([1.+2.j, 2.+1.j], dtype=complex64)
      ```
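NumPy orders complex values lexicographically (by real part first, then imaginary part), which is how the slide's results arise; PyTorch chose to raise instead, as shown above. A sketch of the NumPy behavior:

```python
import numpy as np

# Lexicographic comparison: 1+2j sorts before 2+1j
# because its real part is smaller.
a = np.array([complex(2, 1), complex(1, 2)])
print(np.sort(a))  # [1.+2.j 2.+1.j]
print(np.amax(a))  # (2+1j)
```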
  24. Conclusions & Future Work
  25. Conclusions 1/2
      • Supporting NumPy operators is increasingly important
      • The majority of NumPy operators can be implemented in PyTorch without issue
      • Sometimes PyTorch needs to extend, tweak, or diverge from NumPy
  26. Conclusions 2/2
      • Do the work to engage your community
      • When adopting another project’s user experience, be clear about your goals so you understand what changes, if any, need to be made and what gaps remain to be filled
  27. Future Work
      • torch.linalg will be in PyTorch 1.9
      • Python Array API support
      • More modules, including torch.special, coming in future PyTorch releases!
  28. Thank you! rgommers, RockingJavaBean, muthuArivoli, krshrimali, kshitij12345, Justin-Huber, cloudhan, Kiyosora, peterbell10, jessebrizzi, kurtamohler, mattip, antocuni, IvanYashchuk, aayn, soulitzer, ranman, arbfay, ejguan, heitorschueroff, nikitaved, Lezcano, anjali411, v0dro, vfdev-5, asi1024, jbschlosser, emcastillo, aocsa, pearu