
Jupyter in Production - Rev 3


Presented at the Rev 3 MLOps Conference in New York City on May 5, 2022.

Patrick Harrison

May 05, 2022
Transcript

  1. Jupyter Notebooks just turned ten years old. The original IPython
     Notebook was first released on December 19, 2011. Source: https://ipython.org/ipython-doc/rel-0.12/whatsnew/version0.12.html
  2. 8,000+ new public Jupyter Notebooks posted on GitHub every day

    in 2022, on average Source: https://github.com/parente/nbestimate/blob/master/ipynb_counts.csv
  3. Source: https://blog.jupyter.org/congratulations-to-the-ligo-and-virgo-collaborations-from-project-jupyter-5923247be019 On behalf of the entire Project Jupyter team,

    we’d like to say congratulations to Rainer Weiss, Barry C. Barish, Kip S. Thorne and the rest of the LIGO and VIRGO teams for the Nobel Prize in Physics 2017. Since 2015, the LIGO and VIRGO Collaborations have observed multiple instances of gravitational waves due to colliding black holes (and more recently neutron stars). These observations represent decades of work and confirm what Einstein had theorized a hundred years ago. ... To communicate to the broader community, the LIGO/VIRGO Collaboration has created tutorials with Jupyter Notebooks that describe how to use LIGO/VIRGO data and reproduce analyses related to their academic publications.
  4. Source: https://blog.jupyter.org/jupyter-receives-the-acm-software-system-award-d433b0dfe3a2 It is our pleasure to announce that Project

    Jupyter has been awarded the 2017 ACM Software System Award, a significant honor for the project. We are humbled to join an illustrious list of projects that contains major highlights of computing history, including Unix, TeX, S (R’s predecessor), the Web, Mosaic, Java, INGRES (modern databases) and more.
  5. #1

  6. #2

  7. Build a computational narrative bringing together code, results, explanatory prose,

    plots, images, widgets, and more in a single, human-friendly document #2
  8. ...many more people and roles can access, use, and collaborate

    on programming and data analysis in their work Lower barriers to entry
  9-11. "We’ve found that we’re 2x-3x more productive using [notebook-based development]
     than using traditional programming tools... ...this is a big surprise, since I have coded nearly every day for over 30 years, and in that time have tried dozens of tools, libraries, and systems for building programs." — Jeremy Howard, fast.ai. Source: https://www.fast.ai/2019/12/02/nbdev/
  12-19. This is a pain to version control. This is monolithic.
     How will we collaborate effectively? How can we share and reuse this code? How do we apply our code quality standards? How do we test this code? Will this work with our continuous integration system? How do we schedule and trigger automatic execution? Out-of-order cell execution! ...
  20. OK, how should we get this work into production? “It

    looks like there's a lot going on in your notebook…"
  21-25. How should we get this work into production? Your notebook has reusable code...
     ...you're going to need to reimplement this code as proper software libraries, subject to our company-wide software engineering standards, with reimplemented tests using our company's preferred testing framework, using our preferred enterprise continuous integration system, and deploy to our preferred enterprise artifact repository.
  26-28. How should we get this work into production? Your notebook is accessing and transforming data...
     ...you're going to need to reimplement this logic as data pipelines in our preferred enterprise data pipeline framework, which has its own engineering practices and conventions, and may not even use the same programming language.
  29-32. How should we get this work into production? Your notebook generates predictions...
     ...you're going to need to reimplement the model as a web service, wrap it in a Docker container, store it in our preferred enterprise container registry, and deploy it to our preferred enterprise container orchestration platform.
  33-35. How should we get this work into production? Your notebook presents results to end users...
     ...you're going to need to reimplement these reports in our preferred enterprise business intelligence platform, which has its own engineering practices and conventions, and may not even use the same programming language.
  36-39. So you're telling me that if we're going to get our work into production, either:
     1. Our data science teams have to be stacked with unicorns, or 2. We have to loop in a bunch of other teams and create dependencies between them.
  40. de • notebook • ification: The long, painful process of exploding a Jupyter Notebook that definitely works into a constellation of disparate production artifacts that maybe don't.
  41. ⚠ WARNING: De-notebook-ification has been shown to have side effects including increased complexity, elongated timelines, unhappy stakeholders, frustrated data scientists, increased risk of project cancelation, and loss of data science team credibility.
  42. Additional problem: If Jupyter is only for demos and prototypes...

    Why bother writing good code in notebooks?
  43-45. What does in production mean, anyway? For this talk, we'll focus on:
     • Developing and distributing software libraries • Building and running data pipelines • Creating interactive reports and dashboards
  46-47. For each of these tools, I'll try to answer... what is it? ...what do I have to do to use it? ...what's in it for me?
  48. A collection of tools that let you use Jupyter Notebooks

    as the source code for Python software libraries nbdev
  49-51. Setup • pip install nbdev or conda install nbdev -c fastai
     • Initialize your git repository as an nbdev project: nbdev_new (or copy the official nbdev template repo on GitHub) • Install the nbdev git hooks: nbdev_install_git_hooks • Enter some basic project information in settings.ini nbdev
  52-55. Basic Usage • Start with exploratory programming in Jupyter Notebooks, as usual
     • As you go, notice when it would make sense to reuse or share bits of the code you write • Reshape this code into functions and classes in a notebook • Add the #export flag (code comment) at the start of your main code cells • Next to your main code cells, add rich explanatory text, images, code usage examples, sample output, and assert statements nbdev
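As a sketch of what that workflow looks like in a notebook (the function and its contents are illustrative, not from the talk), an exported cell plus a neighboring example-and-test cell might read:

```python
#export
def deal_hand(deck, n=5):
    """Deal `n` cards from the top of `deck`; returns (hand, rest)."""
    return deck[:n], deck[n:]

# A neighboring, non-exported cell: a usage example with sample output,
# plus assert statements that double as tests when the notebooks are tested.
hand, rest = deal_hand(list(range(52)), n=5)
assert len(hand) == 5
assert len(rest) == 47
```

Cells flagged with #export land in the generated Python module; the unflagged cells around them become documentation and tests.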
  56. Automatically export the code from your Jupyter Notebooks into a

    fully-functional Python package: nbdev nbdev_build_lib
  57. Automatically generate a rich documentation website for your package from

    your Jupyter Notebooks: nbdev nbdev_build_docs
  58. Avoid common version control conflicts, and resolve them when they occur: nbdev nbdev_clean_nbs & nbdev_fix_merge
  59. nbdev

     $ nbdev_test_nbs
     testing: card.ipynb
     testing: deck.ipynb
     All tests are passing! Source: https://nbdev.fast.ai/tutorial.html
  60-61. "The magic of nbdev is that it doesn’t actually change programming that much; you add a #export or #hide tag to your notebook cells once in a while, and you run nbdev_build_lib and nbdev_build_docs when you finish up your code. That’s it! There’s nothing new to learn, nothing to unlearn. It’s just notebooks." Source: https://www.overstory.com/blog/how-nbdev-helps-us-structure-our-data-science-workflow-in-jupyter-notebooks nbdev
  62-63. “[nbdev] incentivizes us to write clear code, use proper Git version control and document and test our codebase continuously... [while] preserving the benefits of having interactive Jupyter notebooks in which it is easy to experiment." — Overstory. Source: https://www.overstory.com/blog/how-nbdev-helps-us-structure-our-data-science-workflow-in-jupyter-notebooks nbdev
  64. nbQA

     $ nbqa black my_notebook.ipynb
     reformatted my_notebook.ipynb
     All done! ✨ 🍰 ✨
     1 files reformatted. Source: https://nbqa.readthedocs.io/en/latest/examples.html
  65-67. “We’re currently in the process of migrating all 10,000 of the scheduled jobs running on the Netflix Data Platform to use notebook-based execution… When we’re done, more than 150,000 [pipeline executions] will be running through notebooks on our platform every single day.” — Netflix (2018). Source: https://netflixtechblog.com/scheduling-notebooks-348e6c14cfd6
  68-70. Setup • pip install ploomber or conda install ploomber -c conda-forge
     • Initialize your git repository as a ploomber project: ploomber scaffold --empty • Add information about your pipeline to pipeline.yaml as you go ploomber
  71-73. Basic Usage • Start with exploratory programming in Jupyter Notebooks, as usual
     • As you go, notice when chunks of your code would make sense as modular "tasks" in a data transformation workflow • Move the code for each task into its own dedicated notebook • Next to your code cells, add rich explanatory text, images, example expected output, and data quality checks ploomber
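The data quality checks can be ordinary assert statements at the end of a task notebook. A minimal sketch, with made-up in-memory rows standing in for the task's real output:

```python
# Stand-in for the data this task produced earlier in the notebook.
rows = [
    {"id": 1, "amount": 10.50},
    {"id": 2, "amount": 7.25},
]

# Data quality checks: fail the task (and therefore the pipeline run)
# early if the output looks wrong.
assert len(rows) > 0, "task produced no rows"
assert all(r["amount"] >= 0 for r in rows), "negative amounts found"
assert len({r["id"] for r in rows}) == len(rows), "duplicate ids found"
```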
  74-75. Basic Usage • Record information about your task notebooks in pipeline.yaml
     • Add a few variables to your task notebooks to define upstream dependencies • Run your pipeline with ploomber build ploomber
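Concretely, a task notebook declares its dependencies in a cell tagged "parameters"; the task name below is illustrative, and ploomber overwrites these placeholder values with real ones when it injects parameters at run time:

```python
# Cell tagged "parameters" in a task notebook (task name illustrative):
upstream = ["raw"]   # this task runs after the "raw" task finishes
product = None       # replaced with the paths declared in pipeline.yaml
```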
  76-85. pipeline.yaml ploomber Source: https://docs.ploomber.io/en/latest/get-started/first-pipeline.html

     tasks:
       # source is the code you want to execute
       - source: raw.ipynb
         # products are the task's outputs
         product:
           # tasks generate executed notebooks as outputs
           nb: output/raw.ipynb
           # you can define as many outputs as you want
           data: output/raw_data.csv
       - source: clean.ipynb
         product:
           nb: output/clean.ipynb
           data: output/clean_data.parquet
       - source: plot.ipynb
         product: output/plot.ipynb
  86-88. ploomber Source: https://docs.ploomber.io/en/latest/get-started/first-pipeline.html

     $ ploomber build
     Building task 'raw': 0%| | 0/5 [00:00<?, ?it/s]
     Executing: 0%| | 0/6 [00:00<?, ?cell/s]
     Executing: 17%|█▋ | 1/6 [00:04<00:21, 4.25s/cell]
     Executing: 33%|███▎ | 2/6 [00:04<00:07, 1.82s/cell]
     Executing: 100%|██████████| 6/6 [00:05<00:00, 1.11cell/s]
     Building task 'clean': 20%|██ | 1/5 [00:05<00:21, 5.47s/it]
     Executing: 0%| | 0/7 [00:00<?, ?cell/s]
     Executing: 14%|█▍ | 1/7 [00:01<00:10, 1.76s/cell]
     Executing: 43%|████▎ | 3/7 [00:23<00:34, 8.63s/cell]
     Executing: 71%|███████▏ | 5/7 [00:25<00:09, 4.69s/cell]
     Executing: 86%|████████▌ | 6/7 [00:28<00:04, 4.14s/cell]
     Executing: 100%|██████████| 7/7 [00:29<00:00, 4.24s/cell]
     Building task 'plot': 40%|████ | 2/5 [00:35<00:59, 19.75s/it]
     Executing: 0%| | 0/9 [00:00<?, ?cell/s]
     Executing: 11%|█ | 1/9 [00:02<00:22, 2.80s/cell]
     Executing: 33%|███▎ | 3/9 [00:02<00:04, 1.28cell/s]
     Executing: 56%|█████▌ | 5/9 [00:03<00:01, 2.42cell/s]
     Executing: 100%|██████████| 9/9 [00:03<00:00, 2.26cell/s]
  89. “[W]e’ve gained a key improvement over a non-notebook execution pattern: our input and outputs are complete documents, wholly executable and shareable in the same interface.” — Netflix (2018). Source: https://netflixtechblog.com/scheduling-notebooks-348e6c14cfd6
  90-91. “Say something went wrong… How might we debug and fix the issue? The first place we’d want to look is the notebook output. It will have a stack trace, and ultimately any output information related to an error… [W]e simply take the output notebook with our exact failed runtime parameterizations and load it into a notebook server… With a few iterations… we can quickly find a fix for the failure.” — Netflix (2018). Source: https://netflixtechblog.com/scheduling-notebooks-348e6c14cfd6
  92-95. Setup • pip install voila or conda install voila -c conda-forge
     • To serve a single notebook: voila my_notebook.ipynb • To serve a whole directory of notebooks: voila • Optionally specify a custom template: voila my_notebook.ipynb --template=gridstack voilà
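A notebook worth serving this way typically builds its interface from ipywidgets. A minimal, illustrative sketch (the widgets and the squaring logic are invented for this example, and ipywidgets is assumed to be installed); voilà executes the notebook, renders the widgets, and hides the code:

```python
import ipywidgets as widgets
from IPython.display import display

def describe_square(n):
    """Pure logic kept separate from the UI so it stays easy to test."""
    return f"{n} squared is {n * n}"

slider = widgets.IntSlider(value=3, min=0, max=10, description="n:")
label = widgets.Label(value=describe_square(slider.value))

def on_change(change):
    # Re-render the label whenever the slider moves.
    label.value = describe_square(change["new"])

slider.observe(on_change, names="value")

display(slider, label)
```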
  96. • Software Libraries → nbdev projects • Data Transformation Workflows → ploomber pipelines • Reports and Dashboards → voilà dashboards
  97. Data science teams can own a project end-to-end in a

    tool and environment they're already comfortable with
  98. We can retain the interactivity and computational narrative strengths of

    Jupyter Notebooks, even in production settings