NIPS 2017 Summary

Corey Chivers
December 15, 2017


Themes and highlights from the Neural Information Processing Systems conference in Long Beach, CA, December 2017.



Transcript

  1. NIPS 2017 Summary Corey Chivers, PhD Penn Medicine Data Science

  2. Themes
     • Fairness & Bias: more than just a consequence of sampling
     • Interpretability & Explainability: don’t just take my word for it
     • Causality: “no unobserved confounders?” “Not bloody likely.”
     • Bayesian/Probabilistic: uncertainty matters, but it’s hard to compute
     • Adversarial Examples: foolproof AI is hard
     • Learning from very few examples: K-shot learning, but GANs are still the hotness
  3. Fairness/Bias http://mrtz.org/nips17/

  4. On Fairness and Calibration
     http://papers.nips.cc/paper/7151-on-fairness-and-calibration.pdf
     Calibration is compatible with only a single error constraint, i.e. you can
     ensure equal false-negative rates across groups, or equal false-positive
     rates across groups, but not both simultaneously.
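As an aside (not from the paper), the two group-wise error rates the slide refers to are easy to compute directly. A minimal sketch with made-up labels and predictions for two hypothetical groups, constructed so the false-negative rates match while the false-positive rates differ:

```python
def group_rates(labels, preds):
    """Return (false_positive_rate, false_negative_rate) for one group."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    return fp / negatives, fn / positives

# Hypothetical outcomes for two demographic groups (toy numbers).
labels_a, preds_a = [1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0]
labels_b, preds_b = [1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 1]

fpr_a, fnr_a = group_rates(labels_a, preds_a)
fpr_b, fnr_b = group_rates(labels_b, preds_b)
# Equal false-negative rates here, but unequal false-positive rates.
```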
  5. Explainability & Interpretability
     • Debate over what it even means for a model to be interpretable
     • Some novel methods for explaining why models do what they do
     • Three main flavours:
       1. What concepts does the model know about?
       2. What is the model ‘looking’ at when making a prediction?
       3. Prove that this model won’t do something we don’t want (secretly
          discriminate, for example)
  6. https://arxiv.org/pdf/1711.08037.pdf
     “We should determine what the various stakeholders demanding interpretability
     want, and which of these desiderata can actually be satisfied within the
     current learning paradigm. To do any of this effectively, we must invite the
     stakeholders to participate in the conversation.”
  7. SHAP (SHapley Additive exPlanations) https://arxiv.org/pdf/1705.07874.pdf

  8. SHAP (SHapley Additive exPlanations) https://github.com/slundberg/shap
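The shap package implements fast, model-specific approximations; underneath is the exact Shapley value from cooperative game theory, averaging each feature's marginal contribution over all subsets of the other features. A minimal brute-force sketch (the toy linear model and baseline are illustrative assumptions, not from the slides):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values: 'including' feature j means using x[j] over baseline[j]."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Weight |S|! (n - |S| - 1)! / n! from the Shapley formula.
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in subset or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy linear model: attributions recover each term's contribution exactly.
model = lambda v: 2 * v[0] + 1 * v[1]
print(shapley_values(model, x=[1, 3], baseline=[0, 0]))  # → [2.0, 3.0]
```

This exponential enumeration is why SHAP's contribution is the efficient approximations, not the definition itself.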

  9. Causality
     • Judea Pearl!! (literally wrote the book on causal inference)
     • Clever methods for inferring causal structure from observational data
       (i.e., not experiments)
     • Performing counterfactual reasoning with ML models
       • What would have happened if a different action had been taken?
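The gap between conditioning and intervening can be shown with a toy discrete model (all probabilities below are made up). With a confounder Z influencing both X and Y, the adjustment formula P(Y | do(X)) = Σz P(Y | X, z) P(z) averages over Z's marginal, while the observational conditional P(Y | X) averages over the skewed P(Z | X):

```python
# Confounded toy model: Z -> X and Z -> Y (hypothetical numbers).
p_z = {0: 0.5, 1: 0.5}
p_x1_given_z = {0: 0.2, 1: 0.8}
p_y1_given_xz = {(1, 0): 0.9, (1, 1): 0.5, (0, 0): 0.8, (0, 1): 0.3}

# Observational: P(Y=1 | X=1) weights Z by P(Z | X=1) via Bayes' rule.
p_x1 = sum(p_x1_given_z[z] * p_z[z] for z in p_z)
p_z_given_x1 = {z: p_x1_given_z[z] * p_z[z] / p_x1 for z in p_z}
observational = sum(p_y1_given_xz[(1, z)] * p_z_given_x1[z] for z in p_z)

# Interventional: backdoor adjustment uses Z's marginal, cutting the Z -> X arrow.
interventional = sum(p_y1_given_xz[(1, z)] * p_z[z] for z in p_z)

print(observational, interventional)  # ≈0.58 vs ≈0.70: conditioning != intervening
```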
  10. Theoretical Impediments to Machine Learning - Judea Pearl http://web.cs.ucla.edu/~kaoru/theoretical-impediments.pdf

  11. Counterfactual Reasoning
      • Medical time-series application by Schulam & Saria
      • A causal GP model that enables treatment decision support
        https://papers.nips.cc/paper/6767-counterfactual-gaussian-processes-for-reliable-decision-making-and-what-if-reasoning.pdf
  12. Bayesian/Probabilistic
      • Need to quantify uncertainty, but it’s typically computationally difficult
        https://bayesianbiologist.com/2012/10/18/introduction-to-bayesian-methods-guest-lecture/
  13. Variational Inference
      • Analytic approximations to posteriors
      • Current approaches are model specific
      • Capturing the covariance properly and generally is hard
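What VI optimizes can be sketched without any machinery: pick the member of an approximating family (here Gaussians) that minimizes KL(q ‖ p) against an unnormalized target. The bimodal target and brute-force grid search below are illustrative assumptions, not from the talks; they also exhibit the familiar mode-seeking behaviour of KL(q ‖ p), one facet of why capturing the posterior’s shape “properly and generally is hard”:

```python
import math

def log_target(theta):
    """Unnormalized bimodal target: equal mixture of N(-2, 0.5^2) and N(2, 0.5^2)."""
    a = math.exp(-((theta + 2.0) ** 2) / (2 * 0.5 ** 2))
    b = math.exp(-((theta - 2.0) ** 2) / (2 * 0.5 ** 2))
    return math.log(0.5 * (a + b))

# Quadrature grid for the 1-D integral defining the KL divergence.
dtheta = 0.02
thetas = [-10.0 + i * dtheta for i in range(1001)]
log_p = [log_target(t) for t in thetas]

def kl_qp(mu, sigma):
    """KL(q || p) up to p's normalizing constant, with q = N(mu, sigma^2)."""
    log_norm = math.log(sigma * math.sqrt(2 * math.pi))
    total = 0.0
    for t, lp in zip(thetas, log_p):
        z = (t - mu) / sigma
        lq = -0.5 * z * z - log_norm
        total += math.exp(lq) * (lq - lp) * dtheta
    return total

# Brute-force "variational" search over a grid of Gaussian candidates.
candidates = [(m * 0.25, 0.3 + s * 0.1) for m in range(-12, 13) for s in range(13)]
best_mu, best_sigma = min(candidates, key=lambda ms: kl_qp(*ms))
# KL(q || p) is mode-seeking: q collapses onto one mode rather than spanning both.
```

Real VI replaces this grid with gradient ascent on the ELBO, but the objective is the same.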
  15. GPflow
      • GPflow is a package for building Gaussian process models in Python, using
        TensorFlow. (https://github.com/GPflow/GPflow)
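For intuition about what a GP package computes, here is the closed-form GP regression posterior at a single test point in plain Python (RBF kernel and toy data are my own; GPflow itself does this at scale on TensorFlow, with learned hyperparameters):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b, lengthscale=1.0):
    return math.exp(-((a - b) ** 2) / (2 * lengthscale ** 2))

def gp_posterior(x_train, y_train, x_star, noise=1e-6):
    """Posterior mean and variance of a GP regressor at one test point."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(x_train)] for i, xi in enumerate(x_train)]
    k_star = [rbf(x_star, xi) for xi in x_train]
    alpha = solve(K, list(y_train))           # (K + noise*I)^-1 y
    mean = sum(ks * a for ks, a in zip(k_star, alpha))
    v = solve(K, list(k_star))                # (K + noise*I)^-1 k*
    var = rbf(x_star, x_star) - sum(ks * vi for ks, vi in zip(k_star, v))
    return mean, var

mean, var = gp_posterior([0.0, 1.0], [0.0, 1.0], x_star=0.5)
# Prediction interpolates the two training points, with nonzero uncertainty between them.
```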
  16. Adversarial Examples
      Can we attain some provable level of robustness against attack? Important
      for mission-critical applications (self-driving cars, medicine).
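A classic way to construct such an attack is the fast gradient sign method: perturb the input in the direction that increases the loss. For a logistic model the input gradient is available in closed form, so the whole attack fits in a few lines (the weights and data below are made up for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, y):
    """Binary cross-entropy of a logistic model at one point."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm(w, b, x, y, eps):
    """Fast gradient sign method; for logistic models d(loss)/dx = (p - y) * w."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical trained weights: a small sign-aligned perturbation raises the loss.
w, b = [2.0, -1.0], 0.1
x, y = [0.5, 0.3], 1
x_adv = fgsm(w, b, x, y, eps=0.2)
```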
  17. Learning from very few examples
      • Sample Generation: it’s all about dem GANs
      • K-Shot Learning: transfer knowledge to subdomains
      • Both RL and supervised examples
  18. None
  19. MEDICAL TIME SERIES GENERATION WITH RECURRENT CONDITIONAL GANS https://arxiv.org/pdf/1706.02633.pdf

  20. Other Interesting Things
      • Python suite to construct benchmark machine learning datasets from the
        MIMIC-III clinical database. https://arxiv.org/abs/1703.07771
        https://github.com/YerevaNN/mimic3-benchmarks
      • Fei-Fei Li’s group is moving into the healthcare domain, focusing on
        hospital operations using privacy-preserving depth-sensing computer vision.
        • Detecting hand washing
        • Mobility in the ICU
      • Meta-Learning
        http://papers.nips.cc/paper/7266-a-meta-learning-perspective-on-cold-start-recommendations-for-items.pdf
  21. Other Interesting Things
      • More hospital data science teams are coming online. Notable example: MGH,
        BWH, Harvard: https://clindatsci.com/ (~6 months old)
      • Jeff Dean says ML all the things! Including the things that do the ML
        (http://learningsys.org/nips17/assets/slides/dean-nips17.pdf)
      • Lols