
NIPS 2017 Summary

Corey Chivers
December 15, 2017

Themes and highlights from the Neural Information Processing Systems conference in Long Beach, CA, December 2017.

Transcript

  1. Themes
     • Fairness & Bias: more than just a consequence of sampling
     • Interpretability & Explainability: don’t just take my word for it
     • Causality: “no unobserved confounders?”, “Not bloody likely.”
     • Bayesian/Probabilistic: uncertainty matters, but it’s hard to compute
     • Adversarial Examples: foolproof AI is hard
     • Learning from very few examples: k-shot learning, but GANs are still the hotness
  2. On Fairness and Calibration
     http://papers.nips.cc/paper/7151-on-fairness-and-calibration.pdf
     Calibration is compatible with only a single error-rate constraint, i.e. you can ensure equal false-negative rates across groups, or equal false-positive rates across groups, but not both simultaneously.
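A toy illustration of the two error-rate constraints involved (not from the paper; the labels, scores, threshold, and group split below are made up):

```python
import numpy as np

# Hypothetical labels, model scores, and a group indicator.
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.4, 0.3, 0.8, 0.6, 0.2, 0.7, 0.5])
group   = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Threshold the (assumed calibrated) scores to get hard predictions.
y_pred = (y_score >= 0.5).astype(int)

# Per-group false-positive and false-negative rates: the slide's point is that a
# calibrated classifier can generally equalize one of these across groups, not both.
for g in np.unique(group):
    m = group == g
    fpr = np.mean(y_pred[m][y_true[m] == 0] == 1)
    fnr = np.mean(y_pred[m][y_true[m] == 1] == 0)
    print(f"group {g}: FPR={fpr:.2f}  FNR={fnr:.2f}")
```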
  3. Explainability & Interpretability
     • Debate over what it even means for a model to be interpretable
     • Some novel methods for explaining why models do what they do
     • Three main flavours:
       1. What concepts does the model know about?
       2. What is the model ‘looking’ at when making a prediction?
       3. Prove that the model won’t do something we don’t want (secretly discriminate, for example)
  4. https://arxiv.org/pdf/1711.08037.pdf
     “We should determine what the various stakeholders demanding interpretability want, and which of these desiderata can actually be satisfied within the current learning paradigm. To do any of this effectively, we must invite the stakeholders to participate in the conversation.”
  5. Causality
     • Judea Pearl!! (literally wrote the book on causal inference)
     • Clever methods for inferring causal structure from observational data (i.e. not from experiments)
     • Performing counterfactual reasoning with ML models: what would have happened if a different action had been taken?
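A hypothetical toy simulation (not from any of the talks) of why the “no unobserved confounders” assumption matters: when the confounder Z is observed, a simple back-door adjustment recovers the true effect from observational data, while the naive comparison does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: Z confounds treatment T and outcome Y.
Z = rng.binomial(1, 0.5, n)                   # e.g. disease severity
T = rng.binomial(1, 0.2 + 0.6 * Z)            # sicker patients get treated more often
Y = 2.0 * T - 3.0 * Z + rng.normal(0, 1, n)   # true causal effect of T is +2

# Naive observational contrast: badly biased by confounding through Z.
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Back-door adjustment: average within-stratum contrasts, weighted by P(Z = z).
adjusted = sum(
    (Y[(T == 1) & (Z == z)].mean() - Y[(T == 0) & (Z == z)].mean()) * np.mean(Z == z)
    for z in (0, 1)
)

print(f"naive estimate:    {naive:.2f}")    # roughly 0.2
print(f"adjusted estimate: {adjusted:.2f}") # close to the true effect, 2.0
```

If Z were unobserved, no adjustment on the remaining variables would remove the bias, which is why the “Not bloody likely” caveat matters.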
  6. Counterfactual Reasoning
     • Medical time-series application by Schulam & Saria
     • A causal GP model that enables treatment decision support
     https://papers.nips.cc/paper/6767-counterfactual-gaussian-processes-for-reliable-decision-making-and-what-if-reasoning.pdf
  7. Bayesian/Probabilistic
     • Need to quantify uncertainty, but doing so is typically computationally difficult
     https://bayesianbiologist.com/2012/10/18/introduction-to-bayesian-methods-guest-lecture/
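For context, conjugate models are the easy end of the spectrum: the posterior is available in closed form and uncertainty comes for free. A minimal sketch with made-up counts (a Beta-Binomial model via SciPy); most models of interest have no such closed form, which is what motivates the approximate methods on the next slide.

```python
from scipy import stats

# Hypothetical data: 12 successes in 40 trials, with a uniform Beta(1, 1) prior.
successes, trials = 12, 40
posterior = stats.beta(1 + successes, 1 + (trials - successes))

print("posterior mean:       ", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```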
  8. Variational Inference
     • Analytic approximations to posteriors
     • Current approaches are model specific
     • Capturing the covariance properly and generally is hard
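One way to see the covariance point: for a correlated Gaussian posterior, the optimal fully factorized (mean-field) Gaussian approximation under KL(q‖p) has variances 1/Λ_ii, where Λ is the posterior precision matrix (a standard result, e.g. Bishop’s PRML §10.1), so it can badly underestimate the true marginal variances. A toy sketch:

```python
import numpy as np

# Toy "posterior": a strongly correlated 2-D Gaussian with covariance Sigma.
rho = 0.9
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])

# Mean-field VI (minimizing KL(q || p)) matches the mean and sets each factor's
# variance to 1 / Lambda_ii, where Lambda is the precision matrix.
Lambda = np.linalg.inv(Sigma)
mean_field_var = 1.0 / np.diag(Lambda)

print("true marginal variances:", np.diag(Sigma))   # [1.0, 1.0]
print("mean-field variances:   ", mean_field_var)   # [0.19, 0.19]
```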
  10. GPflow
     • GPflow is a package for building Gaussian process models in Python, using TensorFlow (https://github.com/GPflow/GPflow)
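A minimal GP regression sketch with GPflow on made-up 1-D data. Note this assumes the current GPflow 2.x API (gpflow.models.GPR, gpflow.optimizers.Scipy), which differs from the TensorFlow-1-era API that was current at the time of the conference.

```python
import numpy as np
import gpflow

# Hypothetical 1-D toy data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (50, 1))
Y = np.sin(6 * X) + 0.1 * rng.normal(size=(50, 1))

# Exact GP regression with a squared-exponential kernel.
model = gpflow.models.GPR(data=(X, Y), kernel=gpflow.kernels.SquaredExponential())

# Fit kernel and noise hyperparameters by maximizing the marginal likelihood.
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

# Predictive mean and variance at new inputs.
Xnew = np.linspace(0, 1, 100).reshape(-1, 1)
mean, var = model.predict_y(Xnew)
```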
  11. Adversarial Examples
     • Can we attain some provable level of robustness against attack?
     • Important for mission-critical applications (self-driving cars, medicine)
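For intuition about what an attack looks like (robustness guarantees are about ruling these out), here is a toy fast-gradient-sign-method sketch against a fixed, hypothetical logistic-regression model; it is illustrative only and not from any of the papers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fixed linear classifier: p(y=1 | x) = sigmoid(w @ x + b).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.2, -0.4, 1.0])   # an input the model classifies correctly
y = 1.0                          # its true label

# FGSM: step in the direction that increases the loss, within an L-inf ball.
eps = 0.5
grad_x = (sigmoid(w @ x + b) - y) * w      # gradient of cross-entropy w.r.t. x
x_adv = x + eps * np.sign(grad_x)

print("clean score:      ", sigmoid(w @ x + b))      # well above 0.5
print("adversarial score:", sigmoid(w @ x_adv + b))  # pushed below 0.5
```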
  12. Learning from very few examples
     • Sample generation: it’s all about dem GANs
     • K-shot learning: transfer knowledge to subdomains
     • Both RL and supervised examples
  13. Other Interesting Things
     • Python suite to construct benchmark machine learning datasets from the MIMIC-III clinical database
       https://arxiv.org/abs/1703.07771
       https://github.com/YerevaNN/mimic3-benchmarks
     • Fei-Fei Li’s group is moving into the healthcare domain, focusing now on operations using privacy-preserving, depth-sensing computer vision
       • Detecting hand washing
       • Mobility in the ICU
     • Meta-Learning
       http://papers.nips.cc/paper/7266-a-meta-learning-perspective-on-cold-start-recommendations-for-items.pdf
  14. Other Interesting Things
     • More hospital data science teams are coming online. Notable example: MGH, BWH, Harvard: https://clindatsci.com/ (~6 months old)
     • Jeff Dean says ML all the things! Including the things that do the ML (http://learningsys.org/nips17/assets/slides/dean-nips17.pdf)
     • Lols