consequence of sampling
• Interpretability & Explainability: “Don’t just take my word for it”
• Causality: “No unobserved confounders?” “Not bloody likely.”
• Bayesian/Probabilistic: Uncertainty matters, but it’s hard to compute
• Adversarial Examples: Foolproof AI is hard
• Learning from very few examples: K-shot learning, but GANs are still the hotness
for a model to be interpretable.
• Some novel methods for explaining why models do what they do
• Three main flavours:
  1. What concepts does the model know about?
  2. What is the model ‘looking’ at when making a prediction? (see the saliency sketch below)
  3. Prove that this model won’t do something we don’t want (secretly discriminate, for example).
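As a concrete instance of flavour 2, here is a minimal sketch of input-gradient saliency, one common way to ask what a model is ‘looking’ at. It assumes a PyTorch image classifier; the `model` and `img` names are placeholders, not anything from the slides.

```python
import torch

def saliency_map(model, image, target_class):
    """Input-gradient saliency: which pixels most move the class score?"""
    model.eval()
    image = image.detach().clone().requires_grad_(True)  # track grads w.r.t. pixels
    score = model(image.unsqueeze(0))[0, target_class]   # scalar score for one class
    score.backward()                                     # d(score) / d(pixel)
    # Gradient magnitude, max over colour channels -> one 2D heat map
    return image.grad.abs().max(dim=0).values

# Usage (hypothetical names): heat = saliency_map(resnet, img, target_class=281)
```

Plotting the returned map over the input gives the familiar “what the model looked at” picture; fancier methods (integrated gradients, Grad-CAM, etc.) refine the same idea.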
inference)
• Clever methods for inferring causal structure from observational data (i.e. not experiments)
• Performing counterfactual reasoning with ML models
  • What would have happened if a different action had been taken? (toy sketch below)
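To make the counterfactual question concrete, here is a toy sketch of Pearl’s abduction / action / prediction recipe on a hand-written linear structural causal model. The variables, coefficients, and numbers are all invented for illustration; real counterfactual ML needs a learned (and identifiable) causal model.

```python
# Toy linear SCM (coefficients invented for illustration):
#   T = u_t                (treatment)
#   Y = 2.0 * T + u_y      (outcome)

def counterfactual_outcome(t_obs, y_obs, t_alt, effect=2.0):
    # 1. Abduction: recover the latent noise consistent with what we observed.
    u_y = y_obs - effect * t_obs
    # 2. Action: intervene, do(T = t_alt), replacing T's own mechanism.
    # 3. Prediction: push the *same* noise through the modified model.
    return effect * t_alt + u_y

# Observed: treated (T=1) with outcome 5. Had we not treated (T=0),
# this toy model says the outcome would have been 3.
print(counterfactual_outcome(t_obs=1.0, y_obs=5.0, t_alt=0.0))  # 3.0
```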
learning datasets from the MIMIC-III clinical database.
  https://arxiv.org/abs/1703.07771
  https://github.com/YerevaNN/mimic3-benchmarks
• Fei-Fei Li’s group is moving into the healthcare domain, focusing now on operations using privacy-preserving, depth-sensing computer vision:
  • Detecting hand washing
  • Mobility in the ICU
• Meta-Learning
  http://papers.nips.cc/paper/7266-a-meta-learning-perspective-on-cold-start-recommendations-for-items.pdf
online. Notable example: MGH, BWH, Harvard: https://clindatsci.com/ (~6 months old)
• Jeff Dean says ML all the things! Including the things that do the ML
  (http://learningsys.org/nips17/assets/slides/dean-nips17.pdf)
• Lols