Powered by ever more sophisticated sensors, the machines of the future will interact with humans far more naturally than is possible today. Most of us find face-to-face interaction the most comfortable, and for good reason: the brain does an outstanding job of interpreting the identity and emotional state of another human simply by looking at their face.
With this in mind, we seek to develop algorithmic approaches to understanding the human face. I’ll explain how powerful generative models of the face can be constructed, and what we can learn from them. I’ll then demonstrate the usefulness of such models in identity and emotion recognition, and highlight how our collaboration with Great Ormond Street Hospital is helping advance craniofacial surgery techniques. Finally, I’ll talk about where we can take statistical facial modelling in the future, and discuss some of the challenges that we must overcome in order to advance.