of the individual columns of data provides a crude approximation of the regression line. The slope is exactly 0.50 and the correlation is approximately r = 0.51. Many, though not all,

[Figure: Sweet pea size heritability, after Galton (1894).¹]

¹ Stanton (2001). "Galton, Pearson, and the Peas." Journal of Statistics Education.
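The slope of 0.50 and correlation near 0.51 quoted above can be checked on simulated data. The sketch below is a hypothetical simulation, not Galton's actual measurements: offspring inherit half the parental deviation plus independent noise, which is exactly the structure that yields a regression slope near 0.50.

```python
# Hypothetical sketch of Galton-style parent/offspring regression.
# The data are simulated, not Galton's sweet-pea measurements.
import numpy as np

rng = np.random.default_rng(0)
n = 700
parent = rng.normal(0.0, 1.0, n)             # parent seed size (standardized)
# Offspring keep half the parental deviation; the added noise variance of
# 0.75 makes the offspring variance equal the parent variance, so the
# correlation also comes out near 0.5.
offspring = 0.5 * parent + rng.normal(0.0, np.sqrt(0.75), n)

slope, intercept = np.polyfit(parent, offspring, 1)
r = np.corrcoef(parent, offspring)[0, 1]
print(f"slope = {slope:.2f}, r = {r:.2f}")
```

The slope being smaller than 1 is the "regression toward the mean" that Galton observed: unusually large parents tend to have offspring closer to average.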
Deep neural networks (DNNs) have recently achieved state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification. Given that DNNs are now able to classify objects with near-human-level performance, questions arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion as a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call "fooling images" (more generally, fooling examples).³

[Figure 1: Evolved images that are unrecognizable to humans.]

³ Nguyen (2015). "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images." Computer Vision and Pattern Recognition, IEEE.
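The mechanism, mutating an image until a classifier reports near-certain confidence in some class, can be sketched at toy scale. In the sketch below, the "network" is a frozen random linear softmax layer standing in for a trained DNN, and the search is a simple (1+1) hill climber; both are illustrative assumptions, not the paper's actual setup.

```python
# Toy sketch of evolving a "fooling image": hill-climb random pixel
# mutations to drive a classifier's confidence in one class upward.
# The classifier here is a fixed random linear softmax layer, an
# assumption standing in for a trained DNN.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_classes = 64, 10
W = rng.normal(size=(n_classes, n_pixels))   # frozen "network" weights

def confidence(img, target):
    """Softmax probability the toy classifier assigns to the target class."""
    logits = W @ img
    p = np.exp(logits - logits.max())        # numerically stable softmax
    return (p / p.sum())[target]

img = rng.uniform(0, 1, n_pixels)            # start from uniform noise
target = 3
best = confidence(img, target)
for _ in range(2000):                        # simple (1+1) evolutionary loop
    child = np.clip(img + rng.normal(0, 0.1, n_pixels), 0, 1)
    c = confidence(child, target)
    if c > best:                             # keep mutations that raise confidence
        img, best = child, c

print(f"confidence in class {target}: {best:.4f}")
```

Even this crude search typically drives the reported confidence very high, while the resulting `img` remains structureless noise to a human observer, which is the core of the fooling-image result.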
Data: accuracy and loss functions; overfitting vs. underfitting.
Models: linear and logistic regression; convolutional, recurrent, and adversarial neural networks; support vector machines; ensemble regression trees; etc.
Libraries: TensorFlow, Theano, Caffe, Torch, Keras, SciPy, NumPy.
Hello worlds: MNIST, ImageNet, Iris.
Networks: Inception, AlexNet, VGGNet.
Resources: Intro to ML by Andrew Ng (Coursera); Siraj Raval (YouTube); TensorFlow Summit videos; This Week in Machine Learning.
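As a hello-world tying together several of the items above (logistic regression, a loss function, accuracy), here is a minimal sketch in plain NumPy rather than any of the listed libraries: gradient descent on synthetic two-class data, tracking the cross-entropy loss. The data, learning rate, and iteration count are all illustrative choices.

```python
# Minimal logistic-regression hello world: gradient descent on
# synthetic 2-D data, tracking cross-entropy loss and final accuracy.
import numpy as np

rng = np.random.default_rng(42)
n = 200
# Two Gaussian blobs centered at (-1,-1) and (+1,+1).
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(+1, 1, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

w, b = np.zeros(2), 0.0

def loss_and_grad(w, b):
    p = 1 / (1 + np.exp(-(X @ w + b)))       # sigmoid predictions
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_w = X.T @ (p - y) / n
    grad_b = np.mean(p - y)
    return loss, grad_w, grad_b

losses = []
for _ in range(500):
    loss, gw, gb = loss_and_grad(w, b)
    losses.append(loss)
    w -= 0.5 * gw                            # plain gradient-descent step
    b -= 0.5 * gb

acc = np.mean((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y)
print(f"final loss {losses[-1]:.3f}, accuracy {acc:.2f}")
```

Plotting `losses` shows the steadily decreasing training curve that the loss-function and overfitting discussions above refer to; with noisier data or a more flexible model, the same plot is where underfitting and overfitting become visible.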