Slide 1

Detecting Asteroids with Neural Networks
Dustin Ingram
Advanced Artificial Intelligence, Winter 2012-13
Drexel University, Department of Computer Science
March 10, 2013

Slide 2

Outline
- What’s the goal?
- What’s the data?
- Getting started
- Building a feature set
- Building the neural network
- Training the network
- Results

Slide 3

The goal Build and train a neural network to correctly identify asteroids in astrophotography data.

Slide 4

So... How do we do it?

Slide 5

So... How do we really do it?

Slide 6

The data
The Sloan Digital Sky Survey: “One of the most ambitious and influential surveys in the history of astronomy.”
- Covers approx. 35% of the sky
- Largest uniform survey of the sky yet accomplished
- Data is freely available online
- Each image is 922x680 pixels

Slide 7

No content

Slide 8

No content

Slide 9

An example asteroid

Slide 10

An example asteroid

Slide 11

How does this work?
This exploits a property of CCDs:
- SDSS telescopes use five different filters
- The filter exposures are not simultaneous
- Moving objects therefore appear in a different location in each filter
- Always in the same order

Slide 12

An example asteroid

Slide 13

Getting started
Getting the initial training data:
- A small tool extracts potential candidates from full-scale images
- Extremely naïve: approx. 100:5 false positives to actual positives
- Very low false-negative rate (approx. 1:1000)
- Incredibly slow (a complex scan of hundreds of thousands of potentials)
- Manual classification, somewhat slow
- Yields approx. 250 valid items and 500 invalid items

Slide 14

The feature set
Good ideas for features:
- Ratio of valid hues to non-valid hues
- Best possible cluster collinearity
- Best possible average cluster distance

Slide 15

Feature: Ratio of valid hues to non-valid hues
The goal here is to match the colors, a.k.a. “hues”:
- First step: convert to HSV space
- For pixels in the valid value-spectrum (0.25 < v < 0.90):
  - How many are within 2 standard deviations of an optimal value?
  - What’s the ratio to ones that aren’t?
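A minimal sketch of this feature in Python, using the standard colorsys module. The optimal hue and its standard deviation are not given on the slide, so they are passed in as parameters here; all names are illustrative.

```python
import colorsys

def hue_ratio(pixels, optimal_hue, hue_std):
    """Fraction of pixels in the valid value-spectrum whose hue falls
    within 2 standard deviations of an optimal hue. optimal_hue and
    hue_std are assumptions; the slide gives only the value bounds.
    Hue wrap-around at 0/1 is ignored for simplicity."""
    near, far = 0, 0
    for r, g, b in pixels:  # RGB components in [0, 1]
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if not (0.25 < v < 0.90):  # outside the valid value-spectrum
            continue
        if abs(h - optimal_hue) <= 2.0 * hue_std:
            near += 1
        else:
            far += 1
    return near / (near + far) if (near + far) else 0.0
```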

Slide 16

An example HSV plot

Slide 17

Feature: Best possible cluster collinearity
k-means clustering:
- Uses the valid hues from the previous feature
- Attempts to cluster n points into k groups
- Here, k = 3
- Produces three centroids
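A minimal sketch of the clustering step (Lloyd's algorithm in NumPy, as a stand-in for whatever k-means implementation the project actually used; the points would be the image coordinates of the valid-hue pixels):

```python
import numpy as np

def kmeans(points, k=3, iters=20, seed=0):
    """Minimal k-means (Lloyd's algorithm). The slides use k = 3 so that
    one centroid can land on each of the asteroid's filter detections."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    # Initialize centroids at k distinct data points.
    centroids = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pts[labels == j].mean(axis=0)
    return centroids, labels
```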

Slide 18

An example asteroid

Slide 19

k-means clustering of the same asteroid

Slide 20

Feature: Best possible cluster collinearity
Collinearity: the property of a set of points that lie on the same line.
- Iterate the k-means clustering approx. 20 times
- The resulting metric is the ratio between the actual collinearity and the maximum potential collinearity
- Given points a, b, and c:

  colin = |(c.x − a.x) ∗ (b.y − a.y) + (c.y − a.y) ∗ (a.x − b.x)|
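The expression above is twice the signed area of the triangle abc, so it is zero exactly when the three points lie on one line. A direct translation (the normalization by the maximum potential collinearity is not spelled out on the slide, so only the raw quantity is shown):

```python
def collinearity(a, b, c):
    """The slide's metric for points a, b, c given as (x, y) pairs:
    zero when the points are collinear, growing with triangle area."""
    return abs((c[0] - a[0]) * (b[1] - a[1]) + (c[1] - a[1]) * (a[0] - b[0]))
```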

Slide 21

An example asteroid

Slide 22

k-means clustering of the same asteroid

Slide 23

An example non-asteroid

Slide 24

k-means clustering of the same non-asteroid

Slide 25

Feature: Best possible average cluster distance
- Uses the same k-means clusters from the previous features
- What is the average distance from any point in a cluster to the center of the cluster?
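A sketch of this feature, assuming each cluster is given as a list of (x, y) points:

```python
import math

def avg_cluster_distance(clusters):
    """Mean distance from each point to its own cluster's centroid,
    averaged over all points in all clusters."""
    total, count = 0.0, 0
    for pts in clusters:
        cx = sum(x for x, _ in pts) / len(pts)  # cluster centroid
        cy = sum(y for _, y in pts) / len(pts)
        for x, y in pts:
            total += math.hypot(x - cx, y - cy)
            count += 1
    return total / count
```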

Slide 26

An example asteroid

Slide 27

k-means clustering of the same asteroid

Slide 28

An example non-asteroid

Slide 29

k-means clustering of the same non-asteroid

Slide 30

A comparison of all three features

                Hue ratio   Collinearity   Cluster distance
  Asteroid      0.687       0.046          0.432
  Non-asteroid  0.376       0.388          0.557

We see that for a valid asteroid:
- The hue ratio is much higher
- The collinearity metric is much lower
- The mean cluster distance is smaller

Slide 31

Ok... where’s the AI?
This type of classification is extremely well suited to a neural network:
- We have a clear set of training data
- The output is either affirmative (1) or negative (0)
- Each of the input features can be resolved to a 0 → 1 metric
- A small number of input features can accurately define an item
- Neural network activation will be much faster than almost any hand-written algorithm we could come up with

Slide 32

Building the neural network
The resulting neural network:
- Uses supervised learning
- Uses a backpropagation trainer
- Three layers:
  - Input layer
  - Single hidden layer
  - Output layer
- Total of 8 “neurons”:
  - 3 input neurons (hue ratio, collinearity metric, distance metric)
  - 4 hidden neurons
  - 1 output neuron (1 if valid asteroid, 0 if invalid)
- Learning rate of 0.01, momentum of 0.99
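The references list pybrain.org, so the original was presumably built with PyBrain's buildNetwork(3, 4, 1) shortcut and a BackpropTrainer. As an illustrative stand-in (not the original code), here is a minimal NumPy sketch of the same 3-4-1 sigmoid architecture trained by backpropagation with momentum; bias terms are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyNet:
    """A 3-4-1 sigmoid network trained by backpropagation with momentum.
    Illustrative sketch only; defaults mirror the slide's learning rate
    and momentum."""

    def __init__(self, lr=0.01, momentum=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (3, 4))  # input -> hidden weights
        self.W2 = rng.normal(0.0, 0.5, (4, 1))  # hidden -> output weights
        self.v1 = np.zeros_like(self.W1)        # momentum buffers
        self.v2 = np.zeros_like(self.W2)
        self.lr, self.momentum = lr, momentum

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)           # hidden activations
        return sigmoid(self.h @ self.W2)        # output in (0, 1)

    def train_step(self, X, y):
        out = self.forward(X)
        # Gradient of the squared error through both sigmoids.
        d_out = (out - y) * out * (1.0 - out)
        d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
        # Momentum update of both weight matrices.
        self.v2 = self.momentum * self.v2 - self.lr * (self.h.T @ d_out)
        self.v1 = self.momentum * self.v1 - self.lr * (X.T @ d_hid)
        self.W2 += self.v2
        self.W1 += self.v1
```

A training loop like the one described on slide 34 would call train_step once per iteration over the ~750 labeled feature vectors.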

Slide 33

Building the neural network
The resulting neural network (diagram): input neurons I1-I3, hidden neurons H1-H4, output neuron O1

Slide 34

Training the network
- Approx. 250 valid items
- Approx. 500 invalid items
- Trained for 5,000 iterations
- Took approx. 3 hours
- Probably could have gotten by with fewer iterations

Slide 35

Results

           Found valid   Actual valid   Total   False positive
  Trial 1  8             5              190     37.50%
  Trial 2  23            21             286     8.70%
  Trial 3  54            46             955     14.81%

Slide 36

Results

           Found invalid   Actual invalid   Total   False negative
  Trial 1  182             182              190     0.00%
  Trial 2  263             262              286     0.38%
  Trial 3  901             892              955     1.00%

Slide 37

Conclusion
- Using a neural network lets us classify candidates faster and more accurately than the naïve scan alone
- Time must be spent devising good features for the data
- When paired with human validation, the process would become very quick and very accurate

Slide 38

References
- http://www.sdss.org/
- http://pybrain.org/
- http://en.wikipedia.org/wiki/Sloan_Digital_Sky_Survey
- http://en.wikipedia.org/wiki/Collinearity
- http://en.wikipedia.org/wiki/K-means_clustering