Experimenting with TensorFlow

Breandan Considine
November 05, 2016
From implementing new research to validating results, running machine learning experiments can often be a process of trial and error. TensorFlow is an open source computing framework that places state of the art algorithms and tools into developers' hands, allowing you to rapidly iterate experiments, monitor their results and scale into production. We'll show you how to train machine learning pipelines from conception to release with TensorFlow.


Transcript

  1. Who am I? • Background in Computer Science, Machine Learning

    • Worked for a small ad-tech startup out of university • Spent two years as Developer Advocate @JetBrains • Interested in machine learning and speech recognition • Enjoy writing code, traveling to conferences, reading • Say hello! @breandan | breandan.net | [email protected]
  2. So what’s a Tensor anyway? • A “tensor” is just

    an n-dimensional array • Useful for working with complex data • We use (tiny) tensors every day!
  3. So what’s a Tensor anyway? • A “tensor” is just

    an n-dimensional array • Useful for working with complex data • We use (tiny) tensors every day! 't'
  7. So what’s a Tensor anyway? • A “tensor” is just

    an n-dimensional array • Useful for working with complex data • We use tiny (and large) tensors every day! 't'
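To make the "n-dimensional array" idea from the slides concrete, here is a small sketch using NumPy's ndarrays (NumPy is an assumption here; the deck itself is about TensorFlow, whose tensors follow the same rank/shape model):

```python
import numpy as np

scalar = np.array(5.0)              # rank-0 tensor: a single number
vector = np.array([1.0, 2.0, 3.0])  # rank-1 tensor: shape (3,)
matrix = np.zeros((2, 3))           # rank-2 tensor: shape (2, 3)
image = np.zeros((28, 28, 3))       # rank-3 tensor: e.g. a 28x28 RGB image

# ndim is the rank: how many indices you need to pick out one element
print(scalar.ndim, vector.ndim, matrix.ndim, image.ndim)  # 0 1 2 3
```

The rank-3 example is the "tensors every day" point: an ordinary color photo is already a 3-dimensional tensor of height x width x channels.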
  8. What are they good for? • Modeling complex systems, data

    sets • Capturing higher order correlations • Representing dynamic relationships • Doing machine learning!
  9. 0 1

  10. Cool learning algorithm

    def classify(datapoint, weights):
        prediction = sum(x * y for x, y in zip([1] + datapoint, weights))
        if prediction < 0:
            return 0
        else:
            return 1
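Run on its own, the classifier above is a linear threshold unit: a weighted sum (the leading 1 pairs with the bias weight) followed by a step function. As a sketch, here it is applied with hand-picked weights for logical OR — the weights are illustrative, not from the deck:

```python
def classify(datapoint, weights):
    # the prepended 1 pairs with the bias weight w0
    prediction = sum(x * y for x, y in zip([1] + datapoint, weights))
    if prediction < 0:
        return 0
    else:
        return 1

# hand-picked weights [bias, w1, w2] that implement logical OR
weights = [-0.5, 1, 1]
print([classify([a, b], weights) for a in (0, 1) for b in (0, 1)])
# → [0, 1, 1, 1]
```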
  12. Cool learning algorithm

    class Datum:
        def __init__(self, features, label):
            self.features = [1] + features
            self.label = label
  13. Cool learning algorithm

    def train(data_set):
        weights = [0] * len(data_set[0].features)
        total_error = threshold + 1
        while total_error > threshold:
            total_error = 0
            for item in data_set:
                # item.features already includes the bias input, which
                # classify prepends itself, so strip it back off here
                error = item.label - classify(item.features[1:], weights)
                weights = [w + RATE * error * i
                           for w, i in zip(weights, item.features)]
                total_error += abs(error)
  16. Cool learning algorithm

    weights = [w + RATE * error * i for w, i in zip(weights, item.features)]

    [diagram: inputs 1, i1, i2, …, in multiplied by weights w0, w1, w2, …, wn and summed (Σ)]
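Stitched together, the pieces from the slides train end to end. This is a runnable sketch: RATE and the stopping threshold are not given on the slides, so the values below are assumptions, and the OR function is used as training data because it is linearly separable, which guarantees the perceptron loop terminates:

```python
# Hyperparameters left undefined on the slides — illustrative values
RATE = 0.1
threshold = 0.0

def classify(datapoint, weights):
    # the prepended 1 pairs with the bias weight w0
    prediction = sum(x * y for x, y in zip([1] + datapoint, weights))
    return 0 if prediction < 0 else 1

class Datum:
    def __init__(self, features, label):
        self.features = [1] + features  # bias input folded into features
        self.label = label

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1
    while total_error > threshold:
        total_error = 0
        for item in data_set:
            # strip the stored bias, since classify prepends it itself
            error = item.label - classify(item.features[1:], weights)
            weights = [w + RATE * error * i
                       for w, i in zip(weights, item.features)]
            total_error += abs(error)
    return weights

# learn logical OR (linearly separable, so training converges)
data = [Datum([0, 0], 0), Datum([0, 1], 1),
        Datum([1, 0], 1), Datum([1, 1], 1)]
w = train(data)
print([classify(d.features[1:], w) for d in data])  # → [0, 1, 1, 1]
```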
  19. Even Cooler Algorithm! (Backprop)

    train(trainingSet):
        initialize network weights randomly
        until average error stops decreasing (or you get tired):
            for each sample in trainingSet:
                prediction = network.output(sample)
                compute error (prediction - sample.output)
                compute error of (hidden -> output) layer weights
                compute error of (input -> hidden) layer weights
                update weights across the network
        save the weights
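The backprop pseudocode above can be sketched as a tiny NumPy network — one hidden layer, sigmoid activations, squared error, trained on XOR. The layer sizes, learning rate, and iteration count are illustrative assumptions, not values from the deck:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# initialize network weights randomly
W1 = rng.normal(scale=0.5, size=(2, 4))  # input -> hidden
W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

def loss():
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

before = loss()
for _ in range(5000):
    # forward pass: prediction = network.output(sample)
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # error of the (hidden -> output) layer weights
    d_out = (out - y) * out * (1 - out)
    # error of the (input -> hidden) layer weights
    d_h = (d_out @ W2.T) * h * (1 - h)
    # update weights across the network
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h
after = loss()
print(before, after)  # average error decreases over training
```

This is exactly the deck's loop in miniature: each pass computes the prediction, propagates the error backwards one layer at a time, and applies the updates. TensorFlow's contribution is deriving those gradient lines automatically from the forward graph.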
  20. Further resources • CS231 Course Notes • TensorFlow Models •

    Visualizing MNIST • Neural Networks and Deep Learning • Andrew Ng’s Machine Learning class • Awesome Public Datasets • Amy Unruh & Eli Bixby's TensorFlow Workshop