SER594 Lecture 07

Human Computer Interaction
Neural Net
(201903)

Javier Gonzalez

April 23, 2019

Transcript

  1. SER594 Human-Computer Interaction Lecture 07 Neural Net Javier Gonzalez-Sanchez, PhD

    javiergs@asu.edu javiergs.engineering.asu.edu Office Hours: By appointment
  2. Integration [diagram: agent integration into a multimodal agent system]

  3. Continuous Dimensional Model

  4. Continuous Dimensional Model [diagram: the space split into octants labeled with +/- sign combinations]

  5. Continuous Dimensional Model

  6. What about? • Skin Conductance • Pupil Size • Heart Rate
  7. Pattern Recognition

  8. Tools

  9. Neural Network | Neuron input output

  10. Neural Network | Neuron X1 X2 X3 X4

  11. Neural Network | Neuron X1 X2 X3 X4 • Add • Normalize
  12. Neural Network | Neuron X1=1 X2=0.5 X3=0.5 X4=-1 • Add • Normalize → 0.73
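The "add, then normalize" neuron on this slide can be sketched in Java (a minimal sketch, not from the lecture; the class and method names are my own). Summing the slide's inputs gives 1.0, and normalizing with the logistic (sigmoid) function gives the 0.73 shown on the slide.

```java
// Sketch of slide 12's neuron: add the inputs, then normalize the sum
// into (0, 1) with the sigmoid function. Names are illustrative.
public class Neuron {
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    public static void main(String[] args) {
        double[] x = {1.0, 0.5, 0.5, -1.0};  // X1..X4 from the slide
        double sum = 0;
        for (double xi : x) sum += xi;       // add: 1 + 0.5 + 0.5 - 1 = 1.0
        double out = sigmoid(sum);           // normalize: sigmoid(1.0)
        System.out.printf("%.2f%n", out);    // prints 0.73
    }
}
```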
  13. Neural Network | Weights X1=1 X2=0.5 X3=0.5 X4=-1

    W1=0.5 W2=1 W3=1 W4=2 → 0.37
    W1=0.5 W2=1 W3=-10 → 0.18, -3.7
  14. Neural Network | Weights

  15. Neural Network Weights are important. And, how do we train the model?
  16. Neural Network X1 X2 X3 X4 W1 W2 W3 W4

    sig (W1*X1 + W2*X2 + W3*X3 + W4*X4)
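The formula on this slide, sig(W1*X1 + W2*X2 + W3*X3 + W4*X4), can be sketched in Java (a minimal sketch, not from the lecture; names are my own). Using the X values from slide 12 and the first weight set from slide 13 (W1=0.5, W2=1, W3=1, W4=2), the weighted sum is -0.5 and sigmoid(-0.5) ≈ 0.3775, consistent with the 0.37 shown on that slide.

```java
// Sketch of slide 16's neuron: sigmoid of the weighted sum of the inputs.
public class WeightedNeuron {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    static double neuron(double[] x, double[] w) {
        double z = 0;
        for (int i = 0; i < x.length; i++) z += w[i] * x[i]; // W1*X1 + ... + W4*X4
        return sigmoid(z);                                   // squash into (0, 1)
    }

    public static void main(String[] args) {
        double[] x = {1.0, 0.5, 0.5, -1.0};  // X1..X4 from slide 12
        double[] w = {0.5, 1.0, 1.0, 2.0};   // W1..W4 from slide 13
        // weighted sum = 0.5 + 0.5 + 0.5 - 2 = -0.5
        System.out.printf("%.4f%n", neuron(x, w)); // prints 0.3775
    }
}
```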
  17. Neural Network X1 X2 X3 X4

    W1,1 W2,1 W3,1 W4,1 → sig (W11*X1 + W21*X2 + W31*X3 + W41*X4)
    W1,2 W2,2 W3,2 W4,2 → sig (W12*X1 + W22*X2 + W32*X3 + W42*X4)
    W[i][j] = W[weight][neuron]
  18. Neural Network X1 X2 X3 X4

    layer_1: sig (W111*X1 + W121*X2 + W131*X3 + W141*X4)
             sig (W112*X1 + W122*X2 + W132*X3 + W142*X4)
    layer_2: weights W2,1,1 W2,2,1 W2,1,2 W2,2,2 applied to the layer_1 outputs
    W[k][i][j] = W[layer][weight][neuron]
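The layered indexing W[layer][weight][neuron] from this slide can be sketched as a forward pass in Java (a minimal sketch, not from the lecture; the weight values here are invented for illustration, and the class and method names are my own).

```java
// Sketch of slides 17-18: each layer computes out[j] = sig(sum_i W[i][j]*in[i]),
// with the full network's weights stored as W[layer][weight][neuron].
public class ForwardPass {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // One fully connected layer with sigmoid activation.
    static double[] layer(double[] in, double[][] w) {
        double[] out = new double[w[0].length];
        for (int j = 0; j < out.length; j++) {
            double z = 0;
            for (int i = 0; i < in.length; i++) z += w[i][j] * in[i];
            out[j] = sigmoid(z);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] x = {1.0, 0.5, 0.5, -1.0};  // X1..X4
        double[][][] w = {
            // layer 1: 4 inputs -> 2 neurons (illustrative values)
            {{0.5, -0.5}, {1.0, 1.0}, {1.0, -1.0}, {2.0, 0.5}},
            // layer 2: 2 inputs -> 1 neuron (illustrative values)
            {{1.0}, {-1.0}}
        };
        double[] h = layer(x, w[0]);  // layer_1 outputs
        double[] y = layer(h, w[1]);  // layer_2 output
        System.out.printf("%.4f%n", y[0]);
    }
}
```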
  19. Neural Network | Weights input output

  20. Cost Function
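The slide does not name a specific cost function; one common choice is mean squared error between the network's outputs and the desired outputs, sketched here in Java (names and numbers are illustrative, not from the lecture).

```java
// Sketch: mean squared error cost, one common choice of cost function.
public class Cost {
    static double mse(double[] predicted, double[] target) {
        double sum = 0;
        for (int i = 0; i < predicted.length; i++) {
            double d = predicted[i] - target[i];
            sum += d * d;               // squared error per output
        }
        return sum / predicted.length;  // averaged over outputs
    }

    public static void main(String[] args) {
        double[] y      = {0.73, 0.38}; // network outputs (illustrative)
        double[] target = {1.0, 0.0};   // desired outputs
        System.out.printf("%.4f%n", mse(y, target));
    }
}
```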

  21. Back propagation • Partial derivative of the cost function with

    respect to any weight in the network. • We want to see how the cost changes as we let just one of the weights change while holding all the others constant. • Black-box
  22. Example

  23. Hand writing • 60,000 training images • 10,000 test

    images • Grayscale digit images of 28x28 pixels • Each pixel contains a number from 0-255 giving the gray level: 0 white and 255 black. http://ramok.tech/tag/java-neural-networks-example/
  24. Hand writing • Input (X) – array of 784 numbers 0-255

    • 60,000 of these • Output – array of 10 elements (probabilities) • More layers can make the model better, but training will also be much slower. http://ramok.tech/tag/java-neural-networks-example/
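The input and output encoding described on this slide can be sketched in Java (a minimal sketch, not from the linked tutorial; the class name, pixel index, and label are illustrative): the 784 gray values are scaled from 0-255 down to 0-1, and the label becomes a 10-element array with probability 1 at the digit's index.

```java
// Sketch of slide 24's encoding: 784 scaled pixels in, 10 probabilities out.
public class Encoding {
    public static void main(String[] args) {
        int[] pixels = new int[784];   // one 28x28 image, values 0-255
        pixels[100] = 255;             // illustrative: one fully black pixel

        double[] input = new double[784];
        for (int i = 0; i < 784; i++) input[i] = pixels[i] / 255.0; // scale to 0-1

        int label = 7;                 // illustrative: the image shows a "7"
        double[] output = new double[10];
        output[label] = 1.0;           // probability 1 at index 7, 0 elsewhere

        System.out.println(input[100]); // prints 1.0
        System.out.println(output[7]);  // prints 1.0
    }
}
```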
  25. Hand writing

    public void train(Integer trainData, Integer testFieldValue) {
      initSparkSession();
      List<LabeledImage> labeledImages = IdxReader.loadData(trainData);
      List<LabeledImage> testLabeledImages = IdxReader.loadTestData(testFieldValue);
      Dataset<Row> train = sparkSession.createDataFrame(labeledImages, LabeledImage.class).checkpoint();
      Dataset<Row> test = sparkSession.createDataFrame(testLabeledImages, LabeledImage.class).checkpoint();
      // in=28x28=784, hidden layers (128,64), out=10
      int[] layers = new int[]{784, 128, 64, 10};
      MultilayerPerceptronClassifier trainer = new MultilayerPerceptronClassifier()
          .setLayers(layers)
          .setBlockSize(128)
          .setSeed(1234L)
          .setMaxIter(100);
      model = trainer.fit(train);
      // evalOnTest(test);
    }
  26. Homework http://ramok.tech/tag/java-neural-networks-example/

  27. Homework Recognize states

  28. SER594 – Human Computer Interaction Javier Gonzalez-Sanchez javiergs@asu.edu Spring 2019

    Disclaimer. These slides can only be used as study material for the SER594 course at ASU. They cannot be distributed or used for another purpose.