Constructivist Augmented Machine Learning: Using video games and human-in-the-loop techniques to directly transfer human knowledge into ML models

finid
June 28, 2019


ML has made many strides over the past several years, but in most cases the overall training methodology has remained consistent. The current ML training process mimics Kolb's Experiential Learning paradigm found in classrooms, which drives students to learn from personal experimentation, often without any outside instruction. This technique can provide rapid understanding, but it cannot take advantage of the knowledge and experience offered by expert guidance.

Constructivism is a broader learning theory that incorporates knowledge gained from past experiences as well as social interaction and collaboration with an expert. This allows students to learn from an instructor's past experience and knowledge as a supplement to the experimentation process.

BALANCED's HEWMEN platform utilizes a Constructivist Augmented Machine Learning (CAML) methodology that allows humans to interact with ML algorithms and techniques. CAML is a human-in-the-loop methodology scaled using human computation video game techniques. This process allows algorithm guidance by augmenting inputs as well as directly modifying hidden layers, weights, and connections throughout the training process. Humans can identify patterns and optimization opportunities during training and subsequently modify the ML model to take advantage of that intuition. In short, CAML allows for the direct transference of human knowledge into ML models.
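The idea of directly modifying weights mid-training can be sketched minimally. The array names and the replace-nearest-kernel heuristic below are illustrative assumptions, since the HEWMEN/CAML internals are not described in this talk:

```python
import numpy as np

# Hypothetical sketch: a human expert overwrites one convolution kernel
# mid-training with a pattern they recognized (e.g. an edge detector).
# `model_kernels` and `human_kernel` are illustrative names, not part of
# any published CAML API.

rng = np.random.default_rng(0)

# A "layer" of eight 3x3 kernels with randomly initialized weights.
model_kernels = rng.normal(size=(8, 3, 3))

# Pattern supplied by a human player: a vertical-edge detector.
human_kernel = np.array([[-1.0, 0.0, 1.0],
                         [-1.0, 0.0, 1.0],
                         [-1.0, 0.0, 1.0]])

# Direct knowledge transfer: replace the kernel closest to the human's
# suggestion, so gradient descent resumes from the expert's pattern.
distances = np.linalg.norm(model_kernels - human_kernel, axis=(1, 2))
closest = int(np.argmin(distances))
model_kernels[closest] = human_kernel
```

Training would then continue normally, with backpropagation refining the human-seeded pattern rather than rediscovering it from scratch.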

Adding CAML to an existing ML pipeline can improve model accuracy, compress model size, or allow model improvement in the absence of large data sets. This talk will show examples of CAML being used to guide ML models when analyzing medical and satellite imagery, as well as knowledge transfer directly into the Leela Zero deep learning model (an open-source version of AlphaGo Zero). The process is compatible with HEWMEN's distributed ML techniques, such as federated learning, which allows for scaling of CAML on both the human and machine components.

Key Takeaways:
1. See examples of how human-in-the-loop techniques can dramatically accelerate ML training as well as techniques to extract knowledge from trained ML models.
2. Demonstration of BALANCED’s HEWMEN platform, which combines video games, human intuition and distributed computing into a single cloud environment.


Transcript

  1. None
  2. Big Data & AI Conference, Dallas, Texas, June 27–29, 2019, www.BigDataAIconference.com
  3. Constructivist Augmented Machine Learning (CAML). Corey Clark, PhD, BALANCED Media | Technology. Using human-in-the-loop techniques to transfer human knowledge into ML models
  4. HEWMEN: Deep Learning, AI, PCG, Computer Vision, Algorithm Analytics, Machine Learning, Serverless Cloud, Decentralized, Blockchain, Distributed Computing, Education, Human Computation, Entertainment, Citizen Science, Therapies/Treatment, Video Games, Volunteer Computing, Federated Learning, Human-in-the-Loop
  5. Mimic what we see.

  6. https://www.thermalvac.com/wp-content/uploads/2017/04/CitySteel-Annealing-1-1.jpg
     https://www.symmetrymagazine.org/sites/default/files/styles/2015_hero/public/images/standard/12-0272-03D.feature.jpg?itok=v5JW86lv
     https://i.kinja-img.com/gawker-media/image/upload/s--xbPE1CoV--/c_scale,f_progressive,q_80,w_800/sga8ylxu4
     https://cdn-images-1.medium.com/max/1600/1*TnoOQTgvhc25t0hDmQMKng

  7. • Scientists put a roundworm's (Caenorhabditis elegans) brain in a Lego robot. • They mapped the connections between the worm's 302 neurons and simulated them in software. • An IP address and port number are used to address each neuron.
  8. Evolution of intelligence (image: https://thumbs.dreamstime.com/z/vector-evolution-concept-ape-to-cyborg-robots-growth-process-monkey-caveman-businessman-suit-artificial-legs-)

  9. We are striving for this (image: https://thumbs.dreamstime.com/z/vector-evolution-concept-ape-to-cyborg-robots-growth-process-monkey-caveman-businessman-suit-artificial-legs-)

  10. But are training for this (image: https://thumbs.dreamstime.com/z/vector-evolution-concept-ape-to-cyborg-robots-growth-process-monkey-caveman-businessman-suit-artificial-legs-)

  11. How Does Human Learning Occur?

  12. https://ripslawlibrarian.files.wordpress.com/2017/03/bcc-

  13. Constructivism & Experiential Learning • Knowledge is constructed/discovered and unknown • More than one reality • Based upon the learner's experience • Not just a single solution to a problem • Learner focused and learner in control • Experiential Learning is a subset of Constructivism
  14. Experiential Learning (Kolb)

  15. Experiential Learning (Kolb) mapped onto ML training: Sample Training Data, Error Calculation, Backprop, Validate
  16. Constructivism • Experiential Learning ignores collaboration • Learning can include experimentation, along with collaboration with a mentor/trainer/teacher • "Learn from the mistakes of others. You can't live long enough to make them all yourself."
  17. How Do Humans and Machines Collaborate?

  18. Constructivist Augmented Machine Learning (CAML): Human Computation Games & Machine Learning
  19. Clark, C., M. Ouellette, 2017. Games as a Distributed Computing Resource. Proceedings of the International Conference on the Foundations of Digital Games, Cape Cod, MA, Aug. 2017. HEWMEN®: Community Resources (voluntary grid computing) + Human Intelligence (game-enabled, human-guided machine learning) + Socially Leveraged Ecosystem (innovation engine, collaboration platform)
  20. BALANCED | SMU, Retina Foundation, Johns Hopkins. OCT Image Analysis for Age-Related Macular Degeneration Using CAML and HCGs™. Clark, C., M. Ouellette, K. Csaky 2019. Training Players to Analyze Age-Related Macular Degeneration Optical Coherence Tomography Scans Using Human Computation Gaming.
  21. RETINA FOUNDATION – Macular Degeneration

  22. Learning and Filters. A quick look at the learning progression from the first set: Image 1 (humans clearly experimenting) through Image 4 (generally learned, with some experiments)
  23. RETINA FOUNDATION – Macular Degeneration

  24. Adding a CAML Input Filter via HCGs: OCT Image → HCG → CAML Input Filter → Deep Learning → Bruch's Membrane Boundaries
  25. OCT CAML Input Filter Modification Process: Original OCT Scan + CAML Input Filter = Combined Focus. This provides a focused region for the DL algorithm to attend to.
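The "Combined Focus" step above can be sketched as an element-wise combination of the scan with a player-generated mask. The actual HEWMEN pipeline is not public; the array names and the multiplicative combination are assumptions:

```python
import numpy as np

# Hedged sketch: multiply the original OCT scan by a player-drawn input
# filter so the DL model sees only the region the humans highlighted.

# Toy 8x8 grayscale "OCT scan" with intensities in [0, 1].
oct_scan = np.linspace(0.0, 1.0, 64).reshape(8, 8)

# Player-drawn filter: 1.0 inside the region of interest, 0.0 outside.
caml_filter = np.zeros((8, 8))
caml_filter[2:6, 2:6] = 1.0

# Element-wise combination suppresses everything outside the mask,
# focusing the network's input on the highlighted band.
combined = oct_scan * caml_filter
```

A soft mask (values between 0 and 1) would work the same way, letting players express confidence rather than a hard boundary.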
  26. [Chart] U-Net accuracy (0–1) over epochs 0–23: CAML U-Net vs. Standard U-Net
  27. U-Net Training Gradient: Standard U-Net vs. CAML Input Filter U-Net

  28. None
  29. None
  30. Convolutional Kernel Optimization for the Leela Zero DNN using CAML™

  31. Leela Zero Network • Leela Zero is the open-source version of AlphaGo Zero • Currently at or better than human professional level (depending on hardware) • Weights for many generations of networks are available. Image: http://zero.sjeng.org/home
  32. Network structure of Leela Zero. Image from https://applied-data.science/static/main/res/alpha_go_zero_cheat_sheet.png

  33. Visualizing hidden layers with a given input, like fMRI studies of human brains
  34. None
  35. Leela Zero Layer Progression Visualization (Model Input): Stable, Enhance, Growing
  36. Kernels visualized over time

  37. Patterns in kernels over time

  38. Clean the kernels. The top 32 centroid kernels are adjusted with the following methods: sharpening, forcing symmetry, removing noise
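The three clean-up operations named on the slide can be sketched on a single 3x3 kernel. The thresholds, the sharpening gain, and the exact definition of "symmetry" are assumptions; the talk does not specify them:

```python
import numpy as np

# Hedged sketch of the slide's kernel clean-up steps. Parameter values
# (eps, gain) and the mirror-averaging definition of symmetry are
# illustrative choices, not the talk's actual settings.

def force_symmetry(k):
    """Average the kernel with its horizontal and vertical mirrors."""
    return (k + np.fliplr(k) + np.flipud(k) + np.flipud(np.fliplr(k))) / 4.0

def remove_noise(k, eps=0.05):
    """Zero out weights whose magnitude is below a small threshold."""
    cleaned = k.copy()
    cleaned[np.abs(cleaned) < eps] = 0.0
    return cleaned

def sharpen(k, gain=1.5):
    """Exaggerate each weight's deviation from the kernel mean."""
    return k.mean() + gain * (k - k.mean())

# A noisy but roughly symmetric kernel, as a human might spot it.
noisy = np.array([[0.90, 0.02, 0.88],
                  [0.01, 0.50, 0.03],
                  [0.91, 0.04, 0.87]])

cleaned = sharpen(remove_noise(force_symmetry(noisy)))
```

All three operations preserve mirror symmetry once it has been forced, so they can be composed in any order after `force_symmetry`.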
  39. [Charts] Distance to kernel for clusters 1–7, raw and normalized
  40. [Chart] Average kernel distance to cluster centroid over different networks (LZ_187–LZ_193)
  41. [Charts] Kernel distance across networks LZ_187–LZ_193: original kernel vs. ideal kernel
  42. Results • Of the 32 kernels that were adjusted: • 7 had their minimum distance shifted to a newer network • 6 showed flattening in newer networks • Thus 40.6% of the kernels were improved upon • 9.4% were worse than the original • The remaining showed no significant change
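The percentages quoted above follow from the kernel counts. Note the count of 3 regressed kernels is implied by the quoted 9.4% (3/32), not stated directly on the slide:

```python
# Arithmetic check of the slide's results: of 32 adjusted kernels,
# 7 shifted their minimum distance to a newer network and 6 flattened
# (both counted as improvements), while 3 regressed (implied by 9.4%).
adjusted = 32
improved = 7 + 6   # 13 kernels
worse = 3          # assumption: 9.4% of 32 rounds to 3 kernels

improved_pct = improved / adjusted * 100   # 40.625 -> quoted as 40.6%
worse_pct = worse / adjusted * 100         # 9.375  -> quoted as 9.4%
```

The remaining 16 kernels (50%) showed no significant change.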
  43. QUESTIONS? • Corey Clark, PhD: cclark@bmt.world • Robert Atkins: Robert@bmt.world