
Constructivist Augmented Machine Learning: Using video games and human-in-the-loop techniques to directly transfer human knowledge into ML models

finid
June 28, 2019


ML has made great strides over the past several years, but in most cases the overall training methodology has remained the same. The current ML training process mimics Kolb's Experiential Learning paradigm found in classrooms, which drives students to learn from personal experimentation, often without outside instruction. This technique can produce rapid understanding, but it cannot take advantage of the knowledge and experience that expert guidance provides.

Constructivism is a broader learning theory that incorporates knowledge gained from past experience as well as social interaction and collaboration with an expert. This allows students to learn from an instructor's past experience and knowledge as a supplement to the experimentation process.

BALANCED's HEWMEN platform uses a Constructivist Augmented Machine Learning (CAML) methodology that allows humans to interact with ML algorithms and techniques. CAML is a human-in-the-loop methodology scaled by using human computation video game techniques. This process allows algorithm guidance by augmenting inputs as well as directly modifying hidden layers, weights, and connections throughout the training process. Humans are capable of identifying patterns and optimization opportunities during training and can subsequently modify the ML model to take advantage of the human's intuition. In short, CAML allows for the direct transference of human knowledge into ML models.
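The talk does not specify how human edits are encoded, but the idea of directly modifying weights and connections can be sketched minimally. Here, a hypothetical "edit mask" from a player prunes or strengthens individual connections in one layer's weight matrix (all names and the multiplicative scheme are assumptions, not the HEWMEN implementation):

```python
import numpy as np

def apply_human_edit(weights, mask):
    """Apply a human-suggested edit to one layer's weight matrix.

    `mask` marks connections a player flagged: 0 prunes a connection,
    values > 1 strengthen it, and 1 leaves it untouched.
    """
    return weights * mask

# Hypothetical hidden-layer weights and a player's edit mask.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
mask = np.ones((4, 4))
mask[0, :] = 0.0                  # player prunes all connections out of unit 0
edited = apply_human_edit(w, mask)
print(edited[0])                  # pruned row is now all zeros
```

Training would then continue from the edited weights, so the human's intuition persists in the model rather than being re-learned from data.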

Adding CAML to an existing ML pipeline can improve model accuracy, compress model size, or allow model improvement in the absence of large data sets. This talk will show examples of CAML being used to guide ML models when analyzing medical and satellite imagery, as well as knowledge transfer directly into the Leela Zero deep learning model (the open-source version of AlphaGo Zero). The process is compatible with HEWMEN's distributed ML techniques, such as federated learning, which allows for scaling of CAML on both the human and machine components.

Key Takeaway Points:
1. See examples of how human-in-the-loop techniques can dramatically accelerate ML training as well as techniques to extract knowledge from trained ML models.
2. Demonstration of BALANCED’s HEWMEN platform, which combines video games, human intuition and distributed computing into a single cloud environment.


Transcript

  1. Big Data & AI Conference Dallas, Texas June 27 –

    29, 2019 www.BigDataAIconference.com
  2. Constructivist Augmented Machine Learning (CAML)

    Corey Clark, PhD BALANCED Media | Technology Using human-in-the-loop techniques to transfer human knowledge into ML models
  3. Deep Learning AI PCG Computer Vision Algorithm Analytics Machine

    Learning Serverless Cloud Decentralized Blockchain Distributed Computing Education Human Computation Entertainment Citizen Science Therapies/Treatment Video Games Volunteer Computing HEWMEN Federated Learning Human-in-the-Loop
  4. • Scientists put a roundworm’s (Caenorhabditis elegans) brain in a

    Lego robot. • They mapped the connections between the worm's 302 neurons and simulated them in software. • An IP address and port number are used to address each neuron.
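The addressing scheme on this slide can be sketched as a tiny UDP exchange: each simulated neuron listens on its own (IP, port) pair, so a "spike" is just a datagram sent to that address. The address, message format, and single-process loopback here are illustrative assumptions, not the actual experiment's protocol:

```python
import socket

# One simulated neuron, listening on its own (IP, port) address.
neuron = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
neuron.bind(("127.0.0.1", 0))          # OS picks a free port
neuron_addr = neuron.getsockname()     # this neuron's (IP, port) identity

# Another process (or neuron) excites it by sending a datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"spike", neuron_addr)

msg, _ = neuron.recvfrom(1024)
print(msg.decode())                    # spike
neuron.close()
sender.close()
```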
  5. Constructivism & Experiential Learning • Knowledge is constructed/discovered and unknown

    • More than one reality • Based upon the learner's experience • Not just a single solution to a problem • Learner focused and learner in control • Experiential Learning is a subset of Constructivism
  6. Experiential Learning (Kolb) mapped onto the ML training loop:

    Sample Training Data → ML Training → Error Calculation → Backpropagation → Validate
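The cycle on this slide is just the standard supervised training loop. A minimal sketch, using a tiny linear model on synthetic data (the data and hyperparameters are assumptions for illustration only):

```python
import numpy as np

# Synthetic regression data: y ≈ 3x plus a little noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)
w = 0.0                                      # single learnable weight

for epoch in range(50):
    idx = rng.choice(len(X), 32)             # sample training data
    xb, yb = X[idx, 0], y[idx]
    pred = w * xb                            # ML training (forward pass)
    err = pred - yb                          # error calculation
    grad = 2 * (err * xb).mean()             # backpropagation
    w -= 0.1 * grad                          # weight update

val_mse = ((w * X[:, 0] - y) ** 2).mean()    # validate
print(round(w, 2), round(val_mse, 3))
```

The model "experiments" its way to w ≈ 3 purely from its own trial and error, which is exactly the limitation the next slide's constructivist framing addresses.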
  7. Constructivism • Experiential Learning ignores Collaboration • Learning can include

    experimentation, along with collaboration with a mentor/trainer/teacher. "Learn from the mistakes of others. You can't live long enough to make them all yourself."
  8. Clark, C., M. Ouellette, 2017. Games as a Distributed Computing

    Resource. Proceedings of the International Conference on the Foundations of Digital Games, Cape Cod, MA, Aug. 2017. ® + Community Resources: VOLUNTARY Grid Computing + Human Intelligence: GAME ENABLED Human Guided Machine Learning + Socially Leveraged EcoSystem: INNOVATION ENGINE Collaboration Platform
  9. BALANCED | SMU, Retina Foundation, Johns Hopkins OCT Image Analysis

    for Age-Related Macular Degeneration Using CAML and HCGs™ Clark, C., M. Ouellette, K. Csaky, 2019. Training Players to Analyze Age-Related Macular Degeneration Optical Coherence Tomography Scans Using Human Computation Gaming.
  10. Learning and Filters A quick look at the learning progression

    from the first set: Image 1 (humans clearly experimenting) to Image 4 (generally learned, with some experiments)
  11. Adding CAML Input Filter via HCGs: OCT Image → HCG →

    CAML Input Filter → Deep Learning → Bruch's Membrane Boundaries
  12. OCT CAML Input Filter Modification Process: Original OCT Scan +

    CAML Input Filter = Combined Focus. This gives the DL algorithm a clearly focused region to work with.
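The combination step can be sketched as masking: the player-drawn filter weights the original scan so the network mostly sees the region the humans highlighted. Multiplicative masking and the synthetic images here are assumptions; the talk does not specify the exact operation:

```python
import numpy as np

# Stand-in for an OCT scan (random intensities) and a player-drawn filter.
scan = np.random.default_rng(2).random((64, 64))
caml_filter = np.zeros((64, 64))
caml_filter[20:40, :] = 1.0            # horizontal band the players marked

# Combined focus: scan weighted by the human-made filter.
focused = scan * caml_filter
print(focused[:20].sum())              # region outside the band is zeroed
```

The result is the "Combined Focus" image: everything outside the human-highlighted band is suppressed before the deep learning model ever sees it.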
  13. Chart: U-Net accuracy (0–1) by epoch (0–23), comparing the

    CAML U-Net against the standard U-Net.
  14. Leela Zero Network • Leela Zero is the open-source

    version of AlphaGo Zero • Currently at or better than human professional level (depending on hardware) • Weights for many generations of the network are available Image: http://zero.sjeng.org/home
  15. Clean the kernels Top 32 centroid kernels are adjusted with

    the following methods: - Sharpening - Forcing symmetry - Removing noise
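The three clean-up methods can be sketched on a single convolution kernel. The concrete operations below (thresholding small values, averaging with the transpose, rescaling contrast) are plausible stand-ins; the talk does not give the exact formulas:

```python
import numpy as np

def clean_kernel(k, noise_floor=0.05):
    """Sketch of the three adjustments applied to a centroid kernel."""
    k = np.where(np.abs(k) < noise_floor, 0.0, k)  # removing noise
    k = (k + k.T) / 2                              # forcing symmetry
    k = k / np.abs(k).max()                        # sharpening (rescale contrast)
    return k

# A noisy, slightly asymmetric 3x3 kernel.
k = np.array([[0.02, 0.30, 0.01],
              [0.31, 0.90, 0.29],
              [0.03, 0.28, 0.02]])
cleaned = clean_kernel(k)
print(cleaned)          # symmetric, denoised, peak normalized to 1
```

The idea is that a human can recognize what a kernel is "trying" to be (an edge detector, a symmetric blur) and nudge it to that ideal directly, instead of waiting for more self-play generations.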
  16. Charts: distance to kernel for kernels 1–7, shown raw

    and normalized.
  17. Chart: average kernel distance to cluster centroid across networks

    LZ_187 through LZ_193.
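The measurement behind these charts can be sketched as follows: flatten each convolution kernel to a vector, find the cluster centroid, and track each kernel's Euclidean distance to that centroid across network generations. The synthetic kernels and single-cluster setup are assumptions for illustration:

```python
import numpy as np

# Stand-in for 32 flattened 3x3 kernels from one network generation.
rng = np.random.default_rng(3)
kernels = rng.normal(size=(32, 9))

# Centroid of the cluster these kernels belong to.
centroid = kernels.mean(axis=0)

# Euclidean distance of each kernel to its centroid, and the average.
dists = np.linalg.norm(kernels - centroid, axis=1)
avg_dist = dists.mean()
print(round(avg_dist, 3))
```

Repeating this for each network (LZ_187 through LZ_193) yields the per-generation curves on the slides: a shrinking distance suggests training is converging toward the cleaned "ideal" kernel.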
  18. Charts: kernel distances across networks LZ_187 through LZ_193,

    comparing the original kernel against the ideal kernel.
  19. Results • Of the 32 kernels that were adjusted: •

    7 had their minimum distance shifted to a newer network • 6 showed flattening in newer networks • Thus 40.6% of the kernels were improved upon • 9.4% were worse than the original • The rest showed no significant change