
Softwareforen Leipzig | User Group Softwarearchitektur - Machine-Learning-basierte Systeme: Eine Einführung für Architekten

AI, and machine learning in particular, is on everyone's lips, and a growing number of companies want to exploit the advantages these approaches offer. Architects therefore need to engage with these topics in more depth. However, the introductions one finds are often very algorithmic, technology-centric, or close to code; the view of the overall architecture of systems that contain AI is frequently missing. In this talk we lift the topics of AI and machine learning to the architecture level. We explain some fundamentals, but above all we follow the typical aspects architects know: functional decomposition, data, deployment, etc. In this way we help to examine these aspects systematically. The talk thus supports software architects in keeping the overview and asking the essential questions.

Dominik Rost

June 18, 2020



Transcript

1. © Fraunhofer IESE. Machine-Learning-basierte Systeme: Eine Einführung für Architekten. Dr. Dominik Rost, Dr. Johannes C. Schneider, 18.06.2020, User Group "Softwarearchitektur und Softwareentwicklung", Software-Foren | Leipzig
2. Dominik Rost: Software Architect. Johannes Schneider: Software Architect. Data Science ¯\_(ツ)_/¯, Artificial Intelligence ¯\_(ツ)_/¯, Autonomous Systems ¯\_(ツ)_/¯, Machine Learning ¯\_(ツ)_/¯
3. Information Sources
   ◼ Company websites, success stories, etc.
   ◼ Architecture?
   ◼ Tech tutorials, mathematical foundations, etc.
4. Goals of this Talk: we elaborate the topic for software architects
   ◼ Create
      ◼ A "big picture" for the architecture of ML-based systems
      ◼ An architecture language for ML-based systems
   ◼ A foundation for
      ◼ Structured thinking about and designing ML-based systems
      ◼ Talking to ML experts and data scientists
      ◼ Judging existing concepts and technologies and filling your own toolbox
5. Approach
   ◼ Top-down: system decomposition according to architecture views (functions, data, deployment, …)
   ◼ Bottom-up: explanation and classification of major concepts (ML concepts, process steps, …)
6. Example: Autonomous Driving. Information partially based on "Tesla Autonomy Day", https://www.youtube.com/watch?v=-b041NXGPZ8
7. Some Terms
   ◼ Artificial Intelligence: simulation of human intelligence
   ◼ Symbolic AI: based on explicit rules provided by humans (system: input data + rules → result)
   ◼ Machine Learning: make machines derive the rules themselves (system: input data + expected output → model)
8. Supervised, Unsupervised, and Reinforcement Learning; Typical ML Tasks
   ◼ Supervised learning: finding mapping rules between input data and expected results, based on labeled data (ML component consumes data + labels)
   ◼ Unsupervised learning: knowledge discovery in unlabeled input data (ML component consumes data only)
   ◼ Reinforcement learning: learning through the consequences of actions in specific environments (ML component exchanges actions and rewards), e.g. for data center cooling
   ◼ Classification: allocation to a class (discrete output), e.g. for spam detection
   ◼ Regression: prediction of a value (continuous output), e.g. for stock market value prediction
   ◼ Clustering: identifying data subgroups (cluster allocation), e.g. for customer analysis
   ◼ Association: finding relations between items (related items), e.g. for recommendation systems
   ◼ Anomaly detection: identifying unusual items / behavior (anomalies), e.g. for fraud detection
   ◼ Dimensionality reduction: finding the most important features (relevant features), e.g. for data analysis
   https://towardsdatascience.com/machine-learning-an-introduction-23b84d51e6d0
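To make the task types above concrete, here is a minimal sketch, not from the talk, using scikit-learn on tiny made-up toy data; the library choice, data, and values are assumptions for illustration only.

```python
# Sketch: classification, regression, clustering, and anomaly detection
# on toy data, assuming scikit-learn is available.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])

# Classification (supervised, discrete output), e.g. spam vs. not spam
y_class = [0, 0, 0, 1, 1, 1]
clf = LogisticRegression().fit(X, y_class)
print(clf.predict([[2.5], [11.5]]))        # classes for two new inputs

# Regression (supervised, continuous output), e.g. value prediction
y_reg = [2.0, 4.0, 6.0, 20.0, 22.0, 24.0]
reg = LinearRegression().fit(X, y_reg)
print(reg.predict([[5.0]]))                # a continuous value

# Clustering (unsupervised, no labels), e.g. customer segmentation
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                          # cluster label per sample

# Anomaly detection (unsupervised), e.g. fraud detection
iso = IsolationForest(random_state=0).fit(X)
print(iso.predict([[2.0], [100.0]]))       # -1 marks suspected anomalies
```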
9. Engineering Traditional Systems vs. ML-based Systems (diagram)
   ◼ Traditional: system takes input data + program and produces output
   ◼ ML-based: system takes input data + expected output and produces a model
   Mix of dimensions :(
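The contrast on this slide can be shown in a few lines; the following is a hedged sketch, not from the talk, with a made-up spam example and scikit-learn as an assumed library. The function names are hypothetical.

```python
# Sketch of the contrast: a traditional system encodes the rule by hand,
# an ML-based system derives a model from input data and expected output.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

emails = np.array([[0], [1], [5], [7]])   # input data: count of "WIN" words
expected = [0, 0, 1, 1]                   # expected output: 1 = spam

# Traditional system: a human writes the program (the rule) explicitly
def is_spam_traditional(win_words: int) -> int:
    return 1 if win_words >= 3 else 0

# ML-based system: input data + expected output -> model (rule is learned)
model = DecisionTreeClassifier().fit(emails, expected)

print(is_spam_traditional(4), model.predict([[4]]))  # both flag it as spam
```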
10. Engineering Traditional Systems vs. ML-based Systems (diagram)
   ◼ Traditional: at DevTime, traditional software engineering (methods & tools) turns requirements into the software system; at RunTime, the software system maps input data to output data.
   ◼ ML-based: SE for ML-based systems (methods & tools) plus data science (methods & tools); at DevTime, ML-Training derives the ML component inside the software system from input data and expected output; at RunTime, ML-Inference maps input data to output data.
11. Scope and Focus w.r.t. AI / ML (diagram)
   ◼ Traditional software engineering (methods & tools) is used to develop the software system.
   ◼ SE for ML-based systems (methods & tools) is used to develop the software system based on ML; data science (methods & tools) is used to develop the ML component within it.
   ◼ Focus: systems with substantial size, complexity, and quality requirements.
12. Out of Scope
   ◼ Foundations of ML
   ◼ Algorithms in detail
   ◼ Topology design of NNs
   ◼ Detailed technologies in ML
   ◼ Data analytics with respective tools (dashboard visualizations)
   ◼ Detailed architecture of autonomous driving systems
13. Data Flow through a Software System Based on ML (diagram): input data → data pre-processing → ML component → data post-processing → output data, all inside the software system based on ML.
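As a minimal sketch of this data flow, not from the talk: the ML component sits between pre- and post-processing inside the surrounding system. All names, thresholds, and the placeholder "model" are hypothetical and only illustrate the wiring.

```python
# Sketch: input data -> pre-processing -> ML component -> post-processing -> output
from dataclasses import dataclass

@dataclass
class MLComponent:
    """Stands in for a trained model; predict() is the inference call."""
    threshold: float = 0.5

    def predict(self, features: list[float]) -> float:
        # placeholder inference: average of the normalized features
        return sum(features) / len(features)

def preprocess(raw: dict) -> list[float]:
    # e.g. select fields, normalize, encode
    return [raw["speed"] / 200.0, raw["distance"] / 100.0]

def postprocess(score: float, component: MLComponent) -> str:
    # e.g. map the raw model output to a domain-level decision
    return "BRAKE" if score > component.threshold else "KEEP_SPEED"

def handle_request(raw: dict, component: MLComponent) -> str:
    """The full path through the software system based on ML."""
    return postprocess(component.predict(preprocess(raw)), component)

print(handle_request({"speed": 180, "distance": 20}, MLComponent()))
```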
14. Multiple ML Components in a System (diagram): a software system based on ML may contain several ML components (ML C1 … ML C4) between input data and output data. Architecture decision: how many ML components, and which ones?
15. Example Autonomous Driving: Alternative Functions (diagram)
   ◼ Alternative A: one large ML component "Driving" between sensors/cameras (with data pre-processing) and the driving actuators (with data post-processing).
   ◼ Alternative B: several ML components such as Driving Area Detection, Obstacle Detection, and Roadsign Detection, plus a "Steering" component, between sensors/cameras and driving actuators.
   ◼ Alternative C: even finer-grained components such as Road Marking Detection, …, between sensors/cameras and driving actuators.
16. Logical Structure of an ML Component (Generalized, Neural Network) (diagram)
   ◼ Config data (the ML model, fixed in inference): weights and biases (trained), topology (layers, neurons, connections), hyperparameters
   ◼ Code / logic: basic neural network logic, learning / training logic
   ◼ Data: training data, input data, output data, activations, state (optional; e.g. in recurrent neural networks with feedback relationships)
   ◼ Architecturally, the ML component can be treated as a black box; the ML component is the unit of training.
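The parts named on this slide can be mapped onto code. The following is a sketch under the assumption of PyTorch (the talk does not prescribe a framework); layer sizes and hyperparameter values are made up.

```python
# Sketch: topology, hyperparameters, trained weights, and the NN logic
# as separate concerns of one ML component (assumption: PyTorch).
import torch
import torch.nn as nn

# Hyperparameters (config data, chosen by the data scientist)
HIDDEN_UNITS = 16
LEARNING_RATE = 1e-3

# Topology: layers, neurons, connections
model = nn.Sequential(
    nn.Linear(4, HIDDEN_UNITS),
    nn.ReLU(),
    nn.Linear(HIDDEN_UNITS, 1),
)

# Weights and biases: the trained part of the config data
print({name: tuple(p.shape) for name, p in model.named_parameters()})

# Learning / training logic lives outside the topology (optimizer, loss)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

# In inference the ML model is fixed: no gradients, weights unchanged
model.eval()
with torch.no_grad():
    output = model(torch.rand(1, 4))   # input data -> output data
print(output)
```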
17. ML Component Example: Topology of a Convolutional Neural Network (CNN). Topology (layers, nodes, relationships). Decisions about the topology of the neural network are mainly made by data scientists; architects need a basic understanding to judge the external implications. https://www.easy-tensorflow.com/tf-tutorials/convolutional-neural-nets-cnns
18. Differences between Types of Learning / Training (diagram)
   ◼ Supervised learning: the ML component is trained with training data plus labels.
   ◼ Unsupervised learning: the ML component is trained with training data only.
   ◼ Reinforcement learning (active): the ML component interacts with an environment (e.g. simulated or real) via actions, observations, and rewards.
19. One Learning / Training Step (diagram, using the ML component structure from slide 16)
   1) Feed selected training data into the NN (feed forward), producing output data.
   2) Calculate the loss function.
   3) Adjust the config data (back propagation): weights and biases, and possibly topology and hyperparameters [by the learning logic or the data scientist].
   https://towardsdatascience.com/how-to-build-your-own-neural-network-from-scratch-in-python-68998a08e4f6
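The three numbered steps correspond to a few lines of framework code. This is a sketch, again assuming PyTorch and using random toy data; it is not the talk's own implementation.

```python
# Sketch of one training step: 1) feed forward, 2) calculate loss,
# 3) adjust weights via back propagation (assumption: PyTorch).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Selected training data (random toy batch: inputs and expected outputs)
inputs = torch.rand(32, 4)
expected = torch.rand(32, 1)

# 1) Feed forward: training data through the network
predictions = model(inputs)

# 2) Calculate the loss between predictions and expected output
loss = loss_fn(predictions, expected)

# 3) Back propagation: compute gradients and adjust weights and biases
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(f"loss after one step: {loss.item():.4f}")
```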
20. Overall Lifecycles / Workflows and the Data Involved (diagram)
   ◼ ML-Training (DevTime): Data Collection → Data Preparation → Model Selection & Training → Model Evaluation → Model Persistence. Large amounts of data, computing-intensive training, exploratory approach.
   ◼ ML-Inference (RunTime): Model Deployment → Data Ingestion → Data Preparation → Inference. Concrete input data; inference is comparably cheap, "just computation".
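The hand-over between the two workflows is the persisted model artifact. The sketch below illustrates this, assuming scikit-learn and joblib; the file name "model.joblib" and the toy data are made up for illustration.

```python
# Sketch: persist the model at DevTime, load and run inference at RunTime.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- ML-Training (DevTime): prepare data, train, evaluate, persist ---
X_train = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y_train = [0, 0, 0, 1, 1, 1]
model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # model evaluation
joblib.dump(model, "model.joblib")                        # model persistence

# --- ML-Inference (RunTime): deploy (here: load), ingest, prepare, infer ---
deployed = joblib.load("model.joblib")                    # model deployment
incoming = np.array([[2.5]])                              # data ingestion/preparation
print("prediction:", deployed.predict(incoming))          # inference
```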
21. Feedback Data and Optimization (Batch Learning) (diagram): the ML-Inference workflow (RunTime: Model Deployment → Data Ingestion → Data Preparation → Inference) feeds new training data from live operation back into the ML-Training workflow (DevTime: Data Collection → Data Preparation → Model Selection & Training → Model Evaluation → Model Persistence), which in turn deploys the optimized model.
22. Example Autonomous Driving. Tesla: data collection from the current fleet, which drives in the real world but not yet autonomously (diagram)
   ◼ New training data from live operation: camera images and driving situations; data labelled from driver behaviour / steering, from explicit user feedback, and from additional sensors (e.g. radar); partially human pre-processed data.
   ◼ Central data collection and learning: Data Preparation → Model Selection & Training & Evaluation → Model Persistence; instructs the cars which data to collect.
   ◼ The optimized driving-functions model is deployed back to the fleet.
23. Example Autonomous Driving (diagram: the "Driving" software system based on ML, with data pre- and post-processing, connected to central data collection and learning: Data Preparation → Model Selection & Training & Evaluation → Model Persistence)
   ▪ Architects need the overall system perspective
   ▪ Strong integration between the runtime system (cars) and development time (learning and improvement)
   ▪ Continuous improvement and deployment
   ▪ Learning from the pre-phase of autonomous driving and continuously during operation
24. Feedback Data and Optimization (Online Learning) (diagram)
   ◼ ML-Inference (RunTime): Data Ingestion → Data Preparation → Inference.
   ◼ ML-Training / Retraining (RunTime): new training data from live operation flows into Model Training / Optimization → Model Persistence → Model Deployment. Learning can happen at defined points in time (typically not after every inference).
   ◼ The data science work is still done at DevTime (Model Selection & Training → Model Evaluation → Model Persistence → Model Deployment): the model is selected and the initial training is done there; at RunTime, the model is only optimized.
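One common way to realize RunTime optimization is incremental training. The sketch below assumes scikit-learn's partial_fit interface and made-up data; the function name retrain_batch is hypothetical, and the talk does not prescribe this mechanism.

```python
# Sketch of online learning: initial training at DevTime, incremental
# optimization at RunTime with new data from live operation.
import numpy as np
from sklearn.linear_model import SGDClassifier

# DevTime: model selection and initial training
model = SGDClassifier(random_state=0)
X_initial = np.array([[1.0], [2.0], [10.0], [11.0]])
y_initial = [0, 0, 1, 1]
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# RunTime: at defined points in time, optimize with new live-operation data
def retrain_batch(model, X_new, y_new):
    """One online-learning step; called e.g. nightly, not after every inference."""
    model.partial_fit(X_new, y_new)

retrain_batch(model, np.array([[3.0], [12.0]]), [0, 1])
print(model.predict([[2.5], [11.5]]))   # inference keeps using the updated model
```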
25. Data Aspects
   ◼ Large amounts of data are needed for training
   ◼ The amount depends on the application area, the available data, and the ML models / algorithms
   ◼ Very different types and formats of data: text, images, video, audio, …
      → require very different treatment
      → result in very different computational load
26. Example Autonomous Driving: Data Aspects in Autonomous Driving
   ◼ Data needs: large data, varied data, real data
   ◼ Collect data from the fleet
   ◼ Create simulation data
   ◼ Cover edge and unusual cases
   Image: https://www.youtube.com/watch?v=-b041NXGPZ8
27. Design Alternatives: Deployment Options (diagram)
   ◼ ML-Training (DevTime) typically runs on powerful training hardware (server); the trained model is persisted and deployed.
   ◼ For ML-Inference (RunTime), and for ML-Training / optimization at RunTime, the ML component can be deployed on the client, on the server, or split across client and server (design alternatives).
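For the server-side option, inference is often exposed as a service. The following sketch assumes FastAPI, pydantic, and joblib; the endpoint, module name "inference_server", and the file "model.joblib" are hypothetical and only illustrate the deployment choice.

```python
# Sketch: "ML component on the server" deployment option. The trained model
# is loaded once at startup and served to clients over an HTTP endpoint.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")   # persisted at DevTime on the training HW

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    # server-side inference; clients only send input data and receive results
    result = model.predict([features.values])
    return {"prediction": result.tolist()}

# Run with: uvicorn inference_server:app
# A client-side alternative would instead ship the model file to each client
# and call model.predict() locally, trading update effort against latency.
```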
28. Example Autonomous Driving: Multiple Instances of the System (Cars) (diagram: many software systems based on ML, each with its own ML component, feeding new training data from live operation to central training hardware (powerful server, ML-Training at DevTime), which deploys the optimized model back for ML-Inference at RunTime)
   ◼ Learning strategies
      ◼ Online learning in each car?
      ◼ Batch learning only in a central system?
      ◼ Can cars communicate?
   ◼ Compare: learning of typing recognition on a mobile phone
29. Available Technologies as Services / Libraries for ML: Different Levels of Reuse (diagram, based on the ML component structure)
   ◼ Fully trained model, immutable (as API or library), e.g. a service for image tagging
   ◼ Fully trained model, retrainable (as API or library), e.g. a service for image tagging
   ◼ Predefined topology (as API or library), e.g. predefined CNNs
   ◼ Basic ML model (as library), e.g. general NN logic
   Trade-off dimensions: degree of freedom, knowledge needed, effort needed.
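To make the reuse levels tangible, here is a sketch assuming TensorFlow/Keras (chosen only for illustration; the talk names no specific library), with made-up layer sizes and class counts.

```python
# Sketch of three reuse levels, from least to most freedom and effort
# (assumption: TensorFlow/Keras).
import tensorflow as tf

# Fully trained model (retrainable): reuse a pretrained network as-is
pretrained = tf.keras.applications.MobileNetV2(weights="imagenet")

# Predefined topology: the same architecture, but trained by you
blank_topology = tf.keras.applications.MobileNetV2(weights=None, classes=10)

# Basic ML model logic: define your own topology from library primitives
custom = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
custom.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# The "fully trained, immutable" level would instead be a hosted API call,
# e.g. a cloud image-tagging service, with no model artifact on your side.
```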
30. Microsoft AI & ML Technologies, mapped to the reuse levels from the previous slide: fully trained immutable models, fully trained retrainable models, predefined topologies, and basic ML models as libraries. Image: https://www.credera.com/wp-content/uploads/2018/04/The-Microsoft-AI-platform.png
31. Quality Attributes in ML-based Systems (1/2)
   ◼ ML as a technology inherently aims more at realizing functionality than at realizing quality attributes (in contrast to e.g. communication middleware, blockchain, …)
   ◼ However, ML can be used to support achieving some quality attributes (e.g. certain aspects of security, for example by detecting attack patterns with ML)
   ◼ The use of ML has a significant impact on quality attributes and thus needs architectural treatment
   ◼ One key aspect: missing comprehensibility / explainability of what is happening in the ML component
      ◼ Safety, reliability: conflicts with safety standards, needs counter-measures
      ◼ UX: explaining to the user what happens / integrating the user into the overall flow
32. Quality Attributes in ML-based Systems (2/2)
   ◼ Fulfil the system's quality attributes while respecting its overall "scale"
      ◼ Performance (latency, throughput), scalability, …
      ◼ Considering the runtime system, but also the devtime / learning system
   ◼ Completely different settings for quality attributes in different systems
      ◼ Playing Go against the world champion: a single complex task needing massive computing power
      ◼ Calculating Amazon's product recommendations: a single, rather simple task, but executed massively in parallel
   ◼ Provide an adequate execution environment: sufficient computing power, sufficient storage capacity
   ◼ Provide the right data with adequate frequency and latency
   ◼ The architect has to know the requirements / implications of the ML algorithm / model
33. Data Science & Software Engineering (diagram, same workflows as before): the ML-Training side (DevTime: Data Collection → Data Preparation → Model Selection & Training → Model Evaluation → Model Persistence; large amounts of data, computing-intensive training, exploratory approach) is mainly the domain of data science, while the ML-Inference side (RunTime: Model Deployment → Data Ingestion → Data Preparation → Inference; concrete input data, inference is comparably cheap, "just computation") is mainly the domain of software engineering.
34. Data Science & Software Engineering (same diagram as the previous slide). Source: https://gotochgo.com/2020/sessions/1414/keys-to-building-machine-learning-systems
35. Conclusion: What does it mean for me? What can I do?
   ◼ Keep an eye on the architectural big picture, even if there is ML in the system ;-)
   ◼ Understand the very nature of ML-based systems
   ◼ Learn from existing systems and their solution approaches
   ◼ Remember the essentials of software architecture
      ◼ Achieving quality attributes
      ◼ Dealing with uncertainty
      ◼ Organizing and distributing work
   ◼ Fill your toolbox with knowledge about patterns and technologies in the ML area
   ◼ Start working with data scientists / data engineers and establish a common language
36. Dr. Dominik Rost, Dr. Johannes C. Schneider, 18.06.2020, User Group "Softwarearchitektur und Softwareentwicklung", Software-Foren | Leipzig. Machine-Learning-basierte Systeme: Eine Einführung für Architekten