
AI Security: Machine Learning, Deep Learning and Computer Vision Security

October 05, 2021


http://deeplab.co
cihan [ at ] deeplab.co



Transcript

  1. DeepLab: Technologies (ML/DL/CV Based)

     • Technologies
       – Go, Python, C/C++, Rust, C#
       – PyTorch, TensorFlow, Keras, scikit-learn
       – OpenCV
       – … and dozens of different tools and equipment…
     • As Web, Mobile, IoT/Edge and Back-End…
     • Cloud Computing
       – AWS Machine Learning
       – Google Cloud Machine Learning
       – IBM Watson Machine Learning
       – Microsoft Azure Machine Learning
       – … and various cloud solutions…
     • Distributed Systems
       – Distributed Databases
       – Distributed Deep Learning
  2. AI Security Machine Learning, Deep Learning and Computer Vision Security

    Cihan Özhan | Founder of DeepLab | Developer, AI Engineer, AI Hacker, Data Master
  3. AI Data Objects • Image • Text • File •

    Voice • Video • Data • 3D Object
  4. ML/DL Applications • Image Classification • Pose Estimation • Face

    Recognition • Face Detection • Object Detection • Question Answering System • Semantic Segmentation • Text Classification • Text Recognition • Sentiment Analysis • Industrial AI • Autonomous Systems • and more…
  5. ML/DL Algorithms • Classification (Supervised) • Clustering (Unsupervised) • Regression

     (Supervised) • Generative Models (Semi-Supervised) • Dimensionality Reduction (Unsupervised) • Reinforcement Learning
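The supervised "Classification" entry above can be made concrete with a tiny example. Below is a minimal nearest-centroid classifier sketched in pure Python; all function names and data points are invented for illustration (real projects would use scikit-learn or similar, as named earlier in the deck):

```python
# Toy supervised classification: labels are known at training time,
# and the model is just one mean feature vector (centroid) per class.

def train_centroids(samples, labels):
    """Compute the centroid (mean feature vector) of each class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared L2)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Two well-separated classes in 2-D feature space
X = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [4.9, 5.1]]
y = ["a", "a", "b", "b"]
model = train_centroids(X, y)
print(predict(model, [0.05, 0.1]))  # → a
print(predict(model, [5.0, 4.9]))   # → b
```

Clustering (unsupervised) differs only in that the labels `y` would not exist and the centroids would have to be discovered from the data itself.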
  6. MLaaS? Machine Learning as a Service. MLaaS is the delivery model

     in which ML/DL algorithms and software are offered as components of cloud computing services. MLaaS = (SaaS + [ML/DL/CV])
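In practice, an MLaaS interaction is usually just an HTTP request carrying serialized inputs to a hosted model. The endpoint URL and payload schema below are hypothetical (each provider, such as AWS or Azure, defines its own request format); the sketch only shows the client-side shape:

```python
import json

# Hypothetical MLaaS endpoint and schema, invented for illustration.
ENDPOINT = "https://api.example-mlaas.com/v1/models/sentiment:predict"

def build_request(texts):
    """Serialize client inputs into a JSON prediction request body."""
    return json.dumps({"instances": [{"text": t} for t in texts]})

body = build_request(["great product", "terrible support"])
print(body)
```

A real client would POST `body` to the provider's endpoint with its authentication headers and parse the returned predictions.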
  7. Model Lifecycle: Machine Learning Model Development Lifecycle

     • We start here: the ML model preparation process.
     • The chore, but imperative: preparing the data!
     • We prepare the model, then train it with data (cloud or on-premise).
     • We test the trained model with test data.
     • The trained model is packaged for the programmatic environment.
     • Post-release: the model is constantly monitored.
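The lifecycle stages above can be sketched as plain functions, one per stage. The "model" here is a deliberately trivial stand-in (a vocabulary of seen tokens), invented only to show how the stages chain together:

```python
# One function per lifecycle stage; names mirror the slide's steps.

def prepare_data(raw):
    """Data preparation: clean and normalize the raw records."""
    return [r.strip().lower() for r in raw if r.strip()]

def train(data):
    """Training: here the 'model' is just a vocabulary of seen tokens."""
    return {"vocab": set(data)}

def evaluate(model, test_data):
    """Testing: fraction of test items the model has seen before."""
    hits = sum(1 for t in test_data if t in model["vocab"])
    return hits / len(test_data)

def package(model):
    """Packaging: freeze the model for the programmatic environment."""
    return {"vocab": frozenset(model["vocab"]), "version": 1}

def monitor(model, live_inputs):
    """Post-release monitoring: flag inputs the model has never seen."""
    return [x for x in live_inputs if x not in model["vocab"]]

raw = [" Cat ", "dog", "", "cat"]
model = package(train(prepare_data(raw)))
print(evaluate(model, ["cat", "bird"]))  # → 0.5
print(monitor(model, ["cat", "fish"]))   # → ['fish']
```

The monitoring stage matters for security too: several of the failure modes on the next slide (e.g. distributional shifts, poisoning) first show up as drift in production inputs.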
  8. Basic Security Issues

     Intentional Issues:
     • Perturbation Attack
     • Poisoning Attack
     • Model Inversion
     • Membership Inference
     • Model Stealing
     • Reprogramming ML System
     • Adversarial Example in Physical Domain
     • Malicious ML Provider Recovering Training Data
     • Attacking the ML Supply Chain
     • Backdoor ML
     • Exploit Software Dependencies

     Unintentional Issues:
     • Reward Hacking
     • Side Effects
     • Distributional Shifts
     • Natural Adversarial Examples
     • Common Corruption
     • Incomplete Testing
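As a concrete taste of the "Perturbation Attack" entry, here is a toy FGSM-style perturbation against a linear scorer, in pure Python. This is illustrative only (real attacks compute gradients of deep models); all weights and inputs are invented:

```python
# For a linear model, the gradient of the score w.r.t. the input x
# is simply the weight vector w, so the attack direction is -sign(w).

def score(w, b, x):
    """Linear decision score: positive → class 1, negative → class 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(w, x, eps):
    """Shift each feature by eps in the direction that lowers the score."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

w, b = [1.0, -2.0], 0.0
x = [1.0, 0.2]
x_adv = fgsm(w, x, eps=0.5)
print(score(w, b, x))      # positive → class 1
print(score(w, b, x_adv))  # negative → the small perturbation flipped the class
```

The point of the example: a perturbation bounded by `eps` per feature, invisible in the data, is enough to flip the decision, which is exactly why perturbation attacks head the "intentional" column.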
  9. Exploit Software Dependencies

     • Exploits vulnerabilities in the software the system depends on, not in the algorithms.
     • Prevention:
       – Security scans
       – Security reports
       – Be careful with wrappers and pre-built environments
       – Use fewer dependencies
       – Dependency management tools:
         • Snyk: snyk.io
         • Python Poetry: python-poetry.org
         • Bandit: a tool designed to find common security issues in Python code (https://github.com/PyCQA/bandit)
         • pyup.io/safety
         • requires.io
         • etc…
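One low-tech complement to the tools listed above (Snyk, Safety, Bandit, etc.) is verifying at startup that installed dependencies match pinned, audited versions. A minimal sketch using Python's standard `importlib.metadata`; the package names and pins used here are invented, not real audit results:

```python
# Fail fast when an installed dependency drifts from an audited pin.
from importlib import metadata

def check_pins(pins):
    """Return (name, found, expected) for every missing or mismatched pin.

    pins maps package name -> required version string,
    or None to accept any installed version.
    """
    problems = []
    for name, expected in pins.items():
        try:
            found = metadata.version(name)
        except metadata.PackageNotFoundError:
            problems.append((name, None, expected))
            continue
        if expected is not None and found != expected:
            problems.append((name, found, expected))
    return problems

# Illustrative pin set: this package name is deliberately bogus,
# so the check reports it as missing.
print(check_pins({"definitely-not-installed-pkg": "1.0"}))
```

Real deployments would pair this with hash-pinned lock files (e.g. Poetry's `poetry.lock` or `pip install --require-hashes`) so the check covers artifact integrity, not just version numbers.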
  10. Tool/Library Security (TensorFlow)

      • TensorFlow (and similar tools) is designed for internal communication, not for running on untrusted networks.
      • These tools (ModelServer etc.) have no built-in authorization.
      • They can read and write files, and send and receive data over the network…
      • (!) TensorFlow Models as Programs
      • (!) Running Untrusted Models
      • (!) Accepting Untrusted Inputs
      https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md
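The "Running Untrusted Models" warning generalizes beyond TensorFlow: a serialized model file can amount to executable code. Python's `pickle` makes the risk easy to demonstrate, and a restricted unpickler that refuses all global lookups is one defensive sketch (TensorFlow's SavedModel is a different format; this only illustrates the class of risk):

```python
import io
import pickle

class Evil:
    def __reduce__(self):
        # On unpickling, this calls print() — a harmless stand-in
        # for arbitrary code execution in a crafted "model" file.
        return (print, ("arbitrary code ran!",))

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Refuse every global lookup: no callables can be resolved,
        # so __reduce__-based payloads fail before executing.
        raise pickle.UnpicklingError(f"blocked: {module}.{name}")

payload = pickle.dumps(Evil())  # what an attacker might ship as a "model"

try:
    RestrictedUnpickler(io.BytesIO(payload)).load()
    blocked = False
except pickle.UnpicklingError:
    blocked = True
print(blocked)  # True: the payload was rejected before running anything
```

The same mindset applies to any model format: treat model files from outside your trust boundary as programs, load them in sandboxed or least-privilege environments, and validate inputs before they reach the model.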
  11. Cihan Özhan Links • cihanozhan.com • linkedin.com/in/cihanozhan • medium.com/@cihanozhan •

    youtube.com/user/OracleAdam • twitter.com/UnmannedCode • github.com/cihanozhan Contact • [email protected]