
Machine Learning as a Service

Often, the most convenient way to deploy a machine learning model is as an API. An API makes the model accessible from many programming environments, and it decouples the development and deployment of the model from its use.

However, building a good API is hard. It involves many nitty-gritty details, and many of them have to be repeated every time an API is built. It requires understanding a web framework and worrying about data validation, authentication, deployment and so on. It is also important to have a client library so that the API can be accessed easily. And if you ever plan to use the API directly from JavaScript, you need to worry about cross-origin resource sharing (CORS) as well.

This talk demonstrates how deploying machine learning models as APIs can be made fun by using the right programming abstractions.

Presented at:

- PyCon UK 2018 - Sep 16, 2018
- PyCon DE 2018 - Oct 25, 2018

Anand Chitipothu

September 16, 2018

Transcript

  1. Machine Learning as a Service: How to deploy ML Models as APIs without going nuts. Anand Chitipothu, @anandology, rorodata
  2. Overview — Overview of Machine Learning — Why Machine Learning as a Service — Traditional Approach — Importance of Right Abstractions — Our Approach — Future Work — Summary
  3. Firefly! An open-source tool built to turn Python functions into APIs. https://github.com/rorodata/firefly

         pip install firefly-python
  4. Step 1: Write your function

         # sq.py
         def square(n):
             """compute square of a number."""
             return n*n

     You just need to make sure the arguments and return value are JSON-friendly.
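The JSON-friendliness requirement from the slide can be checked directly: an API-ready function is one whose arguments and result survive a JSON round trip, since that is what happens to them on the wire. A minimal sketch:

```python
import json

# the function from sq.py
def square(n):
    """Compute the square of a number."""
    return n * n

# simulate what the API layer does: decode a JSON request,
# call the function with keyword arguments, encode the result
args = json.loads('{"n": 4}')
result = square(**args)
print(json.dumps(result))  # -> 16
```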
  5. Step 2: Run it as an API

         $ firefly sq.square
         http://127.0.0.1:8000/
         ...
  6. Step 3: Use it

         >>> import firefly
         >>> api = firefly.Client("http://127.0.0.1:8000/")
         >>> api.square(n=4)
         16

     Firefly comes with a built-in client for accessing the API.
  7. Behind the Scenes: It is a RESTful API!

         $ curl -d '{"n": 4}' http://127.0.0.1:8000/square
         16
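The same call can be made from any HTTP client, not just curl or the firefly client. A sketch using only the Python standard library, assuming the server from step 2 is running locally (the helper names here are mine, not part of firefly):

```python
import json
from urllib import request

def build_payload(**kwargs):
    """Firefly endpoints take keyword arguments as a single JSON object."""
    return json.dumps(kwargs).encode()

def call(endpoint, base_url="http://127.0.0.1:8000", **kwargs):
    """POST a JSON body to a firefly endpoint and decode the JSON reply."""
    req = request.Request(base_url + "/" + endpoint,
                          data=build_payload(**kwargs),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# with the server running, call("square", n=4) matches the curl invocation above
```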
  8. What about ML models? Write your predict function and run it as an API.

         # image_recognition.py
         import joblib
         model = joblib.load("model.pkl")

         def predict(image):
             ...
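Fleshed out, the pattern is: load the model once at import time, then expose a predict function with JSON-friendly inputs and outputs. A self-contained sketch, where stdlib pickle and a toy model stand in for joblib and the real model.pkl so the example runs on its own:

```python
# image_recognition.py (sketch) -- ToyModel stands in for a real classifier
import pickle

class ToyModel:
    """Stand-in model: labels every image 'cat'."""
    def predict(self, images):
        return ["cat" for _ in images]

# a real project would ship a trained model.pkl and only do:
#     model = joblib.load("model.pkl")
with open("model.pkl", "wb") as f:
    pickle.dump(ToyModel(), f)

with open("model.pkl", "rb") as f:
    model = pickle.load(f)   # loaded once at import, not once per request

def predict(image):
    """The function firefly exposes; image must be JSON-friendly,
    e.g. a nested list of pixel values."""
    return model.predict([image])[0]
```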
  9. Deployment Platform (1/2): Write a config file in the project specifying what services to run.

         project: image-recognition
         runtime: python3-keras
         services:
             - name: api
               function: image_recognition.predict
  10. Deployment Platform (2/2)

         $ roro deploy
         ...
         Started 1 service.
         api: https://image-recognition--api.rorocloud.io/
         ...
  11. Behind the Scenes — It builds the Docker images — Starts the specified services — Provides URL endpoints with HTTPS
  12. Custom Dependencies? Write a requirements.txt file! The platform borrows the repo2docker interface from the Binder project.
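For example, a requirements.txt for the image-recognition project might look like this (the package versions are illustrative, not from the talk):

```text
# requirements.txt
keras==2.2.4
joblib==0.12.5
pillow==5.3.0
```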
  13. Cross-Origin Resource Sharing: Just add an additional field in your service description.

         services:
             - name: api
               function: image_recognition.predict
               cors_allow_origins: *
  14. Instance Size & Scale: Specify the instance size and scale to deploy.

         services:
             - name: api
               function: image_recognition.predict
               cors_allow_origins: *
               size: S2
               scale: 4
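Putting the pieces from the last few slides together, the full config file would look roughly like this (a sketch; the filename is assumed, and the wildcard is quoted here because a bare * starts an alias in strict YAML):

```yaml
# roro.yml (filename assumed) -- one API service with CORS, size and scale
project: image-recognition
runtime: python3-keras
services:
    - name: api
      function: image_recognition.predict
      cors_allow_origins: "*"    # quoted: a bare * is not valid YAML
      size: S2
      scale: 4
```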
  15. Open-Source Version: Built a light version of the platform to deploy to your own server. http://github.com/rorodata/rorolite — Same API!
  16. Firefly, Upcoming Features — Extensible API — Integration with Python 3 type hints — Automatic API documentation — Support for gRPC
  17. Firefly, New API (WIP)

         app = Firefly()

         @app.function
         def detect_outliers(df: pd.DataFrame) -> pd.DataFrame:
             ...

         @app.function
         def detect_faces(image: Image) -> List[Image]:
             ...

         if __name__ == "__main__":
             app.run()
  18. Summary — Deploying machine learning models need not be very complex! — The key is to focus on reducing the complexity of the interface. — Our work is open-source, please build on top of it!