
Systematic and recomputable comparison of multi-cloud management platforms


With the growth and evolution of cloud applications, more and more architectures use hybrid cloud bindings to use virtual resources optimally with respect to pricing policies and performance. This process has led to the creation of multi-cloud management platforms as well as abstraction libraries. At the moment, many (multi-)cloud management platforms (CMPs) are designed to cover the functional requirements. Along with the growing adoption and industrial impact of such solutions, there is a need for a comparison and test environment which automatically assesses and compares existing platforms and helps in choosing the optimal one. This paper focuses on the creation of a suitable testbed concept and an actual extensible software prototype which makes multi-cloud experiments repeatable and reusable by other researchers. The work is evaluated by an exemplary comparison of four CMPs bound to AWS, showcasing standardised output formats and evaluation criteria.



Transcript

  1. Systematic and recomputable comparison of multi-cloud management platforms
     Oleksii Serhiienko, Josef Spillner
     10th IEEE CloudCom, Cyprus, December 2018
     Service Prototyping Lab @ Zurich University of Applied Sciences, Switzerland
  2. Cloud management platform (CMP)
     - Growing needs of multi-cloud applications and hybrid cloud → CMPs have gained much popularity
     - Many solutions supporting different needs and various platforms
     - CMPs can be:
       - Standalone platforms
       - Multi-cloud API libraries
       - Website-based
  3. CMP evaluation
     - Evaluation criteria:
       - Time (s)
       - Memory consumption (KB)
       - CPU time (s)
     - One platform of each type is taken for the proof of concept
     - Flexible software (a measurement sketch follows this slide):
       - Easy to add new platforms
       - Easy to add new evaluation criteria
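
     As a rough illustration of the three criteria above, the following hedged
     Python sketch shows one way such measurements could be captured around a
     CMP operation; the function and dictionary keys are illustrative and not
     taken from the CMP² code base.

        import resource
        import time

        def measure(operation, *args, **kwargs):
            """Run a platform operation and return (result, metrics)."""
            usage_before = resource.getrusage(resource.RUSAGE_SELF)
            wall_start = time.perf_counter()

            result = operation(*args, **kwargs)

            wall_elapsed = time.perf_counter() - wall_start
            usage_after = resource.getrusage(resource.RUSAGE_SELF)

            metrics = {
                "time_sec": wall_elapsed,
                "cpu_sec": (usage_after.ru_utime + usage_after.ru_stime)
                           - (usage_before.ru_utime + usage_before.ru_stime),
                # ru_maxrss is reported in kilobytes on Linux
                "memory_kb": usage_after.ru_maxrss,
            }
            return result, metrics
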
  4. Related work
     - CloudCom 2015: An Empirical Study for Evaluating the Performance of Jclouds
       - Measures the Jclouds multi-cloud toolkit and compares it to the native AWS library
       - Download/upload of a file
     - Critical evaluation of the Jclouds and Cloudify abstract APIs against EC2, Azure and HP Cloud
       - Creates a prototype tool that evaluates Jclouds and Cloudify
  5. Requirements and approach
     - CoMParable CMPs (CMP²) requirements:
       - Comfort
       - Statistical correctness
       - Reproducibility
       - Extensibility
     - Set of decorators (a sketch follows this slide):
       - Timing
       - Docker consumption
       - Python consumption
       - Tagging
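
     The decorator-based approach named above can be pictured as in the
     following hedged sketch; only the timing decorator is shown, and the
     names are illustrative rather than taken from the CMP² implementation.
     The Docker consumption, Python consumption and tagging decorators would
     wrap platform calls in the same way and could be stacked on one function.

        import functools
        import time

        def timing(func):
            """Attach the wall-clock duration of each call to the wrapper."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                result = func(*args, **kwargs)
                wrapper.last_duration = time.perf_counter() - start
                return result
            return wrapper

        @timing
        def create_instance(driver, name):
            # Placeholder for a platform-specific provisioning call;
            # the actual arguments depend on the platform under test
            return driver.create_node(name=name)
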
  6. Platform choice
     - Web platforms: CloudcheckR
     - Libraries: Libcloud (usage example below)
     - Containers:
       - Composed containers: Mist.io
       - Single container: ManageIQ
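
     Since the exemplary comparison is bound to AWS, a typical operation
     exercised through the Libcloud library looks roughly like the following;
     the credentials and region are placeholders.

        from libcloud.compute.providers import get_driver
        from libcloud.compute.types import Provider

        # Obtain the EC2 driver class and connect with AWS credentials
        EC2Driver = get_driver(Provider.EC2)
        driver = EC2Driver("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", region="eu-west-1")

        # A typical measured operation: enumerate the virtual machines in the account
        for node in driver.list_nodes():
            print(node.name, node.state)
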
  7. Experimental setup
     - Hardware:
       - RAM: 4 GB
       - vCPUs: 2, clocked at 2500 MHz
       - Disk: 40 GB
       - OS: Ubuntu 16.04.4 LTS
     - Software:
       - Mist.io Cloud Management Platform: version 2.0
       - ManageIQ: gaprindashvili-3
       - CloudcheckR: last update May 21, 2018
       - Apache Libcloud: version 2.3.0
  8. Results
     - During this work, a testbed for recomputable CMP experiments was developed
     - Easy to extend
     - Generates graphs and LaTeX tables (output sketch below)
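
     The standardised output could be rendered, for instance, as a LaTeX
     table built from the collected metrics; the sketch below uses placeholder
     values, not results from the paper.

        # Per-platform metrics as collected by the testbed (placeholder values)
        metrics = {
            "Libcloud": {"time_sec": 1.2, "cpu_sec": 0.4, "memory_kb": 51200},
            "Mist.io":  {"time_sec": 2.8, "cpu_sec": 0.9, "memory_kb": 98304},
        }

        rows = "\n".join(
            f"{name} & {m['time_sec']:.2f} & {m['cpu_sec']:.2f} & {m['memory_kb']} \\\\"
            for name, m in metrics.items()
        )

        table = (
            "\\begin{tabular}{lrrr}\n"
            "Platform & Time (s) & CPU (s) & Memory (KB) \\\\\n\\hline\n"
            f"{rows}\n"
            "\\end{tabular}"
        )
        print(table)
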
  9. Conclusion
     - A test environment and an architecture for multi-platform testing were created
     - The architecture is modular and flexible, allowing low-effort extension
     - All the data and findings are published as open source
     - Open data keeps the study reusable and repeatable
  10. One more thing... Last-minute registration still possible…
      140+ talks, 190+ attendees
      11th IEEE/ACM International Conference on Utility and Cloud Computing
      Zurich | CH | Dec 17-20, 2018
      http://www.ucc-conference.org/