Learning more about the ingredients of cloud applications

At the intersection of software engineering, cloud computing and data
science, researchers have recently shown increased interest in the tangible
software artefacts used to implement microservice-based applications and
cloud applications in general. These encompass Lambda functions, Helm charts,
Docker containers and other pieces of code with associated metadata and
deployment configuration.

The talk is divided into two parts. First, it presents our plan for a global
observatory of these artefacts and invites you to join the effort. Second, it
dives deeper into Helm charts specifically, their capabilities and quality
issues, and presents a developer-friendly (CI/CD-integrated) quality assurance
tool simply called HelmQA.

Transcript

  1. Zürcher Fachhochschule: Learning more about the ingredients of cloud applications
    Josef Spillner <[email protected]>, Service Prototyping Lab (blog.zhaw.ch/splab), Feb 07, 2019 | CERN Computing Seminar
  2. The Service Prototyping Lab: Activities (http://blog.zhaw.ch/splab/)
    We, around the main theme "How can we best develop lucrative software apps in the Internet, in the cloud and in post-cloud systems?":
    • research: with companies in Europe and in the region (R&D, innovation projects)
    • communicate: at prestigious international conferences
    • teach/qualify: at the level of BSc, MSc, CAS, PhD
    • organise: open events on trend topics (e.g. Serverless Computing, CNA)
    The team: 9 active researchers (4 senior, permanent), from 8 countries.
  3. This talk: Two parts with details on our work [slightly unusual but suitable choice]
    1) "A Global Observatory for Small Pieces of Software" (2019 material)
    2) "Helm Charts Quality Analysis & HelmQA" (2018/19 material)
  4. A Global Observatory for Small Pieces of Software
    Josef Spillner <[email protected]>, Service Prototyping Lab, Zurich University of Applied Sciences, CH (previously: URV, ES + IBM, CH)
    CERN, CH, 07.02.2019
  5. The necessity of research infrastructure
  6. The necessity of research infrastructure
  7. Research infrastructure in computer science, esp. cloud computing
    • (Distributed) execution: PlanetLab, BOINC, CloudLab, EGI Federated Cloud, GENI, EOSC
    • Data (→ DMPs): singular (e.g. many MSR papers) vs. continuous/streams (few!)
    • Other domains: medical, archeology, smart cities, microorganisms + generic (e.g. GeRDI)...
    • Emerging: EURISE Network, RSE communities
    • Experiments: industry benchmarks (e.g. YCSB), CloudSim, ..., mining/data science (e.g. "pull req dupes"), DockerFinder/DockerAnalyser
    • Subjects: components, applications, services, repositories, code, artefacts
  8. Could we do better? And do we have to?
    Artefact analysis as a service:
    • continuously operated
    • more productive researchers
    • global & distributed/decentralised
    • backup/replication built-in
    • non-falsifiability
    • "bus factor", longevity & sustainability
    • different funding regions
    • starting informally beats dead-end projects
    Impact:
    • setting gold standards
    • becoming the first point of interest for data on microservices
    • better comparability → reproducible research
    • reflection of evolution & change patterns
  9. Racing against the clock!
    Microservices timeline: where are we? At the beginning?
    • Some original data are versioned... but others are not: irreversible losses.
    • Valuable raw data, historic records, currently unknown mining & exploitation techniques.
    In our case: crawling and preservation first, infrastructure later.
  10. Introducing MAO-MAO
    GOMIA: global observatory software sketch (at the moment, rather scattered scripts; see the sketch below).
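    As an illustration of what one of these "scattered scripts" might look like, here is a minimal crawl-and-preserve sketch in Python; the index URL and the JSON response shape are hypothetical placeholders, not the actual GOMIA code.

        # Minimal crawl-and-preserve sketch for an artefact observatory.
        # INDEX_URL and the response shape are hypothetical placeholders.
        import json
        import time
        import urllib.request

        INDEX_URL = "https://example.org/charts/index.json"  # hypothetical artefact index

        def snapshot(url, outfile):
            """Fetch the raw metadata index and preserve it with a timestamp."""
            with urllib.request.urlopen(url) as response:
                payload = json.load(response)
            record = {
                "fetched_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "source": url,
                "payload": payload,
            }
            with open(outfile, "w") as f:
                json.dump(record, f, indent=2)

        if __name__ == "__main__":
            snapshot(INDEX_URL, "snapshot-%d.json" % int(time.time()))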
  11. Possible repositories
    • GitHub (e.g. for plain docker-compose)
    • CNAB.io bundles
    (A discovery sketch follows.)
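    To give a feel for how such repositories could be discovered, the following sketch queries GitHub's public repository search API. The endpoint exists; the query string is only an illustrative guess at finding docker-compose projects, and unauthenticated requests are heavily rate-limited.

        # Sketch: discover candidate docker-compose repositories via the
        # GitHub repository search API (real endpoint; query illustrative).
        import json
        import urllib.request

        def search_compose_repos(per_page=10):
            url = ("https://api.github.com/search/repositories"
                   "?q=docker-compose&per_page=%d" % per_page)
            request = urllib.request.Request(
                url, headers={"Accept": "application/vnd.github+json"})
            with urllib.request.urlopen(request) as response:
                result = json.load(response)
            return [item["full_name"] for item in result.get("items", [])]

        if __name__ == "__main__":
            for name in search_compose_repos():
                print(name)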
  12. Collaborators network
    Informal collaboration (phases 2+3):
    • all collaborators, no beneficiaries; equal terms (i.e. you can invite others)
    • light-weight coordination by SPLab, incl. some contributions (MSc, PhD)
    Future funding (phase 3): subsets of all collaborators.
    Roles: DEV, OPS, EXP.
  13. Current status
  14. Current status
    AWS SAR results (phase 1):
    • ca. 400 functions, growing by ca. 16/month (7%)
    • both SaaS- and PaaS-level*
    • metadata size: 7 MB (code size several GB, e.g. precompiled functions, redundant repository specifications)
    Helm charts results (phase 1):
    • ca. 250 charts, growing by ca. 17/month (7%)
    • more PaaS-level*
    • metadata+charts size: 56 MB
    *unconfirmed, just a first impression
    Experiment on developer acceptance of auto-generated bug reports + improvements:
    • no useful results thus far; limited uptake of the idea with developers
    • not involving ethics board etc.
  15. Thank you! Here are some pointers.
    Work done:
    • Helm charts & AWS Lambda function assessment (2x2 prototypes + data sets, see github.com/serviceprototypinglab)
    • some docker-compose & CNAB preliminaries
    • preprint on Helm available (arXiv:1901.00644); preprint on Lambda in preparation
    Website: https://mao-mao-research.github.io/
  16. Zürcher Fachhochschule: Helm Charts Quality Analysis & HelmQA
    Josef Spillner <[email protected]>, Service Prototyping Lab (blog.zhaw.ch/icclab), Feb 07, 2019 | CERN Computing Seminar
  17. Artefact Repositories: The backbone of today's (cloud) software development
    Digital Artefact Observatory: monitoring repositories for tomorrow's critical software ecosystems (2179+ artefacts, 381+ artefacts, 261+ artefacts across the observed repositories).
  18. Quality Assurance in Repositories
    Example: Debian QA → Tracker (tracker.debian.org; pre-2016: packages.qa.debian.org)
  19. The Idea: HelmQA
    Helm: package manager for the cloud-native landscape.
    • Created in 2015 as a Deis (now Microsoft) installer
    • Out of the K8s incubator in 2017: 80 stable charts, 53 maintainers; KubeApps UI/marketplace in 2017
    • Helm summit in February 2018 with 179 participants (out of 250)
    • CNCF top-level (incubator) project since May 2018: 177 stable charts, 142 maintainers, 50k monthly downloads
    • >30k charts on GitHub and similar platforms?
    Quality checks (v2.8): helm lint, »This command takes a path to a chart and runs a series of tests to verify that the chart is well-formed.« Is that sufficient? (A sketch of a complementary check follows.)
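    To illustrate the kind of check that goes beyond helm lint's well-formedness tests, here is a toy Python check for recommended Chart.yaml metadata fields; the field list is illustrative and not HelmQA's actual rule set (requires PyYAML).

        # Toy chart metadata check, illustrating quality criteria beyond
        # `helm lint` well-formedness. Field choices are illustrative,
        # not HelmQA's actual rule set. Requires PyYAML.
        import sys
        import yaml

        RECOMMENDED_FIELDS = ["description", "maintainers", "sources", "home"]

        def check_chart(chart_yaml_path):
            """Return the recommended Chart.yaml fields that are missing or empty."""
            with open(chart_yaml_path) as f:
                chart = yaml.safe_load(f) or {}
            return [field for field in RECOMMENDED_FIELDS if not chart.get(field)]

        if __name__ == "__main__":
            missing = check_chart(sys.argv[1])
            if missing:
                print("missing metadata:", ", ".join(missing))
                sys.exit(1)
            print("all recommended metadata present")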
  20. HelmQA Implementation
    Research workflow: KubeApps Hub / chart in Git repo / chart folder or tgz → evolution repo (author sets, change rates, dupe stats) → Chartsub KB (learn/apply) → outputs as CSV, JSON, Dot, PNG, plus programmatic nudge scripts.
    Technologies: Python, Bash, Git (still plenty of scripts... ~1500 SLOC), Docker, OpenShift, Helm.
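    In the spirit of the "dupe stats" step, a flattening sketch over chart values.yaml files could look as follows; HelmQA's actual duplicate analysis differs, and the glob pattern is an assumption about the chart layout (requires PyYAML).

        # Sketch of duplicate-value statistics across chart values.yaml files.
        # HelmQA's real analysis differs; the glob pattern is an assumed layout.
        import collections
        import glob
        import yaml

        def flatten(node, prefix=""):
            """Yield (dotted.key, value) pairs for all scalar leaves of a YAML tree."""
            if isinstance(node, dict):
                for key, value in node.items():
                    yield from flatten(value, "%s%s." % (prefix, key))
            elif isinstance(node, list):
                for i, value in enumerate(node):
                    yield from flatten(value, "%s%d." % (prefix, i))
            else:
                yield prefix.rstrip("."), node

        def dupe_stats(pattern="charts/*/values.yaml"):
            """Count how many charts define each identical (key, value) pair."""
            counter = collections.Counter()
            for path in glob.glob(pattern):
                with open(path) as f:
                    data = yaml.safe_load(f) or {}
                for key, value in set(flatten(data)):
                    counter[(key, str(value))] += 1
            return {kv: n for kv, n in counter.items() if n > 1}

        if __name__ == "__main__":
            for (key, value), n in sorted(dupe_stats().items(), key=lambda e: -e[1]):
                print("%dx  %s = %s" % (n, key, value))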
  21. HelmQA Implementation Revisited
    CI/CD workflow: chart in Git repo / chart folder or tgz → HelmQA SaaS or HelmQA local (via Docker) → JSON report + fail/success status.
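    A CI step around the SaaS variant could look like the sketch below; the /livecheck path follows the API example given on the "What's Next" slide, but the host name and the JSON report shape are assumptions, not the documented HelmQA interface.

        # CI step sketch: query a HelmQA service and fail the build on issues.
        # The /livecheck path mirrors the talk's example; the host and the
        # JSON report shape are assumptions, not the documented HelmQA API.
        import json
        import sys
        import urllib.request

        HELMQA_HOST = "https://helmqa.example.org"  # placeholder deployment
        REPO = "github.com/appuio/charts"           # repository from the talk's example

        def livecheck(repo):
            url = "%s/livecheck/%s" % (HELMQA_HOST, repo)
            with urllib.request.urlopen(url) as response:
                return json.load(response)

        if __name__ == "__main__":
            report = livecheck(REPO)
            issues = report.get("issues", [])  # assumed report field
            for issue in issues:
                print("HelmQA:", issue)
            sys.exit(1 if issues else 0)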
  22. Statistics Revisited
    University of Zurich student project «Change patterns in cloud software engineering», on a static 3-month dataset. [NOTE: preliminary results, not fully reviewed]
    [Figure: change statistics per requirement, showing a maintainer quality gate effect]
  23. What's Next (updated from June '18)
    • More feedback on auto-generated suggestions + diffs to maintainers
    • Full publication with all details to be submitted around mid-August 2018 (preprint publicly available @ arXiv) [arXiv:1901.00644]
    • Workflow integration (e.g. for APPUiO deployments): API, e.g. /livecheck/github.com/appuio/charts; long-term: plugin to "helm lint" and policy enforcement within "kubeapps"?
    • More metrics, e.g. absence of source URL
    • Going back to a "similarity" index
    • Getting mandate/feedback/ideas from Swiss businesses or any community users!
    Website incl. videos: https://serviceprototypinglab.github.io/helmqa/