> Loading users/teams included a service call to a 3rd-party provider to load profile pictures
> Login was not able to scale gracefully to multiple teams
> Tests failed to simulate the situation
Workloads in production are different from what profilers and tests can simulate, either locally or in staging environments. Especially in the cloud, scalable infrastructure requires approaches that leverage information gathered at production runtime and feed it back to software engineers during development.
How can runtime information be leveraged for decision making in software development for the cloud? How can we use runtime information to support data-driven decision making for engineers during software development?
“The Making of Cloud Applications” - FSE’15 (comic adapted from https://xkcd.com/1423/)
> Interviews with 25 software developers who deploy in the cloud
> Survey with 294 responses
[Comic: “Developer” and “Me” discussing how to solve problems that have been detected in production. “Do you look at any metrics?” “Nah, I’d rather go by intuition.”]
Preprint: https://peerj.com/preprints/985/
Blog post on #themorningpaper: https://blog.acolyer.org/2015/11/10/runtime-metric-meets-developer-building-better-cloud-applications-using-feedback/
Scaling Runtime Performance Matching
Matching runtime information to source code is part of the daily developer workflow. It cannot be slow!
What could impede the development workflow? Every build needs to:
> Identify and extract relevant AST nodes
> Retrieve information for each relevant node
> Execute inference for new nodes with unknown properties
> Propagate predicted entities
(see the sketch below)
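To make these four steps concrete, here is a minimal sketch of such a build pass, assuming Eclipse JDT as the AST library (PerformanceHat is an Eclipse plugin). `MetricsCache` and `InferenceModel` are hypothetical stand-ins for the local feedback cache and the inference function, not the actual PerformanceHat API.

```java
import java.util.Optional;

import org.eclipse.jdt.core.dom.ASTVisitor;
import org.eclipse.jdt.core.dom.CompilationUnit;
import org.eclipse.jdt.core.dom.MethodInvocation;

// Hypothetical stand-ins for the local feedback cache and the inference function.
interface MetricsCache {
    Optional<Double> averageLatencyMs(String methodKey);       // runtime info per node
    void propagateToParents(MethodInvocation node, double ms); // push values upwards
}

interface InferenceModel {
    double predictLatencyMs(MethodInvocation node); // prediction for unseen nodes
}

public class FeedbackBuildPass extends ASTVisitor {

    private final MetricsCache cache;
    private final InferenceModel inference;

    public FeedbackBuildPass(MetricsCache cache, InferenceModel inference) {
        this.cache = cache;
        this.inference = inference;
    }

    // Step 1: in this sketch, method invocations are the "relevant AST nodes".
    @Override
    public boolean visit(MethodInvocation node) {
        String key = node.resolveMethodBinding() != null
                ? node.resolveMethodBinding().getKey() // stable key if bindings resolve
                : node.getName().getIdentifier();      // fallback: plain method name

        // Step 2: retrieve runtime information for the node from the local cache.
        Optional<Double> measured = cache.averageLatencyMs(key);

        // Step 3: run inference for new nodes with unknown properties.
        double latencyMs = measured.orElseGet(() -> inference.predictLatencyMs(node));

        // Step 4: propagate the (measured or predicted) value to enclosing
        // entities, e.g. so a loop header can aggregate the cost of its body.
        cache.propagateToParents(node, latencyMs);
        return true; // keep visiting children
    }

    public static void run(CompilationUnit unit, MetricsCache cache, InferenceModel inference) {
        unit.accept(new FeedbackBuildPass(cache, inference));
    }
}
```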
What could impede the development workflow? The analysis needs to take different scopes into account (see the dispatch sketch below):
> Single File Builds
> Incremental Builds
> Full Project Builds
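One way to keep the matching pass proportional to the build scope is to restrict which files get re-analyzed per build. The scope names below mirror the list above; `DependencyIndex` and the dispatch logic are assumptions for illustration.

```java
import java.nio.file.Path;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Hypothetical dispatcher that keeps the matching pass proportional to build scope. */
public final class ScopedAnalysis {

    public enum BuildScope { SINGLE_FILE, INCREMENTAL, FULL_PROJECT }

    /** Maps a file to the files that depend on it (hypothetical index). */
    public interface DependencyIndex {
        Set<Path> dependentsOf(Path file);
    }

    private final DependencyIndex index;

    public ScopedAnalysis(DependencyIndex index) {
        this.index = index;
    }

    /** Returns the files whose AST nodes must be (re-)matched for this build. */
    public Set<Path> filesToAnalyze(BuildScope scope, List<Path> changed, List<Path> allFiles) {
        switch (scope) {
            case SINGLE_FILE:
                // Editor save: only the saved file is re-analyzed.
                return Set.copyOf(changed);
            case INCREMENTAL: {
                // Changed files plus their reverse dependencies, so propagated
                // predictions (e.g. latencies summed into callers) stay fresh.
                Set<Path> result = new HashSet<>(changed);
                for (Path file : changed) {
                    result.addAll(index.dependentsOf(file));
                }
                return result;
            }
            case FULL_PROJECT:
            default:
                return Set.copyOf(allFiles);
        }
    }
}
```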
[Architecture diagram: on each deployed infrastructure (1…n), a Deployed Feedback Handler transforms metrics from the Deployed System and publishes them via HTTP to a stream (Apache Kafka broker); on the workstation, a Local Feedback Handler consumes the stream into a local IDE cache and datastore for the IDE; a specification function, an inference function, and registered filters complete the pipeline.]
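The workstation-side piece of this pipeline could look roughly like the following sketch, using the standard Apache Kafka Java consumer. The topic name `feedback-metrics`, the broker address, and the string payload format are assumptions for illustration, not the actual PerformanceHat configuration.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

/** Hypothetical local feedback handler: consumes transformed metrics into the IDE cache. */
public class LocalFeedbackHandler {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "local-feedback-handler");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "feedback-metrics" is a hypothetical topic fed by the deployed
            // feedback handlers after their transform step.
            consumer.subscribe(List.of("feedback-metrics"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // key: source-code artefact id (e.g. a method signature);
                    // value: aggregated metric payload. Write it into the local
                    // IDE cache that the build pass reads from.
                    updateLocalCache(record.key(), record.value());
                }
            }
        }
    }

    static void updateLocalCache(String artefactKey, String metricPayload) {
        // Placeholder: persist into the local IDE cache / datastore.
        System.out.printf("cached %s -> %s%n", artefactKey, metricPayload);
    }
}
```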
> Between-subjects study
> 20 software engineers (min. 1 year of experience)
> Study subject: Agilefant
> 4 tasks (both related and not related to performance bugs)
> Control group: Kibana; treatment group: PerformanceHat
H01: Given a maintenance task that would introduce a performance bug, software engineers using PerformanceHat are faster in detecting the performance bug [Metric: First Encounter (FE)]
H02: Given a maintenance task that would introduce a performance bug, software engineers using PerformanceHat are faster in finding the root cause of the performance bug [Metric: Root-Cause Analysis (RCA)]
H03: Given a maintenance task that is not relevant to performance, software engineers using PerformanceHat are not slower than the control group in solving the task [Metric: Development Time]
PerformanceHat leverages operations data by matching performance metrics to source code artefacts to support decision-making during software development.
Open questions:
> For which (quality) attributes does it make sense to correlate with source code artefacts?
> How much feedback is too much feedback?
http://sealuzh.github.io/PerformanceHat/