are different from what profilers and tests can simulate, either locally or in staging environments. Especially for scalable cloud infrastructure, we need approaches that leverage information gathered at production runtime and feed it back to software engineers during development.
could impede the development workflow? Every build needs to (see the sketch below):
> Identify and extract relevant AST nodes
> Retrieve information for each relevant node
> Execute inference for new nodes with unknown properties
> Propagate predicted entities
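To make these steps concrete, here is a minimal sketch of such a build-time pass using the Eclipse JDT AST API (PerformanceHat is an Eclipse plugin). The `FeedbackStore` interface, the metric name, and the inference and annotation placeholders are assumptions for illustration, not PerformanceHat's actual implementation.

```java
import org.eclipse.jdt.core.dom.AST;
import org.eclipse.jdt.core.dom.ASTParser;
import org.eclipse.jdt.core.dom.ASTVisitor;
import org.eclipse.jdt.core.dom.CompilationUnit;
import org.eclipse.jdt.core.dom.MethodInvocation;

public class FeedbackBuildPass {

    /** Hypothetical lookup of production metrics per source-code artefact. */
    interface FeedbackStore {
        /** Average response time in ms, or null if the artefact is unknown. */
        Double avgResponseTimeMs(String artefactKey);
    }

    /** One incremental pass over a compilation unit during the build. */
    void run(char[] source, FeedbackStore store) {
        // Step 1: parse the file so relevant AST nodes can be identified.
        ASTParser parser = ASTParser.newParser(AST.JLS8);
        parser.setSource(source);
        CompilationUnit unit = (CompilationUnit) parser.createAST(null);

        unit.accept(new ASTVisitor() {
            @Override
            public boolean visit(MethodInvocation node) {
                String key = node.getName().getIdentifier();
                // Step 2: retrieve production feedback for this node.
                Double metric = store.avgResponseTimeMs(key);
                if (metric == null) {
                    // Step 3: new code has no runtime data yet; fall back to inference.
                    metric = inferFromKnownParts(node, store);
                }
                // Step 4: propagate the (predicted) value, e.g. as an IDE marker.
                annotate(node, metric);
                return true; // keep visiting child nodes
            }
        });
    }

    /** Placeholder: predict a metric for unseen code from known sub-nodes. */
    private Double inferFromKnownParts(MethodInvocation node, FeedbackStore store) {
        return 0.0; // real inference would combine metrics of known callees
    }

    /** Placeholder: attach the value where the IDE can render it (marker, hover). */
    private void annotate(MethodInvocation node, double metricMs) {
        System.out.printf("%s -> %.2f ms%n", node.getName(), metricMs);
    }
}
```

Because the pass runs on every build, keeping steps 2 and 3 cheap (local cache lookups rather than remote calls) is what keeps it from impeding the workflow.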
[Architecture diagram: a developer Workstation with the local IDE, a Local Feedback Handler, a Local Cache, and a Datastore; Deployed Infrastructures 1..n, each running a Deployed System with a Deployed Feedback Handler and a Transform step; communication over HTTP and a stream (Apache Kafka broker); a Specification Function, an Inference Function, and Registered Filters.]
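As a rough illustration of the feedback path in this architecture, the sketch below shows how a deployed feedback handler might publish runtime measurements to the Kafka broker stream. The topic name, JSON payload, and class name are hypothetical, not the project's actual wire format.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DeployedFeedbackHandler {

    private final KafkaProducer<String, String> producer;

    public DeployedFeedbackHandler(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    /**
     * Ships one measurement, keyed by the source-code artefact so that
     * downstream transforms can aggregate per method. The topic name
     * "runtime-feedback" is an assumed example.
     */
    public void publish(String artefact, double responseTimeMs) {
        String payload = String.format(
                "{\"artefact\":\"%s\",\"responseTimeMs\":%.2f}",
                artefact, responseTimeMs);
        producer.send(new ProducerRecord<>("runtime-feedback", artefact, payload));
    }

    public void close() {
        producer.close();
    }
}
```

Keying records by artefact keeps all measurements for one method in the same partition, which simplifies per-method aggregation in the Transform step.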
> Between-subjects study
> 20 software engineers (min. 1 year of experience)
> Study subject: Agilefant
> 4 tasks (both related and unrelated to performance bugs)
> Control group: Kibana; treatment group: PerformanceHat
H01: Given a maintenance task that would introduce a performance bug, software engineers using PerformanceHat are faster in detecting the performance bug [Metric: First Encounter (FE)]
H02: Given a maintenance task that would introduce a performance bug, software engineers using PerformanceHat are faster in finding the root cause of the performance bug [Metric: Root-Cause Analysis (RCA)]
H03: Given a maintenance task that is not relevant to performance, software engineers using PerformanceHat are not slower than the control group in solving the task [Metric: Development Time]
data by matching performance metrics to source code artefacts to support decision-making during software development.
Open questions:
> For which (quality) attributes does it make sense to correlate with source code artefacts?
> How much feedback is too much feedback?
http://sealuzh.github.io/PerformanceHat/