Slide 1

Slide 1 text

Towards Rapid Composition With Confidence in Robotics Software
Neil Ernst (University of Victoria), Rick Kazman (University of Hawaii), Phil Bianco (Software Engineering Institute)

Slide 2

Slide 2 text

Problem
Modern software development is a world of rapid release and compositional development. This challenges developers to rapidly deploy trusted systems that include increasing numbers of untrusted components. Bad decisions are easy to make and have long-lasting effects, for example, decisions based on outdated documentation or decisions skewed toward a single criterion (e.g., performance).

Slide 3

Slide 3 text

[Figure: existing approaches plotted against two axes, Confidence and Assessment Speed: AoA and assurance conformance; marketing material and vendor telecons; heuristic, off-the-cuff decisions. "Us" marks the Project Goal: high confidence at high assessment speed.]

Slide 4

Slide 4 text

Context: ROS-M
• ROS-M(ilitary) envisions a "component" approach to assembling DoD software for uncrewed vehicles.
• Vision: a special enclave with software tools, support services (e.g., docs and metadata), and a registry of available (approved) ROS components.
Developer dilemma: Which components do I use? What information do I need to avoid a bad decision? How do I get reliable data?

Slide 5

Slide 5 text

Working Usage Scenario
A developer is working on a UGV using ROS-M. (Icon: Sergey Demushkin, The Noun Project.)

Slide 6

Slide 6 text

Approach
1. Determine typical indicators, plus tools and data for each indicator.
2. Score each component.
3. Aggregate indicators: use expert input for weights (e.g., peak load, design hotspots, vulnerability collection).
4. Validate on an open source corpus and with stakeholders.
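
To make the scorecard idea concrete, here is a minimal sketch in Python. The indicator names, weights, and candidate components are illustrative assumptions, not the project's actual indicators or weighting model.

```python
# Hypothetical scorecard sketch: the indicators, weights, and candidates below
# are invented for illustration only.

# Expert-assigned weights per indicator (e.g., elicited from stakeholders).
WEIGHTS = {
    "peak_load_headroom": 0.40,      # performance indicator
    "design_hotspots": 0.35,         # maintainability indicator (already inverted: higher = better)
    "known_vulnerabilities": 0.25,   # security indicator (already inverted: higher = better)
}

def score_component(indicators: dict[str, float]) -> float:
    """Aggregate normalized indicator values (each in 0..1) into one scorecard value."""
    return sum(WEIGHTS[name] * indicators.get(name, 0.0) for name in WEIGHTS)

# Two candidate ROS components with already-normalized indicator values.
candidates = {
    "nav_planner_a": {"peak_load_headroom": 0.8, "design_hotspots": 0.6, "known_vulnerabilities": 0.9},
    "nav_planner_b": {"peak_load_headroom": 0.5, "design_hotspots": 0.9, "known_vulnerabilities": 0.7},
}

ranked = sorted(candidates, key=lambda c: score_component(candidates[c]), reverse=True)
print(ranked)  # candidates ordered best-first by scorecard value
```

In practice the weights would come from the stakeholder-priority model described on the aggregation slide rather than being fixed constants.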

Slide 7

Slide 7 text

Sample Project Health Indicators

Slide 8

Slide 8 text

Quality Attributes

Slide 9

Slide 9 text

Sample Results

Slide 10

Slide 10 text

Evaluation Criteria
Tool Correct
• Internal tests against known component measures.
• "Cross-fold": pilot on 1 component, then test on new ROS components.
Increased Confidence
• Ask ROS experts for past problematic indicators and measures (e.g., use of an O(n²) algorithm).
• Inject these and measure the detection rate.
• Survey stakeholders in practice.
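
As an illustration of the inject-and-detect step, a small sketch follows; the injected problem names and the tool's flagged set are hypothetical and stand in for whatever the scorecard tooling actually reports.

```python
# Illustrative detection-rate calculation for injected, expert-sourced problems
# (e.g., an O(n^2) algorithm on a hot path). Names are placeholders.

def detection_rate(injected_problems: list[str], flagged: set[str]) -> float:
    """Fraction of deliberately injected problems that the tooling flags."""
    if not injected_problems:
        return 1.0
    detected = sum(1 for p in injected_problems if p in flagged)
    return detected / len(injected_problems)

injected = ["quadratic_path_search", "stale_dependency", "unbounded_queue"]
flagged_by_tool = {"quadratic_path_search", "unbounded_queue"}
print(detection_rate(injected, flagged_by_tool))  # 0.67 -> 2 of 3 injected problems detected
```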

Slide 11

Slide 11 text

Evaluation Criteria (2)
Reduced Decision Time
• Baseline current approaches.
• Scorecard result (post-setup) achieved 1 SD faster than a developer acting without our tool.
Operational Validity
• Collaborator pilot of the tool on their system with their sample components.

Slide 12

Slide 12 text

Critiques and Open Questions
• What indicators ought to be included?
• We use existing architecture analyses.
• Iterative assessment is clearly important.
• How to contextualize the analysis for a particular system?
• Ignores the mixed-criticality community and other open architecture standards (FACE).
• More component selection than composition.
• Just pick the best component (it isn't that hard?).

Slide 13

Slide 13 text

Neil Ernst [email protected] Introduced ROS-M, with different focus for building ROS based robots Argued for scoreboard approach to rating ROS components Scoreboard combines indicators and potential components to rank candidates Rick Kazman [email protected]

Slide 14

Slide 14 text

Aggregation of Values
Possible choices:
• Equal weighting: sum all indicators, normalize to 0..1.
• Normalize to industry baselines and categorize as high, medium, low.
Approach:
1. Each indicator gets a weight w based on our model of stakeholder priority.
2. The model is derived from interviews and short AHP prioritization exercises.
3. Normalize as necessary for different denominators such as SLOC and programming language.
4. We create customizable templates, based on stakeholder interviews, e.g. (w_M1 · w_M2) + 2(w_P1 · log w_P2) + 3(w_S1), reflecting that some indicators are more important and some are highly correlated.
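
Below is a minimal sketch of that example template, assuming each w term denotes a weight multiplied by a normalized indicator value and using log(1 + x) in place of a bare log for numerical safety; the indicator names (M1, M2, P1, P2, S1), baselines, and weights are placeholders rather than the authors' calibrated model.

```python
import math

# Placeholder weights for the five example indicators; in the approach above
# these would come from interviews and AHP prioritization.
weights = {"M1": 0.2, "M2": 0.2, "P1": 0.15, "P2": 0.15, "S1": 0.3}

def normalize(raw: float, baseline: float) -> float:
    """Normalize a raw measure against a baseline (e.g., per SLOC or industry norm) into 0..1."""
    return min(raw / baseline, 1.0) if baseline > 0 else 0.0

def aggregate(values: dict[str, float]) -> float:
    """Example template: (wM1 * wM2) + 2*(wP1 * log(1 + wP2)) + 3*(wS1).
    Products group correlated indicators; the 2x and 3x coefficients mark more important ones."""
    w = {k: weights[k] * values[k] for k in weights}  # weight times normalized value
    return (w["M1"] * w["M2"]) + 2 * (w["P1"] * math.log1p(w["P2"])) + 3 * w["S1"]

# Hypothetical raw measures normalized against an assumed baseline of 100.
values = {k: normalize(raw, baseline=100.0)
          for k, raw in {"M1": 70, "M2": 55, "P1": 90, "P2": 40, "S1": 80}.items()}
print(aggregate(values))  # single aggregated score for this candidate component
```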