and compositional development. This challenges developers to rapidly deploy trusted systems that incorporate increasing numbers of untrusted components. Bad decisions are easy to make and have long-lasting consequences: for example, decisions based on outdated documentation, or decisions skewed toward a single criterion (e.g., performance).
DoD software for uncrewed vehicles.
• Vision: a special enclave with software tools, support services (e.g., docs and metadata), and a registry of available (approved) ROS components.
• Developer dilemma: Which components do I use? What information do I need to avoid a bad decision? How do I get reliable data?
1. Define measures for each indicator.
2. Score each component.
3. Aggregate indicators: use expert input for weights (e.g., peak load, design hotspots, vulnerability collection); see the sketch after this list.
4. Validate on an open-source corpus and with stakeholders.
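A minimal sketch of steps 2 and 3, assuming illustrative component names, indicator names (peak_load, design_hotspots, known_vulns), and weight values; in the actual approach these come from measurement and the expert elicitation described above:

```python
from typing import Dict

# Steps 1-2: normalized measures per component, one value per indicator
# (all names and values are illustrative, not from the original).
raw_scores: Dict[str, Dict[str, float]] = {
    "nav_stack_a": {"peak_load": 0.31, "design_hotspots": 0.12, "known_vulns": 0.05},
    "nav_stack_b": {"peak_load": 0.58, "design_hotspots": 0.40, "known_vulns": 0.22},
}

# Step 3: expert-supplied weights (illustrative values only).
weights = {"peak_load": 0.5, "design_hotspots": 0.3, "known_vulns": 0.2}

def aggregate(measures: Dict[str, float]) -> float:
    """Weighted sum of normalized indicator scores (lower is better here)."""
    return sum(weights[k] * v for k, v in measures.items())

# Rank candidate components by aggregate score.
for name, measures in sorted(raw_scores.items(), key=lambda kv: aggregate(kv[1])):
    print(f"{name}: {aggregate(measures):.3f}")
```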
measures.
• “Cross-fold”: pilot on one component, then test on new ROS components.
Increased Confidence
• Ask ROS experts for past problematic indicators and measures (e.g., use of an O(n²) algorithm).
• Inject these and measure the detection rate (sketched below).
• Survey stakeholders in practice.
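A minimal sketch of the detection-rate measurement, assuming hypothetical component IDs for the seeded set and for the set the scorecard flags:

```python
# Sketch of detection-rate measurement after injecting known-problematic
# indicators (e.g., an O(n^2) algorithm). Component IDs are hypothetical.
injected = {"comp_1", "comp_4", "comp_7"}   # components seeded with known problems
flagged = {"comp_1", "comp_4", "comp_9"}    # components the scorecard flagged

true_positives = injected & flagged
detection_rate = len(true_positives) / len(injected)  # recall on seeded problems
false_alarms = flagged - injected

print(f"detection rate: {detection_rate:.0%}; false alarms: {sorted(false_alarms)}")
```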
• Scorecard result (post-setup) reached 1 SD (standard deviation) faster than a developer acting without our tool.
Operational Validity
• Collaborator pilot of the tool on their system with their sample components.
included?
• We use existing architecture analyses.
• Iterative assessment is clearly important.
• How do we contextualize the analysis for a particular system?
• Ignores the mixed-criticality community and other open architecture standards (e.g., FACE).
• More component selection than composition.
• Just pick the best component (it isn’t that hard?)
ROS-based robots
Argued for a scorecard approach to rating ROS components.
The scorecard combines indicators across potential components to rank candidates.
Rick Kazman [email protected]
indicators, normalize to 0..1
• normalize to industry baselines and categorize as high, medium, or low
Approach:
1. Each indicator gets a weight w based on our model of stakeholder priority.
2. The model is derived from interviews and short AHP prioritization exercises.
3. Normalize as necessary for different denominators, such as SLOC or programming language.
4. We create customizable templates, based on stakeholder interviews.
   - E.g., (w_M1 * w_M2) + 2(w_P1 * log w_P2) + 3(w_S1), where the integer coefficients up-weight indicators that are more important and the multiplied pairs capture indicators that are highly correlated. A sketch follows.
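A minimal sketch of evaluating that example template, assuming the subscripted weights are placeholder indicator families, an assumed industry baseline range for the 0..1 normalization, and a small guard on the log term; none of these specifics are from the original:

```python
import math

def normalize(value: float, baseline_min: float, baseline_max: float) -> float:
    """Map a raw measure onto 0..1 against an (assumed) industry baseline range."""
    return min(max((value - baseline_min) / (baseline_max - baseline_min), 0.0), 1.0)

def template_score(w_M1: float, w_M2: float, w_P1: float, w_P2: float, w_S1: float) -> float:
    """Example template from the slide: (w_M1 * w_M2) + 2(w_P1 * log w_P2) + 3(w_S1).
    Multiplied pairs are the highly correlated indicators; the integer
    coefficients up-weight the more important ones."""
    return (w_M1 * w_M2) + 2 * (w_P1 * math.log(max(w_P2, 1e-6))) + 3 * w_S1

# Normalize one raw measure against an illustrative baseline range, then score.
w_P2 = normalize(420.0, 100.0, 1000.0)   # e.g., a raw measure vs. a 100..1000 baseline
print(f"{template_score(0.8, 0.6, 0.7, w_P2, 0.4):.3f}")
```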