ERA talk at the International Workshop on Robotics in Software Engineering at ICSE, 2018
Towards Rapid Composition With
Confidence in Robotics Software
Neil Ernst, Rick Kazman, Phil Bianco
Modern software development is a world of rapid
release and compositional development.
This challenges developers to rapidly deploy trusted
systems that include increasing numbers of untrusted components.
Bad decisions are easy to make and have long reach.
For example, decisions based on outdated
documentation or vendor telecons, or decisions
skewed to one criterion.
AoA (Analysis of Alternatives) and Assurance
• ROS-M(ilitary) envisions a “component”
approach to assembling DoD software for
robotic systems.
• Vision: a special enclave with software tools,
support services (e.g., docs and metadata), and a
registry of available (approved) ROS components.
Developer dilemma: Which components do I
use? What information do I need to avoid a bad
decision? How do I get reliable data?
Working Usage Scenario
A developer is working on a UGV (unmanned ground vehicle) using ROS-M.
1. Determine typical indicators, plus tools and data
for each indicator.
2. Score each component.
3. Aggregate indicators: use expert input for
weights (e.g., peak load, design hotspots).
4. Validate on an open source corpus and with collaborators:
• Internal tests against components with known issues.
• “Cross-fold”: pilot on 1 component, then test on
new ROS components.
• Ask ROS experts for past problematic indicators
and measures (e.g., use of an O(n²) algorithm).
• Inject these and measure detection rate.
• Survey stakeholders in practice.
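The score-and-aggregate steps above can be sketched in a few lines. This is a minimal illustration only; the indicator names, weights, and per-component scores below are hypothetical, not from the talk:

```python
# Sketch of the scorecard idea: score components on indicators,
# aggregate with expert-derived weights, and rank candidates.
# All indicator names, weights, and scores are hypothetical.

def aggregate(scores, weights):
    """Weighted sum of indicator scores, normalized to 0..1."""
    total = sum(weights[i] * scores[i] for i in scores)
    return total / sum(weights.values())

# Expert-derived weights per indicator (e.g., elicited via AHP sessions).
weights = {"docs_quality": 0.2, "test_coverage": 0.5, "perf_under_load": 0.3}

# Per-component indicator scores, each already on a common 0..1 scale.
candidates = {
    "ros_pkg_a": {"docs_quality": 0.9, "test_coverage": 0.4, "perf_under_load": 0.7},
    "ros_pkg_b": {"docs_quality": 0.6, "test_coverage": 0.8, "perf_under_load": 0.8},
}

# Rank candidates by aggregate score, best first.
ranked = sorted(candidates, key=lambda c: aggregate(candidates[c], weights),
                reverse=True)
print(ranked)  # → ['ros_pkg_b', 'ros_pkg_a']
```

A real scorecard would first normalize raw indicator values and invert any "lower is better" indicators; the sketch assumes that has already happened.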
Evaluation Criteria (2)
Reduced Decision Time
• Baseline current approaches
• Scorecard result (post-setup) reached one standard
deviation faster than by a developer acting without our tool
• Collaborator pilot of tool on their system with
their sample components
Critiques and Open Questions
• What indicators ought to be included?
• We use existing architecture analyses
• Iterative assessment clearly important
• How to contextualize analysis for particular domains?
• Ignores mixed-criticality community, other open
architecture standards (FACE)
• More component selection than composition
• Just pick the best component (it isn’t that hard?)
Introduced ROS-M, with a different focus for
building ROS-based robots.
Argued for a scorecard approach to rating ROS components.
The scorecard combines indicators and potential
components to rank candidates.
Aggregation of Values
• Equal weighting: sum all indicators, normalize to 0..1.
• Or normalize to industry baselines and categorize as high,
medium, or low.
1. Each indicator gets a weight w based on our model.
2. The model is derived from interviews and short AHP
(Analytic Hierarchy Process) sessions.
3. Normalize as necessary for different denominators, such
as SLOC or programming language.
4. We create customizable templates, e.g.:
   score = (w_M1) + 2(w_P1 · log w_P2) + 3(w_S …)
   (some indicators are more important; some are highly correlated)
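The equal-weighting option above can be read as the following sketch. The raw indicator values and the choice of min-max normalization are assumptions for illustration, not from the talk:

```python
# Equal-weighting aggregation: bring each indicator onto a 0..1 scale
# (min-max across candidates, an assumed choice), then average them.
# Raw values below are hypothetical.

def minmax(values):
    """Rescale a list of raw values to 0..1 via min-max normalization."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

# Raw indicator values per candidate (columns = candidates). Note the
# different denominators/scales, e.g. defects per KSLOC vs. milliseconds.
raw = {
    "defect_density": [1.2, 0.4, 2.0],
    "response_ms":    [30.0, 55.0, 20.0],
}

# Normalize each indicator row, then take the unweighted mean per candidate.
norm = {k: minmax(v) for k, v in raw.items()}
n_candidates = len(next(iter(raw.values())))
scores = [sum(norm[k][i] for k in norm) / len(norm)
          for i in range(n_candidates)]
print(scores)
```

The sketch shows only the normalize-then-average mechanics; a real scorecard would also invert "lower is better" indicators (such as defect density) before averaging.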