
Towards Rapid Composition with Confidence in Robotics Software

ERA talk at the International Workshop on Robotics in Software Engineering at ICSE, 2018

Neil Ernst

May 28, 2018
Transcript

  1. Towards Rapid Composition With
    Confidence in Robotics Software
    Neil Ernst’, Rick Kazman, Phil Bianco⸙
    ’ University of Victoria
    University of Hawaii
    ⸙ Software Engineering Institute


  2. Problem
    Modern software development is a world of rapid
    release and compositional development.
    This challenges developers to rapidly deploy trusted
    systems that include increasing numbers of untrusted
    components.
    Bad decisions are easy to make and have long reach.
    For example, decisions based on outdated
    documentation or decisions skewed to one criterion
    (e.g., performance).


  3. AoA (Analysis of Alternatives) and Assurance
    [Figure: assessment approaches plotted on axes of
    Confidence vs. Assessment Speed: conformance;
    marketing material and vendor telecons; heuristic,
    off-the-cuff decisions; "Us"; and the project goal.]


  4. Context: ROS-M
    • ROS-M(ilitary) envisions a “component”
    approach to assembling DoD software for
    uncrewed vehicles.
    • Vision: special enclave with software tools,
    support services (e.g. docs and metadata), and a
    registry of available (approved) ROS
    components.
    Developer dilemma: Which components do I
    use? What information do I need to avoid a bad
    decision? How do I get reliable data?


  5. Working Usage Scenario
    A developer is working on a UGV (uncrewed ground
    vehicle) using ROS-M.
    (Icon: Sergey Demushkin, Noun Project)


  6. Approach
    1. Determine typical indicators, plus tools and data
    for each indicator.
    2. Score each component.
    3. Aggregate indicators: use expert input for
    weights (e.g., peak load, design hotspots,
    vulnerability collection); a scoring sketch
    follows this list.
    4. Validate on open source corpus and with
    stakeholders.
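
    As a rough illustration of steps 1-3, here is a minimal Python
    sketch that scores one component from a handful of indicators and
    aggregates them with expert weights. The indicator names, bounds,
    and weights are invented for illustration; they are not the
    project's actual indicator set or tool interface.

        # Sketch of the scorecard pipeline (steps 1-3 above).
        # All names and numbers are illustrative assumptions.

        RAW = {                        # step 1: indicator values from tools/data
            "commits_per_month": 42.0,
            "open_issue_ratio": 0.35,  # open / total issues
            "p95_latency_ms": 180.0,
        }

        BOUNDS = {                     # min-max bounds from a reference corpus
            "commits_per_month": (0.0, 100.0),
            "open_issue_ratio": (0.0, 1.0),
            "p95_latency_ms": (0.0, 500.0),
        }
        LOWER_IS_BETTER = {"open_issue_ratio", "p95_latency_ms"}

        def score(name: str, value: float) -> float:
            """Step 2: map a raw measure onto 0..1, higher = healthier."""
            lo, hi = BOUNDS[name]
            s = max(0.0, min(1.0, (value - lo) / (hi - lo)))
            return 1.0 - s if name in LOWER_IS_BETTER else s

        # Step 3: weighted aggregation; weights would come from expert input.
        WEIGHTS = {
            "commits_per_month": 0.3,
            "open_issue_ratio": 0.4,
            "p95_latency_ms": 0.3,
        }

        total = sum(WEIGHTS[n] * score(n, v) for n, v in RAW.items())
        print(f"component score: {total:.2f}")  # rank candidates by this value

    Ranking candidates then reduces to sorting components by this
    aggregate, which is what the scorecard in the following slides
    reports.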


  7. Sample Project Health Indicators


  8. Quality Attributes


  9. Sample Results


  10. Evaluation Criteria
    Tool Correctness
    • Internal tests against known component
    measures.
    • “Cross-fold”: pilot on 1 component, then test on
    new ROS components.
    Increased Confidence
    • Ask ROS experts for past problematic indicators
    and measures (e.g., use of an O(n²) algorithm).
    • Inject these and measure detection rate.
    • Survey stakeholders in practice.
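
    One way to operationalize the injection test above: seed
    components with known-problematic measures, run the scorecard, and
    count how many fall below a flagging threshold. The threshold and
    the aggregate scores below are placeholders, not the tool's real
    cutoff or data.

        # Hedged sketch of the injection check. An injected problem (e.g. an
        # O(n²) hot path) should depress a performance indicator and thus the
        # component's aggregate score.
        FLAG_THRESHOLD = 0.5  # assumed cutoff for "problematic"

        injected_scores = [0.31, 0.44, 0.58, 0.22, 0.49]  # made-up aggregates

        detected = sum(1 for s in injected_scores if s < FLAG_THRESHOLD)
        print(f"detection rate: {detected}/{len(injected_scores)} "
              f"({detected / len(injected_scores):.0%})")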


  11. Evaluation Criteria (2)
    Reduced Decision Time
    • Baseline current approaches
    • Scorecard delivers a result (post-setup) at least
    1 SD faster than a developer working without our tool
    Operational Validity
    • Collaborator pilot of tool on their system with
    their sample components
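
    The "1 SD faster" criterion can be checked directly: is the mean
    tool-assisted decision time at least one baseline standard
    deviation below the baseline mean? The timing data below is
    invented for illustration.

        # Sketch of the decision-time criterion; all times are hypothetical.
        from statistics import mean, stdev

        baseline = [340, 410, 385, 290, 450]  # minutes, developers without tool
        with_tool = [120, 150, 95]            # minutes, scorecard users post-setup

        target = mean(baseline) - stdev(baseline)  # one SD below baseline mean
        print(f"target: <= {target:.0f} min, with tool: {mean(with_tool):.0f} min, "
              f"criterion met: {mean(with_tool) <= target}")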


  12. Critiques and Open Questions
    • What indicators ought to be included?
    • We use existing architecture analyses
    • Iterative assessment clearly important
    • How to contextualize the analysis for a
    particular system?
    • Ignores mixed-criticality community, other open
    architecture standards (FACE)
    • More component selection than composition
    • Just pick the best component (it isn’t that hard?)


  13. Neil Ernst
    [email protected]
    Rick Kazman
    [email protected]

    • Introduced ROS-M, with a different focus for
    building ROS-based robots
    • Argued for a scorecard approach to rating ROS
    components
    • The scorecard combines indicators and potential
    components to rank candidates

  14. Aggregation of Values
    Possible choices:
    • equal weighting: sum all indicators, normalize to 0..1
    • normalize to industry baselines and categorize as high,
    medium, low
    Approach:
    1. Each indicator gets a weight w based on our model of
    stakeholder priority
    2. The model is derived from interviews and short
    AHP (Analytic Hierarchy Process) prioritization exercises
    3. Normalize as necessary for different denominators such
    as SLOC, programming language
    4. We create customizable templates, based on
    stakeholder interviews.
    - E.g., (w_M1 * w_M2) + 2(w_P1 * log w_P2) + 3(w_S1)
      (coefficients reflect that some indicators are more
      important; products combine indicators that are
      highly correlated)
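
    For concreteness, here is a minimal sketch of evaluating that
    example template in Python. The weight values are placeholders;
    real weights would come from the AHP interviews, and the template
    itself would be stakeholder-specific.

        # Evaluate the example template:
        # (w_M1 * w_M2) + 2(w_P1 * log w_P2) + 3(w_S1)
        import math

        w = {"M1": 0.6, "M2": 0.8, "P1": 0.7, "P2": 2.5, "S1": 0.5}  # placeholders

        aggregate = (w["M1"] * w["M2"]) \
            + 2 * (w["P1"] * math.log(w["P2"])) \
            + 3 * w["S1"]
        print(f"aggregate score: {aggregate:.2f}")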
