
RASPBERRY SI: Resource Adaptive Software Purpose-Built for Extraordinary Robotic Research Yields - Science Instruments


NASA and other government agencies have supported robotics research for decades, resulting in exciting advances and incredible demonstrations, but it remains difficult to adapt robotics software in response to unknown environmental changes (e.g., severe weather or radiation changes in space). The underlying problem is the limited degree of autonomy available to react to unexpected environmental changes in a timely fashion: human operators on Earth must devise a plan based on data transmitted from the robot, transmit that plan back to the robot in space, and hope that its execution proceeds as expected. More importantly, this communication is limited (e.g., to a few times per day). Corrections to these plans, or reactions to unexpected circumstances, can only happen after data describing the current situation has been transmitted back to Earth and analyzed. The high-latency communications associated with remote robot operations in space are cumbersome, delay mission completion, and increase the danger of rendering robots unusable.

RASPBERRY SI (Resource Adaptive Software Purpose-Built for Extraordinary Robotic Research Yields - Science Instruments) leverages software and algorithms developed under the DARPA BRASS (Building Resource Adaptive Software Systems) program, which was successfully completed in December 2019. RASPBERRY SI will work with existing and/or planned science instruments to autonomously adapt lander and instrument software (and therefore their behaviors and actions) in response to newly discovered data on the planetary surface. As an example, if instruments detect an unexpected element or compound that would ordinarily lead scientists to perform a high-fidelity analysis in a certain spectrum, the system will analyze its existing resources and reconfigure itself to perform that analysis without waiting for round-trip communication to Earth for a new set of commands from the ground station.

RASPBERRY SI will provide NASA and partner scientists with unprecedented, yet necessary, capabilities to autonomously respond to newly discovered data in real-time "on the ground". Without the capabilities provided by RASPBERRY SI, the return of valuable science data will remain slow due to extremely long round-trip transmission times, especially in the outer solar system, and the lander system will rest in an idle state for a significant amount of its time on the surface. When missions include time constraints (e.g., observation of transient phenomena), RASPBERRY SI becomes even more critical, as the volume of scientific data that must be collected simply cannot be obtained within the available time window.
The aim of this project is to increase the autonomy of a mission on the surface of another planet without the need for round-trip communication for human supervision. We also aim to increase the autonomy of the spacecraft in unknown and uncertain environments. The project will further increase the speed of scientific exploration through accurate task prioritization and by reducing the number of mission interruptions, dynamically and carefully adapting to environmental and system changes during operation. We will demonstrate the effectiveness of our methods by deploying and optimizing state-of-the-art machine learning on the NASA testbed.

Our team is in a unique position to undertake this project, as we build on the DARPA BRASS technology that we have matured over four years (2016-2019). This technology enables automated software adaptation through "learning-based autonomous planning and adaptation". Our approach handles a wide variety of changes, including adding, removing, or updating sensors, actuators, software components, and protocols, as well as semantic incompatibilities. Our objectives include enabling landers to automatically adapt software that fails to meet its objectives and to automatically incorporate new functionality into the lander.
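A toy sketch of the monitor-analyze-plan-execute (MAPE) loop that underlies learning-based self-adaptation of this kind. All component names, modes, utility values, and the objective threshold below are invented for illustration; this is not the project's actual design.

```python
# Minimal MAPE-style adaptation loop: detect that the software fails to
# meet its objective, then reconfigure to the best-scoring alternative.
def monitor(system):
    # Observe the running system's state.
    return {"throughput": system["throughput"], "objective": system["objective"]}

def analyze(state):
    # The software "fails to meet its objectives" when utility is too low.
    return state["throughput"] < state["objective"]

def plan():
    # Pick the reconfiguration with the best predicted utility
    # (hypothetical modes and scores).
    options = {"low_res_mode": 0.6, "high_res_mode": 0.9}
    return max(options, key=options.get)

def execute(system, action):
    # Apply the chosen reconfiguration.
    system["config"] = action
    system["throughput"] = 0.9 if action == "high_res_mode" else 0.6
    return system

def adapt(system):
    state = monitor(system)
    if analyze(state):
        system = execute(system, plan())
    return system

lander = {"config": "default", "throughput": 0.3, "objective": 0.8}
lander = adapt(lander)
print(lander["config"], lander["throughput"])
```

In a real adaptive system the `plan` step would query learned models of utility rather than a fixed table, but the loop shape is the same.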

Pooyan Jamshidi

August 29, 2020

Transcript

  1. RASPBERRY SI
    AISR: Autonomous Robotics Research for Ocean Worlds (ARROW)
    Resource Adaptive Software Purpose-Built for Extraordinary Robotic Research Yields - Science Instruments
    Team: Pooyan Jamshidi (USC, PI); David Garlan (CMU, Co-I); Bradley Schmerl (CMU, Co-I); Matt DeMinico (NASA, Co-I); Javier Camara (York, Collaborator); Ellen Czaplinski (Arkansas, Consultant); Katherine Dzurilla (Arkansas, Consultant); Jianhai Su (USC, Graduate Student)

  2. Current Practice of Science Discovery in Remote Planets
    Problem: slow science discovery due to a lack of full autonomy.
    Drawbacks: (i) delay in science discovery; (ii) high mission costs; (iii) high risks; (iv) does not scale; (v) only deals with known knowns.
    [Diagram: planners and engineers on Earth uplink command sequences to the spacecraft (~2.5 hours each way); the spacecraft downlinks telemetry images and scientific data (images, measurements) describing its state and science activities for postmortem analysis.]

  3. Ideal Vision of Science Discovery in Remote Planets
    Solution: fast science discovery with AI-based full autonomy.
    Benefits: (i) fast science discovery; (ii) low mission costs; (iii) low risks; (iv) does scale; (v) can deal with unknown unknowns.
    [Diagram: a fast, high-frequency onboard loop (perception, perceived state, mission planning, onboard command sequences, onboard telemetry images) drives the science mission, while the slow, low-frequency ground loop (telemetry images downlinked to planners and engineers, corrections uplinked to the spacecraft) handles the actual state, scientific data, and postmortem analysis.]

  4. Challenges and Opportunities
    • Large amounts of data are needed to train an accurate and reliable model.

    • Data collection on other planets is slow.

    • Data from previous explorations with similar physics and characteristics can help.

    • Physics-based simulation data can help.

  5. Transfer Learning from Simulation to Ocean Worlds
    Sim2Real transfer: models trained in the simulation environment are transferred to the autonomous system in the deployed environment.
    [Diagram: VxSIM – Virtual Exercise Framework, comprising sensor models (camera, LIDAR, GPS/IMU, RADAR), an HD virtual environment, a simulation network, vehicle and articulated-model dynamics, a scenario builder, and exercise management; the autonomous AI system exchanges status, commands, and sensor data through a process and parameter interface.]

  6. Transfer Learning from the Earth to Ocean Worlds
    Earth2Europa: transfer learning from the Earth (well-known physics, big data) to Ocean Worlds such as Europa, Enceladus, and Titan (limited known physics, small data), exploiting causal invariances via causal AI.
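The Earth2Europa idea above can be sketched numerically: learn on abundant "Earth" data, then use those weights as a prior when fitting scarce "Europa" data. This is a minimal illustration with synthetic data and ridge regression, not the project's actual learning method; all datasets and coefficients below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Earth" source domain: abundant data, well-known physics.
w_earth_true = np.array([2.0, -1.0, 0.5])
X_src = rng.normal(size=(1000, 3))
y_src = X_src @ w_earth_true + rng.normal(scale=0.1, size=1000)

# "Europa" target domain: scarce data, shifted but related physics.
w_europa_true = np.array([2.2, -0.9, 0.4])
X_tgt = rng.normal(size=(20, 3))
y_tgt = X_tgt @ w_europa_true + rng.normal(scale=0.1, size=20)

def fit_ridge(X, y, lam=1e-3, prior=None):
    """Ridge regression; if `prior` weights are given, shrink toward them
    instead of toward zero (a simple form of transfer)."""
    if prior is None:
        prior = np.zeros(X.shape[1])
    A = X.T @ X + lam * np.eye(X.shape[1])
    b = X.T @ y + lam * prior
    return np.linalg.solve(A, b)

w_src = fit_ridge(X_src, y_src)                            # learn on big Earth data
w_scratch = fit_ridge(X_tgt, y_tgt, lam=5.0)               # small data, no transfer
w_transfer = fit_ridge(X_tgt, y_tgt, lam=5.0, prior=w_src) # transfer from Earth

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Held-out target-domain data for comparison.
X_test = rng.normal(size=(500, 3))
y_test = X_test @ w_europa_true
print("scratch:", mse(w_scratch, X_test, y_test),
      "transfer:", mse(w_transfer, X_test, y_test))
```

Because the Earth weights are close to the Europa weights, shrinking toward them beats shrinking toward zero on the small target dataset, which is the essence of exploiting invariances across domains.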

  7. Simulation using OceanWATERS
    Components under evaluation: (A) model learning, (B) transfer learning, (C) model compression, (D) online learning, (E) quantitative planning; evaluated via component tests and integration tests.
    Cases: Case 1 (baseline): A + E; Case 2 (transfer): A + B + E; Case 3 (compress): A + B + C + E; Case 4 (online): A + B + C + D + E.
    Expected performance: Case 1 < Case 2 < Case 3 < Case 4.
    OWLAT Code: https://github.com/nasa/ow_simulator
    Physical Autonomy Testbed: https://www1.grc.nasa.gov/wp-content/uploads/2020_ASCE_OWLAT_20191028.pdf
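The evaluation ladder on this slide is an incremental ablation: each case enables a strictly larger set of components, which is what makes the expected ordering Case 1 < Case 2 < Case 3 < Case 4 meaningful. A small sketch of that structure (the component letters are from the slide; the case names are just labels):

```python
# Components from the slide: A=model learning, B=transfer learning,
# C=model compression, D=online learning, E=quantitative planning.
CASES = {
    "case1_baseline": {"A", "E"},
    "case2_transfer": {"A", "B", "E"},
    "case3_compress": {"A", "B", "C", "E"},
    "case4_online":   {"A", "B", "C", "D", "E"},
}

# Sanity check: each case's component set is a strict superset of the
# previous one, so any performance gain is attributable to the new component.
order = ["case1_baseline", "case2_transfer", "case3_compress", "case4_online"]
nested = all(CASES[a] < CASES[b] for a, b in zip(order, order[1:]))
print("cases are strictly nested:", nested)
```

In the actual evaluation, each case would be run through the OceanWATERS testbed and scored on mission performance; this snippet only captures the experimental design.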

  8. Real-World Experiments using OWLAT
    • Models learned from simulation

    • Adaptive system (learning + planning)

    • Sets of tests
    [Diagram: the adaptive system applies machine-learning models in the mission environment; continual learning refines the models; mission logs and reports flow to a local machine and cloud storage.]

  9. Test Coverage
    • Mission Types: landing and scientific explorations -> sampling

    • Mission Difficulty:

    • Rough regions for landing

    • Number of locations where a sample needs to be fetched

    • Unexpected events:
    • Changes in the environment: e.g., uneven terrain and weather

    • Changes to the lander capabilities: e.g., deploying new sensors

    • Faults (power, instruments, etc.)


  10. Success Criteria of Evaluation
    • Correctness/Safety: all operating parameters of the spacecraft are within
    tolerances of their expected values.

    • Accuracy: how close the state estimates generated by the onboard learning algorithms
    are to ground-truth values.

    • Efficiency: computation times, energy consumption, and data bandwidth
    consumed.

    • Quality of Mission Completion
    • Major metric: mission goal (land successfully or collect expected amount of materials)

    • Minor metric: efficiency of the mission

    • A mission that violates correctness requirements is marked as a failure.
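The success criteria above can be sketched as a small evaluation function. The record fields, parameter names, and values below are hypothetical stand-ins, not the testbed's actual telemetry schema.

```python
from dataclasses import dataclass

@dataclass
class MissionRun:
    params: dict          # operating parameter -> measured value
    tolerances: dict      # operating parameter -> (low, high) bounds
    estimates: list       # onboard state estimates
    ground_truth: list    # matching ground-truth values
    goal_met: bool        # e.g., landed successfully / collected samples
    energy_used: float    # efficiency metric (arbitrary units)

def correctness(run):
    """Safety: every operating parameter stays within its tolerance band."""
    return all(lo <= run.params[p] <= hi
               for p, (lo, hi) in run.tolerances.items())

def accuracy(run):
    """Mean absolute error of onboard estimates vs. ground truth."""
    errors = [abs(e - g) for e, g in zip(run.estimates, run.ground_truth)]
    return sum(errors) / len(errors)

def mission_outcome(run):
    """The major metric is the mission goal, but a correctness violation
    overrides everything and marks the run as a failure."""
    if not correctness(run):
        return "failure"
    return "success" if run.goal_met else "failure"

run = MissionRun(
    params={"battery_v": 27.5, "arm_torque": 3.1},
    tolerances={"battery_v": (24.0, 32.0), "arm_torque": (0.0, 5.0)},
    estimates=[1.02, 2.98], ground_truth=[1.0, 3.0],
    goal_met=True, energy_used=42.0,
)
print(mission_outcome(run), accuracy(run))
```

The minor efficiency metrics (computation time, energy, bandwidth) would be reported alongside the outcome rather than folded into it.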


  11. Evaluation Infrastructure
    [Diagram: a test generator produces tests and a mission configuration for the test harness, which drives the autonomy module (learning & planning plus a plan executive) against the testbed (environment & lander simulation) through an adapter interface; communication, monitoring, and logging produce logs whose analysis yields the evaluation report.]

  12. Discussions (Virtual Testbed)
    • Virtual testbed capabilities: Battery charge; Position, pose,
    time, orientation?
    • Is the power model parametric (discharge rate)?

    • Telecommunication in virtual testbed – bandwidth
    consumption?

    • New features in virtual testbed?


  13. Discussions (Physical Testbed)
    • Will the physical testbed have the lander move around?

    • Are concurrent actions common during a mission? Does the language support both
    sequential and concurrent actions?

    • Time constraints for reconfigurations: how strict are the constraints? Impact
    on planning?

    • The gap between physical lander testbed and virtual testbed

    • Does the physical testbed provide a physical area for the lander to move around? How
    will landing be simulated in the physical testbed?

    • Any way to implement a saw blade into testbed (like in Europa Lander mission)?

    • For bulk excavation to depth (10 cm or greater)

    • Largely agnostic to local surface topography


  14. Discussions (Test Case Design)
    • Challenge problems for the lander to guide research and evaluation.

    • Test case design in an agile way from Day 1!

    • Mission Types: landing and science instrument

    • Test Scenarios:

    • Are there any guidelines for creating test cases?

    • How would you engage in the design of test scenarios?

    • We would like an agile approach from Day-1 to design realistic
    test cases.

    • Transfer learning scenarios?


  15. Test Cases: Surface scenarios/events
    • Would radiation or light from a distant supernova affect surface operations?

    • Thermal/power conservation during eclipses, which occur frequently on Europa

    • Virtual testbed darkens scene evenly for eclipses, but doesn’t simulate subtle gradation of a
    planet’s penumbra

    • Big unknowns in near-field features

    • Good estimates from Death Valley and Atacama desert research

    • But how do we account for unknown surface features?

    • “Europa-quakes”

    • Europa’s plate tectonics and icy shell provide opportunities to study quakes from the surface;
    we also need to account for this type of event and how the lander would respond if vibrations
    caused a key instrument to fail

    • Nearby Europa plumes

    • Exciting event for life detection and sampling material from subsurface ocean

    • How would lander respond?


  16. Test Cases: Surface scenarios/events
    • From Europa lander report

    • “Tal” used to represent orbital period of the carrier/relayer spacecraft (24 Earth hours)

    • 5 sampling tals planned over a 20-day mission

    • Sample acquisition ~5 hours

    • The sample cycle is expected to be a fully autonomous sequence; how might this be autonomously
    adjusted if the lander has to account for an “unknown event” (e.g., intense quakes or a plume)?

    • Will testbeds simulate deorbit, descent, and landing (DDL)?

    • If so, it’s possible that the hydrazine exhaust could deposit material on the surface near the
    landing site

    • How to implement this into testbed, if applicable?

    • How would instruments differentiate between Europa-native species or hydrazine-native
    species of nitrogen, ammonia, hydrogen, water, carbon dioxide, chloride, for example?

    • Europa lander mission has plans to reduce amount of exhaust contamination on surface


  17. Test Cases: Surface scenarios/events
    • Radiation
    • Instruments protected via radiation shields/radiation vault

    • Could radiation affect comms relay between lander-orbiter or
    orbiter-Earth?

    • Will we have access to the carrier/relayer spacecraft in either
    testbed?


  18. Discussions (Collaboration Infrastructure)
    • System requirements for using the two testbeds

    • ROS (Melodic), Gazebo 9.13+, Ubuntu 18.04

    • Plan execution: YAML file + PLEXIL language vs Instruction
    Graph

    • To facilitate third-party evaluation, Dockerize the Test Harness, the
    Testbed, and the Adaptive Lander System.

    • GitFlow to facilitate collaboration during the project.

    (PLEXIL: Plan Execution Interchange Language)

  19. Physical Space Lander Testbed at JPL
    • E2M Technologies six-DOF Stewart platform representing the spacecraft lander
    • Barrett WAM seven-DOF manipulator arm mounted on the lander, with a wrist force-torque sensor (FTS) and tool changer
    • Modular instruments to be mounted on the robot arm
    • Testbed setup and major components
    • HITL (hardware-in-the-loop) simulator of the lander and manipulator
    Physical Autonomy Testbed: https://www1.grc.nasa.gov/wp-content/uploads/2020_ASCE_OWLAT_20191028.pdf

  20. Computing and Software Architecture of the Physical Lander Testbed
    • Emulation of Ocean World body dynamics within the testbed
    • Operator interface used as a stand-in for the autonomy software
    Physical Autonomy Testbed: https://www1.grc.nasa.gov/wp-content/uploads/2020_ASCE_OWLAT_20191028.pdf

  21. Important Test Cases: OWLAT Pressure-Sinkage Test and Scooping Operation
    Physical Autonomy Testbed: https://www1.grc.nasa.gov/wp-content/uploads/2020_ASCE_OWLAT_20191028.pdf

  22. Program Information
    • Program Manager: Carolyn Mercer @ NASA

    • Physical testbed contact: Hari Nayar @ NASA JPL

    • Virtual testbed contact: Mike Dalal @ NASA Ames

    • Selected Projects (out of 17 submissions):

    • Project 1: RASPBERRY SI: Resource Adaptive Software Purpose-Built for Extraordinary
    Robotic Research Yields - Science Instruments (USC)

    • PI: Pooyan Jamshidi (University of South Carolina)

    • Project 2: Robust Autonomy for Planetary Sampling

    • PI: Jonathan Bohren (Honeybee Robotics, Ltd)

    • Further Info: https://nspires.nasaprs.com/external/viewrepositorydocument/cmdocumentid=773394/solicitationId=%7B6FD283AF-7FD6-7A9F-1546-0FBFD722B6C2%7D/viewSolicitationDocument=1/AISR19%20Abstracts.pdf

  23. Discussions
    August 26th


  24. Notes
    • The AI and autonomy technologies of this project will potentially be used in Ocean Worlds missions.

    • Develop something cool, demonstrate its feasibility, evaluate it, and NASA would love to take it up.

    • Infusion of technology: we can tap into technologies outside of NASA.

    • Community of practice: collaboration formed among the teams, sharing each other's hardware and the program.

    • Synergies: worthwhile to propose a solicitation.

  25. Notes
    • Remote testing and evaluation for the physical testbed.

    • Physical presence is encouraged: stay at JPL as a resident to work closely with the JPL team.

    • Co-developing, to be able to refine and improve the functionality.

    • Schedule of work: start with the virtual testbed; move to the physical testbed after initial approval.

  26. Notes
    • The virtual testbed is still under development; changes to the virtual testbed are expected.

    • Maintain some level of compatibility: commands on the virtual side should be testable on the physical side.

    • Stay with some standards.

  27. Notes
    • Start with a simple operation first and make it more complicated.

    • Adequate to get things done.

    • Fault injection.

    • Bounds on what the system can do and what you may not expect.

    • Unknown unknowns are important.

    • Characterize the nominal behavior of the autonomy software and the unexpected things that can happen.

  28. Notes
    • Extending PLEXIL itself
    • Orienting the logistical and contractual aspects of fault injection
    • Fault injection is a planned feature; prioritize it
    • Basic fault injection model: inject faults as ROS parameters
    • Fault spaces
    • Open source collaboration: draft an open source contribution
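The "inject faults as ROS parameters" model noted above can be sketched as follows. To keep the example self-contained, a plain dict stands in for the ROS parameter server; in the real testbed these would be `rospy.set_param` / `rospy.get_param` calls against live nodes, and the fault names and scaling behavior below are hypothetical.

```python
# Stand-in for a ROS parameter server (real code: rospy.set_param/get_param).
PARAM_SERVER = {}

def set_param(name, value):
    PARAM_SERVER[name] = value

def get_param(name, default=None):
    return PARAM_SERVER.get(name, default)

def inject_fault(component, fault, **details):
    """Activate a fault by writing it under a /faults/... parameter
    namespace that components poll (namespace is illustrative)."""
    set_param(f"/faults/{component}/{fault}", {"active": True, **details})

def read_power(raw_reading):
    """A sensor read that degrades its output when its fault flag is set."""
    fault = get_param("/faults/power/low_voltage", {"active": False})
    if fault["active"]:
        return raw_reading * fault.get("scale", 0.5)
    return raw_reading

nominal = read_power(28.0)                       # no fault injected yet
inject_fault("power", "low_voltage", scale=0.7)  # test harness injects fault
degraded = read_power(28.0)                      # sensor now reads low
print(nominal, degraded)
```

Keeping faults in the parameter space lets the test harness toggle them without recompiling or restarting the autonomy stack, which fits the "fault spaces" idea on this slide.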

  29. Notes
    • The physical testbed will be ready at the end of November; prior to that, interface details and system capabilities will be shared.
    • Do our development in the simulator: low-level autonomy commands once the whole infrastructure is in place, commanding actuators and sensors.
    • Set up the simulator's low-level capabilities with the software simulators: virtual motors tested out completely on a computer, without the physical platform, giving full capability without driving the system.
    • A separate virtual testbed will expose all the capabilities of the physical testbed virtually (hardware in the loop).
    • Acceptance test: pass the tests in the simulated version first.
    • Driving the physical testbed: remote login to a computer to send commands remotely, with someone in the lab for safety.

  30. Action items
    1. Sending technical draft to testbed contacts.

    2. Setting up regular meetings with testbed contacts.


  31. Goal: Our Innovations in AI and Autonomy to be used in the Europa and other Ocean Worlds missions.

    This is a once-in-a-lifetime chance to make a difference. Thanks, NASA, for giving us this opportunity!

    We are hiring from groups under-represented in STEM: women, Black, Hispanic, and Native American candidates.

    Contact: Pooyan Jamshidi (University of South Carolina)

    https://pooyanjamshidi.github.io/
