Slide 1

Developing Open Source in Service to National Security
GitHub Universe
Ian Lee, Lawrence Livermore National Laboratory
September 14, 2016

LLNL-PRES-702741. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344. Lawrence Livermore National Security, LLC.

Slide 2

Slide 3

Slide 4

[Image: map of the U.S. national laboratories]
https://upload.wikimedia.org/wikipedia/commons/a/a8/U.S._National_labs_map.jpg

Slide 5

[Image: Enterprise warp core]
http://www.ex-astris-scientia.org/articles/new_enterprise/enterprise-warpcore.jpg

Slide 6

[Image: grain silos]
https://pixabay.com/get/e833b10d2af4083ed1534705fb0938c9bd22ffd41db612439df7c17ba0/silos-1602209_1920.jpg

Slide 7

[Timeline: LLNL computing from the 1960s to the 2010s. Machines: CDC 3600, CDC 7600, CRAY 1, ASCI Blue-Pacific, BlueGene, and petascale/exascale systems. Milestones: pioneering simulations of particle tracking, ozone mixing models, helping the medical community plan radiation treatment, unprecedented dislocation dynamics simulations, breakthrough visualizations of mixing fluids, dynamics in three dimensions, global climate modeling, and detailed predictions of ecosystems.]

Slide 8

§ 3 of the 16 #1 systems on Top500.org over the last 20 years:
  — ASCI White: Nov 2000 – Nov 2001
  — BlueGene/L: Nov 2004 – Nov 2007
  — Sequoia: June 2012

https://www.top500.org/resources/top-systems/

Slide 9

TOSS – Tri-Lab Operating System Software

§ Built on Red Hat Enterprise Linux
  — Not an HPC distribution
§ Adds LLNL-developed additions and patches to support HPC
  — Low-latency interconnect: InfiniBand
  — Parallel file system: Lustre
  — Resource manager: SLURM
§ Works closely with open communities

[Diagram: the TOSS software stack on a commodity hardware platform. TOSS components: kernel, InfiniBand, Message Passing Interface; resource manager (SLURM); Lustre file systems; compiler & development tools; user environment. Components not in TOSS: batch scheduler (MOAB), HPSS, Hopper.]

TOSS is a software stack for HPC – large, interconnected clusters!

Slide 10

SLURM – http://slurm.schedmd.com

§ Began as a simple resource manager
  — Now scalable to 1.6M+ cores (Sequoia)
§ Launches and manages parallel jobs
  — Large, parallel jobs, often MPI
§ Queues and schedules jobs
  — Much more work than resources

http://www.ibm.com/developerworks/library/l-slurm-utility/figure3.gif
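
A minimal sketch of how such a job is typically submitted (the script name job.sh, the executable my_mpi_app, and the sizes are hypothetical; the #SBATCH directives and the sbatch, squeue, and srun commands are standard SLURM):

#!/bin/bash
#SBATCH --job-name=demo           # hypothetical job name
#SBATCH --nodes=4                 # nodes to allocate
#SBATCH --ntasks-per-node=16      # MPI ranks per node
#SBATCH --time=01:00:00           # wall-clock limit
srun ./my_mpi_app                 # srun launches the ranks across the allocation

$ sbatch job.sh                   # queue the job
$ squeue -u $USER                 # watch it wait for resources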

Slide 11

Flux – http://flux-framework.github.io

§ Family of projects used to build site-customized resource management systems
§ flux-core
  — Implements the communication layer and lowest-level services and interfaces
§ flux-sched
  — Consists of an engine that handles all the functionality common to scheduling
§ capacitor
  — A bulk execution manager using flux-core; handles running and monitoring thousands of jobs
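
Flux's command-line tools have changed considerably since 2016, so the following is a rough sketch only, using commands from recent flux-core releases (my_app is a hypothetical executable):

$ flux start --test-size=4     # bootstrap a local test instance with 4 brokers
$ flux submit -N2 ./my_app     # queue a job on 2 nodes of the instance
$ flux jobs                    # list jobs managed by this instance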

Slide 12

ZFS on Linux – http://zfsonlinux.org

§ ZFS is an open source filesystem and volume manager designed to address the limitations of existing storage solutions
§ 2011: Available for Linux
§ Ten LLNL filesystems, totaling ~100 PB
§ Ships in Ubuntu 16.04
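
The pooled-storage model is visible in the day-to-day commands; a minimal sketch (the pool name tank, the disk list, and the dataset name are hypothetical):

$ zpool create tank raidz2 sda sdb sdc sdd   # one pool, double parity across disks
$ zfs create tank/projects                   # filesystems are cheap datasets within the pool
$ zfs set compression=lz4 tank/projects      # properties are set per dataset
$ zfs snapshot tank/projects@nightly         # copy-on-write snapshots are effectively instant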

Slide 13

Spack – https://github.com/LLNL/spack

§ Handles the combinatorial explosion of ABI-incompatible packages
§ All versions coexist; binaries work regardless of the user's environment
§ Familiar syntax, reminiscent of brew, yum, etc.

$ spack install mpileaks                              # unconstrained
$ spack install mpileaks@3.3                          # @ custom version
$ spack install mpileaks@3.3 %gcc@4.7.3               # % custom compiler
$ spack install mpileaks@3.3 %gcc@4.7.3 +threads      # +/- build option
$ spack install mpileaks@3.3 os=SuSE11                # os=
$ spack install mpileaks@3.3 os=CNL10                 # os=
$ spack install mpileaks@3.3 os=CNL10 target=haswell  # target=
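
Each spec above installs into its own prefix, which is what lets all those variants coexist; listing them back out (a sketch using the real spack find subcommand):

$ spack find mpileaks       # one entry per installed spec
$ spack find -l mpileaks    # -l adds the hash that distinguishes each install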

Slide 14

ESGF – Earth System Grid Federation

§ Manages the first-ever decentralized database for handling climate science data
§ Multiple petabytes of data at dozens of federated sites worldwide
§ International collaboration for the software that powers most global climate change research

https://github.com/ESGF
http://esgf.llnl.gov

Slide 15

VisIt – http://visit.llnl.gov

§ Originally developed to visualize and analyze the results of terascale simulations
§ Interactive, scalable visualization, animation, and analysis tool
§ Powerful, easy-to-use GUI
§ Distributed and parallel architecture allows handling extremely large data sets interactively
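
Beyond the GUI, VisIt can be driven from scripts; a sketch of the two common ways to launch it (output.silo and plot.py are hypothetical file names; -o, -cli, -nowin, and -s are standard VisIt launch flags):

$ visit -o output.silo           # open the GUI with a database preloaded
$ visit -cli -nowin -s plot.py   # run a Python analysis script headless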

Slide 16

http://computation.llnl.gov/casc

Slide 17

GitHub.com/LLNL – Repositories

Slide 18

GitHub.com/LLNL – Languages

Slide 19

GitHub.com/LLNL – Stargazers

Slide 20

[email protected]

Slide 21

Federal Source Code Policy

§ "Federal Source Code Policy: Achieving Efficiency, Transparency, and Innovation through Reusable and Open Source Software"
  — "Agencies shall make custom-developed code available for Government-wide reuse and make their code inventories discoverable at https://www.code.gov ('Code.gov') [...]"
  — "[...] establishes a pilot program that requires agencies, when commissioning new custom software, to release at least 20 percent of new custom-developed code as Open Source Software (OSS) [...]"

Slide 22

US Government Organizations on GitHub
https://government.github.com/community/

Slide 23

Thank You!

@IanLee1521 // @LLNL_OpenSource
[email protected]