Protein-Protein Docking Calculations
6th Asian Conference, Supercomputing Frontiers Asia 2020, Singapore, February 26th, 2020
Kento Aoyama 1,2, Hiroki Watanabe 1,2, Masahito Ohue 1, Yutaka Akiyama 1
1 Department of Computer Science, School of Computing, Tokyo Institute of Technology, Japan
2 AIST-Tokyo Tech Real World Big-Data Computation Open Innovation Laboratory, National Institute of Advanced Industrial Science and Technology, Japan
• Container technology is being widely used in the fields of computational science.
• Today, many researchers run containers in various computing environments such as laptops, clouds, and supercomputers.
• Container technology is becoming essential for retaining the reproducibility and impact of scientific work.
• E.g.) Container-related sessions in SC19 @ Denver
  • [Workshop] 1st International Workshop on Containers and New Orchestration Paradigms for Isolated Environments in HPC (CANOPIE-HPC)
  • [Tutorial] Containers in HPC
  • [Tutorial] Container Computing for HPC and Scientific Workflows
  • [BoF] Containers in HPC
  … and other exhibits, posters, etc.
Introduction | Containers in HPC 3
• It is difficult to maintain the configuration of container images for high performance computing (HPC) applications that run in multiple HPC environments.
• This requires users to understand container know-how:
  • host-system hardware specifications (CPU, GPU, interconnects, …)
  • host-system software specifications (OS, libraries, binaries, …)
  • container runtimes (Docker, Singularity, Charliecloud, Shifter, Sarus, …)
  • container image formats (Dockerfile, Singularity definition, …)
  • compatibility between each of those used in HPC environments
• HPC container deployment is still NOT easy.
• These problems are a major obstacle to the further spread of container technology in HPC environments.
Introduction | Containers in HPC 4
• We proposed a custom container image configuration workflow for our HPC application, called MEGADOCK-HPCCM.
• The workflow is based on the HPC-Container-Maker framework  and gives users an easier way to build containers while accounting for the specification differences between hosts and containers across multiple HPC environments.
• We confirmed that its strong-scaling parallel performance exceeded 0.95 in several target HPC environments:
  • 1/2 of the ABCI system (512 nodes with 2,048 NVIDIA V100 GPUs)
  • 1/3 of the TSUBAME 3.0 system (180 nodes with 720 NVIDIA P100 GPUs)
Introduction | Purpose of study 5
 McMillan, S.: Making containers easier with HPC container maker. In: Proceedings of the SIGHPC Systems Professionals Workshop (HPCSYSPROS'18), Dallas, TX, USA (2018).
A. Preparation Costs for Containers
• Containers need to be prepared for each set of local libraries required to use the high-speed interconnects equipped in the target HPC system.
B. ABI Compatibility for MPI Library
• If a process in a container uses the MPI library to communicate with a process outside of the container, the Application Binary Interface (ABI) must be compatible between the host MPI library and the one in the container.
Current HPC Workflow Problems: 8
• Users must know which local libraries are necessary for using the high-speed interconnects equipped in the target HPC system. These must be installed in the containers.
  • For example, openib  or ucx  (or a similar library) needs to be installed in the container if it runs on a system with InfiniBand.
  • On the other hand, the psm2  library is required when it runs on a system with Intel Omni-Path.
• Technically, it is possible to install almost all of these libraries in one container. However, this is generally not recommended as a best practice of container image configuration.
  • Because most of the advantages of containers originate from their lightweight nature, containers should be kept as simple as possible.
A. Preparation Costs for Containers 9
 https://www.openfabrics.org/
 https://www.openucx.org/
 https://github.com/intel/opa-psm2
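The per-system library choice above can be captured in a small parameterized recipe instead of separate static image definitions. The sketch below is illustrative only (plain Python, not the actual HPCCM API); the package names and the `render_install_step` helper are assumptions for the example.

```python
# Sketch: select interconnect libraries per target system instead of
# baking every fabric's libraries into one container image.

def interconnect_packages(fabric):
    """Return the user-space libraries a container needs for a fabric."""
    packages = {
        "infiniband": ["libibverbs", "ucx"],   # e.g. InfiniBand via verbs/UCX
        "omni-path": ["opa-psm2"],             # Intel Omni-Path
        "ethernet": [],                        # plain TCP needs nothing extra
    }
    try:
        return packages[fabric]
    except KeyError:
        raise ValueError("unknown fabric: " + fabric)

def render_install_step(fabric):
    """Emit a Dockerfile RUN line for the chosen fabric (empty if none)."""
    pkgs = interconnect_packages(fabric)
    if not pkgs:
        return ""
    return "RUN apt-get update && apt-get install -y " + " ".join(pkgs)
```

One recipe parameter ("fabric") then yields a minimal image per system, in line with the lightweight-container best practice above.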
• If a process in a container uses the MPI library to communicate with a process outside of the container, then the Application Binary Interface (ABI) must be compatible between the host MPI library and the one in the container.
• For instance, it is necessary to install the exact same (major and minor) version of the library when old OpenMPI versions are used.
• The ABI compatibility problem can be avoided when using recent releases of the MPI libraries:
  • E.g.) MPICH v3.1 or newer, or IntelMPI v5.0 or newer
  • E.g.) OpenMPI v3.0 or newer
• However, we have to know which versions of the MPI libraries are supported on the host HPC systems and in the container images.
• ABI compatibility is still a troublesome cost for users deploying containerized MPI applications to HPC systems.
B. ABI Compatibility for MPI Library 10
 Kurtzer, G.M., Sochat, V., Bauer, M.W.: Singularity: scientific containers for mobility of compute. PLoS One 12(5), pp. 1-20, (2017).
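The compatibility rules above can be encoded so a deployment script rejects a host/container MPI mismatch before launch. This is a hedged sketch: the version thresholds follow the slide text (MPICH ≥ 3.1 / IntelMPI ≥ 5.0 share an ABI; OpenMPI ≥ 3.0), and the function name and rule set are illustrative, not exhaustive.

```python
# Sketch: early ABI-compatibility check for host vs. container MPI.

def parse_version(v):
    return tuple(int(x) for x in v.split("."))

def mpi_abi_compatible(host, container):
    """host/container are (family, version) pairs, e.g. ("openmpi", "3.1.4")."""
    h_fam, h_ver = host
    c_fam, c_ver = container
    # MPICH >= 3.1 and IntelMPI >= 5.0 share a common ABI (per the slides).
    mpich_abi = {("mpich", (3, 1)), ("intelmpi", (5, 0))}
    def in_mpich_abi(fam, ver):
        return any(fam == f and parse_version(ver)[:2] >= v
                   for f, v in mpich_abi)
    if in_mpich_abi(h_fam, h_ver) and in_mpich_abi(c_fam, c_ver):
        return True
    # OpenMPI: v3.0+ releases are treated as compatible; older releases
    # require an exact major.minor match.
    if h_fam == c_fam == "openmpi":
        if parse_version(h_ver)[:2] >= (3, 0) and parse_version(c_ver)[:2] >= (3, 0):
            return True
        return parse_version(h_ver)[:2] == parse_version(c_ver)[:2]
    # Fallback: same family and exact major.minor version.
    return h_fam == c_fam and parse_version(h_ver)[:2] == parse_version(c_ver)[:2]
```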
• MEGADOCK is a PPI prediction application designed for large-scale computing environments.
• Implemented in C++/CUDA; the entire source code is available on GitHub: https://github.com/akiyamalab/MEGADOCK
• The internal process is mainly based on Fast Fourier Transform (FFT) calculations for grid-based protein-protein docking, using FFT libraries (e.g. FFTW, cuFFT).
• Hybrid parallelization using MPI/GPU/OpenMP.
• In the next major release, MEGADOCK 5.0 (under development), the task-distribution strategy will be further optimized for the latest multi-GPU environments.
MEGADOCK: A PPI prediction application for HPC environment 11
 Ohue, M., Shimoda, T., Suzuki, S., Matsuzaki, Y., Ishida, T., Akiyama, Y.: MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers. Bioinformatics 30(22), 3281-3283 (2014).
• Each docking task is distributed by the master to workers under the control of an original master-worker framework implemented in C++ using the MPI library.
• Each calculation of a docking pair is independently assigned to an OpenMP thread with CUDA streams.
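The distribution scheme above can be illustrated with a small mock. The real framework is C++ with MPI ranks; this sketch substitutes threads and a shared queue purely to show the pattern (every docking pair is an independent task pulled by idle workers), and all names here are hypothetical.

```python
# Illustrative master-worker sketch: a queue of independent docking-pair
# tasks consumed by a pool of workers (threads stand in for MPI ranks).
import queue
import threading

def run_master_worker(docking_pairs, n_workers, dock_fn):
    tasks = queue.Queue()
    results = []
    lock = threading.Lock()
    for pair in docking_pairs:          # "master" enqueues every pair
        tasks.put(pair)
    def worker():
        while True:
            try:
                pair = tasks.get_nowait()   # pull the next independent task
            except queue.Empty:
                return                      # no tasks left: worker exits
            r = dock_fn(pair)               # one docking calculation per task
            with lock:
                results.append(r)
    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```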
• Containers improve the usability of the application as well as its portability across multiple environments.
• Currently, Docker  images and their container specifications in the Dockerfile format for GPU-enabled environments are provided to users in the GitHub repository.
• We previously obtained well-scaling performance on a cloud environment with Microsoft Azure .
• However, the container configuration difficulties presented in the previous sections must be solved when running the MEGADOCK application with Singularity containers on HPC systems.
• We proposed an HPC container deployment workflow which supports a wider variety of computing environments and solves the deployment problems on HPC systems.
MEGADOCK meets HPC Containers 13
 https://www.docker.com/
 Aoyama, K., Yamamoto, Y., Ohue, M., Akiyama, Y.: Performance evaluation of MEGADOCK protein-protein interaction prediction system implemented with distributed containers on a cloud computing environment. In: Proceedings of the 25th International Conference on Parallel and Distributed Processing Techniques and Application (PDPTA'19), pp. 175-181, Las Vegas, NV (2019).
• HPC Container Maker (HPCCM) is a framework to generate container specification files for HPC environments.
  • https://github.com/NVIDIA/hpc-container-maker
• HPCCM is a meta-container-recipe framework that provides the following useful features:
  • Generation of both Dockerfile and Singularity definition files from a high-level Python recipe.
    • A Python-based recipe can branch, validate user arguments, etc.
    • The same recipe can generate multiple container specifications.
  • A library of HPC “building blocks” that transparently provides simple descriptions for installing specific components commonly used in the HPC community.
• By using the HPCCM framework, the cost of container recipe preparation can be reduced to implementing one Python recipe and setting parameters of the container specifications for the HPC environments.
• We use the HPCCM framework as the base of the proposed container deployment workflow for the target HPC environments.
HPC Container Maker (HPCCM) 15
 McMillan, S.: Making containers easier with HPC container maker. In: Proceedings of the SIGHPC Systems Professionals Workshop (HPCSYSPROS'18), Dallas, TX, USA (2018).
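The "one recipe, two formats" idea can be shown in miniature. This is NOT the real HPCCM API (its building blocks are far richer); it is a self-contained sketch of the concept that one parameterized Python function can emit either a Dockerfile or a Singularity definition.

```python
# Simplified sketch of the HPCCM concept: one parameterized recipe,
# two container specification formats.

def recipe(base_image, packages, fmt="docker"):
    install = "apt-get update && apt-get install -y " + " ".join(packages)
    if fmt == "docker":
        # Dockerfile output
        return "\n".join(["FROM " + base_image, "RUN " + install])
    elif fmt == "singularity":
        # Singularity definition file output from the same inputs
        return "\n".join(["Bootstrap: docker", "From: " + base_image,
                          "%post", "    " + install])
    raise ValueError("unknown format: " + fmt)
```

With real HPCCM, the equivalent recipe additionally gets validated building blocks (compilers, MPI, FFTW, etc.) instead of raw package lists.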
• The workflow supports configuring the container specifications for different environments by setting parameters, and covers both the Docker and Singularity specification formats.
  • reduction of management costs for container specification files
  • helpful for continuous integration (CI) of container images
B) Avoiding Library Compatibility Problems
• Explicit and easy specification of library versions makes it easy for users to solve library compatibility problems between hosts and containers.
  • This is especially true when the exact version of the MPI library must match between the host HPC system and the inside of the container due to the ABI compatibility issue.
Advantages of proposed workflow with HPCCM 18
• The dataset contains 230 files of PDB (protein 3-D coordinates) format data labeled as unbound.
Experiment 1.
• We computed all-to-all docking for the protein-protein pairs in the dataset.
  • The number of total pairs was 52,900 (230 x 230).
Experiment 2.
• To validate large-scale application performance, we amplified the set of docking pairs to 25 times the original dataset, creating a virtual large-scale benchmark dataset.
  • We computed 1,322,500 (≈ 1.3 M) protein-protein docking calculations in total.
Dataset for Million-scale “Interactome” PPI predictions 22
 Vreven, T., Moal, I.H., Vangone, A., Pierce, B.G., Kastritis, P.L., Torchala, M., Chaleil, R., Jimenez-Garcia, B., Bates, P.A., Fernandez-Recio, J., Bonvin, A.M.J.J., Weng, Z.: Updates to the integrated protein-protein interaction benchmarks: docking benchmark version 5 and affinity benchmark version 2. Journal of Molecular Biology 427(19), pp. 3031-3041, (2015).
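The dataset sizes quoted above follow directly from the all-to-all construction:

```python
# Benchmark sizes: all-to-all pairs, then a 25x amplified virtual dataset.
n_proteins = 230
pairs = n_proteins * n_proteins      # every protein docked against every protein
amplified = pairs * 25               # virtual large-scale benchmark
assert pairs == 52_900
assert amplified == 1_322_500        # ≈ 1.3 M docking calculations
```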
• Files are stored in a virtual distributed shared file system, called BeeGFS On Demand (BeeOND) , which is temporarily constructed on the set of non-volatile memory express (NVMe) storages in the computing nodes.
• Output files are generated on each local NVMe storage when each docking calculation for a protein pair is finished.
• After all of the calculations are completed, the output files are compressed into a .tar archive and moved to the global storage.
Measurements
• The measured execution time is obtained from the task distribution framework in MEGADOCK. It covers the duration from the start of task processing to the end of all tasks.
• Each data point in the plots is the median of three executions of the same calculation.
Computational Details 23
 https://www.beegfs.io/
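The archive-and-move post-processing step described above can be sketched as follows. Paths, file suffixes, and the function name are illustrative assumptions, not the actual MEGADOCK tooling.

```python
# Sketch: gather per-pair output files from node-local storage into one
# .tar archive, then move the archive to global storage.
import os
import shutil
import tarfile

def archive_outputs(local_dir, global_dir, name="results.tar"):
    archive = os.path.join(local_dir, name)
    with tarfile.open(archive, "w") as tar:      # plain tar, as in the slides
        for fname in sorted(os.listdir(local_dir)):
            if fname.endswith(".out"):           # hypothetical per-pair suffix
                tar.add(os.path.join(local_dir, fname), arcname=fname)
    os.makedirs(global_dir, exist_ok=True)
    # move from fast node-local NVMe to the shared global file system
    return shutil.move(archive, os.path.join(global_dir, name))
```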
• The computational resources were provided through the “Grand Challenge” programs, which are open recruitment programs for researchers, coordinated by AIST and Tokyo Tech, respectively:
  • 1/2 of the ABCI system (512 nodes with 2,048 GPUs)
  • 1/3 of the TSUBAME 3.0 system (180 nodes with 720 GPUs)
Computational Resources 24
• The docking calculations on ABCI were on average 1.65 times faster than on TSUBAME 3.0 across data points.
• The strong-scaling parallel performance was 0.964 (ABCI) and 0.948 (TSUBAME 3.0).
• There were no significant differences between the two environments in terms of scalability.
• It is difficult to directly compare the two performances; however, ABCI clearly showed better performance as a whole.
  • The execution time was 1,657 s when using 1/2 of the ABCI system (512 nodes, 2,048 V100 GPUs).
  • It was 7,682 s when using 1/3 of the TSUBAME 3.0 system (180 nodes, 720 P100 GPUs).
• The strong-scaling* parallel performance was 0.964 (ABCI) and 0.985 (TSUBAME 3.0).
*Based on the minimum and maximum sets of computing nodes in each environment.
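The strong-scaling figures quoted above are conventionally computed as the measured speedup between the smallest and largest node counts divided by the ideal speedup. The sketch below uses made-up illustrative timings, not the paper's measurements.

```python
# Strong-scaling efficiency between the minimum and maximum node counts:
#   efficiency = (t_min / t_max) / (n_max / n_min)

def strong_scaling_efficiency(n_min, t_min, n_max, t_max):
    speedup = t_min / t_max      # measured speedup going from n_min to n_max
    ideal = n_max / n_min        # ideal (linear) speedup
    return speedup / ideal

# Illustrative values only: 64 -> 512 nodes, near-linear scaling.
eff = strong_scaling_efficiency(64, 12000.0, 512, 1550.0)
assert 0.9 < eff < 1.0
```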
• Previously, MEGADOCK took about half a day to run a million protein-protein docking pairs on the entire TSUBAME 2.5 system .
• The latest MEGADOCK, however, can complete over a million protein-protein docking pairs within 30 min on the latest HPC environment (half of the ABCI system).
• Both environments achieved over 0.95 in strong-scaling in the large-scale experiment. This indicates that our container application workflow can achieve good scalability on the actual target HPC environments.
 Ohue, M., Shimoda, T., Suzuki, S., Matsuzaki, Y., Ishida, T., Akiyama, Y.: MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers. Bioinformatics 30(22), 3281-3283 (2014).
• The two target environments have similar architectural concepts but different specifications for both hardware and software.
  • This is sufficient as a proof-of-concept of our workflow design at this starting point.
• However, the proposed workflow does not yet cover other gaps, such as:
  • binary optimizations for CPU/GPU architectural differences,
  • MPI communication optimization for different network architectures,
  • and other performance optimization approaches.
• These features should be included in future implementations to enhance the applicability of the proposed workflow.
Discussion 30
• We worked on a large-scale all-to-all protein-protein docking application called MEGADOCK, integrating a container deployment workflow over multiple HPC systems with different specifications.
• The proposed workflow gives users an easier way to configure containers for different systems and covers both the Docker and Singularity container formats.
• This helped us successfully avoid container difficulties on HPC systems, such as host-dependent libraries and the ABI compatibility of MPI libraries.
• We measured the parallel performance of the container execution on both the ABCI and TSUBAME 3.0 systems, using a small benchmark dataset and a virtual large-scale dataset containing over a million protein-protein pairs.
• The results showed that the strong-scaling parallel performance exceeded 0.95 both on half of the ABCI system (512 nodes with 2,048 GPUs) and on one-third of the TSUBAME 3.0 system (180 nodes with 720 GPUs).
• This demonstrated that the latest HPC environments can complete over a million protein-protein docking calculations within half an hour.
• We believe these performance results contribute to accelerating large-scale exhaustive “interactome” analysis for understanding the principles of biological systems.
Conclusion 31
• The proposed workflow and manual instructions are available at the following GitHub repository:
  • https://github.com/akiyamalab/megadock_hpccm
• MEGADOCK
  • https://github.com/akiyamalab/megadock
• HPCCM
  • https://github.com/NVIDIA/hpc-container-maker
References 32
• ABCI:
  • Grand Challenge Program, National Institute of Advanced Industrial Science and Technology (AIST)
• TSUBAME 3.0:
  • TSUBAME Grand Challenge Program, Tokyo Institute of Technology
This work was supported by the following projects:
• KAKENHI (Grant Nos. 17H01814 and 18K18149), Japan Society for the Promotion of Science (JSPS)
• Program for Building Regional Innovation Ecosystems “Program to Industrialize an Innovative Middle Molecule Drug Discovery Flow through Fusion of Computational Drug Design and Chemical Synthesis Technology”, Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT)
• Research Complex Program “Wellbeing Research Campus: Creating new values through technological and social innovation”, Japan Science and Technology Agency (JST)
• AIST-Tokyo Tech Real World Big-Data Computation Open Innovation Laboratory (RWBC-OIL)
Acknowledgements 33