Welcome to our second Ray meetup, where we focus on Ray’s native libraries for scaling machine learning workloads.
We'll discuss Ray Train, a production-ready distributed training library for deep learning workloads. We'll also present the TorchX and Ray integration: through it, PyTorch developers can submit PyTorch-based scripts and workloads to a Ray cluster using TorchX's SDK and CLI via its new Ray scheduler.