Apache Spark is a powerful distributed data processing framework whose API lets you create connectors to read from and write to any type of data source. In this session we'll look at the challenges we faced while developing the official Neo4j Connector for Apache Spark, which enables easy bidirectional communication between these two tools, and we'll also learn how to combine Neo4j's potential with the distributed processing capabilities of Apache Spark.