You have heard that functional programming is all about (the absence of) mutations. That is, you write your functions without modifying any state. Since your code never directly modifies data, it is much easier for the language and runtime to distribute your data and parallelize your computations automagically for you. Coming from an imperative programming background as a Fortran/C/C++ programmer, you wonder whether this is total BS and, if it is not, how you can harness this power for your own scientific computing needs.
Your confusion will end with this talk. Using code for a particle-mesh simulation as an example, I will show you how to apply functional programming principles to scientific computing. You will learn how to use immutable data structures like Spark RDDs to generate and organize data, and how to express computations using the usual purely functional operations plus the almighty higher-order join operation. You will learn how the language and runtime help you, rather than work against you, in distributing data and parallelizing computations. You will see with your own eyes that it is no BS. You too will believe in functional programming.
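As a taste of the pattern the talk describes, here is a minimal sketch in plain Python rather than actual Spark code: immutable records, pure per-element transformations, and a join keyed on mesh cell. Spark RDDs expose the same map/reduceByKey/join operations at cluster scale; all names here (`particles`, `cell_of`, `density`, and the mesh spacing) are illustrative, not taken from any real particle-mesh code.

```python
CELL = 1.0  # mesh spacing (assumed for this toy example)

def cell_of(x):
    """Pure function: map a particle position to its mesh cell index."""
    return int(x // CELL)

# Immutable input: (position, mass) tuples; nothing below modifies them.
particles = ((0.2, 1.0), (0.7, 2.0), (1.4, 1.0), (2.8, 3.0))

# "map": key each particle by its cell, like rdd.map(lambda p: (cell, p)).
keyed = tuple((cell_of(x), (x, m)) for x, m in particles)

# "reduceByKey": total mass per cell, yielding a mesh-density field.
cells = {c for c, _ in keyed}
density = {c: sum(m for cc, (_, m) in keyed if cc == c) for c in cells}

# "join": pair each particle with the density of its own cell, producing a
# brand-new collection instead of updating the particles in place.
joined = tuple((x, m, density[c]) for c, (x, m) in keyed)

print(density)  # mass accumulated on each mesh cell
print(joined)   # each particle alongside its local field value
```

Because every step builds a new collection from the old one, a runtime like Spark is free to partition `keyed` across machines and run the map, reduce, and join phases in parallel without any locking.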
The absence of mutations gives rise to a new way of doing scientific computing. It is elegant and productive. It is parallelized and Cloud-ready from the very first moment you start developing it. It is different; it is a New Species.