Maxime Beauchemin wrote an influential article, Functional Data Engineering — a modern paradigm for batch data processing. It is a significant step toward bringing Software Engineering concepts into Data Engineering. The paradigm builds on the infrastructure advances that came after Hadoop.
Cloud object storage like S3 has made storage a commodity.
The separate Storage & Compute, so both can scale independently. Yes, human life is too short for scaling storage and computing simultaneously.
Functional data engineering follows two key principles:
Reproducibility - Every task in the data pipeline should be deterministic and idempotent.
Re-Computability - Business logic changes over time, and bugs happen. The data pipeline should be able to recompute the desired state (see the sketch below).
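To make the two principles concrete, here is a minimal sketch of a deterministic, idempotent partition task in Python. The function name, directory layout, and column names are hypothetical, and it assumes pandas with a parquet engine installed; the point is only that the task reads an immutable input partition for one execution date and overwrites its output partition.

```python
from datetime import date
from pathlib import Path

import pandas as pd  # assumes pandas plus a parquet engine (e.g. pyarrow)


def build_daily_revenue(ds: date, source_dir: Path, target_dir: Path) -> Path:
    """Recompute the revenue partition for one execution date.

    Deterministic: the output depends only on the input partition for
    `ds`, never on wall-clock time or mutable state.
    Idempotent: the task overwrites its target partition, so re-running
    it (for a retry or a backfill) converges to the same result.
    """
    # Read only the immutable source partition for this date.
    source_path = source_dir / f"ds={ds.isoformat()}" / "orders.parquet"
    orders = pd.read_parquet(source_path)

    # Pure transformation: aggregate revenue per customer for the day.
    daily = (
        orders.groupby("customer_id", as_index=False)["amount"]
        .sum()
        .rename(columns={"amount": "revenue"})
    )
    daily["ds"] = ds.isoformat()

    # Overwrite the whole partition instead of appending (INSERT OVERWRITE
    # semantics): the partition is the unit of work and of re-computation.
    target_path = target_dir / f"ds={ds.isoformat()}"
    target_path.mkdir(parents=True, exist_ok=True)
    daily.to_parquet(target_path / "daily_revenue.parquet", index=False)
    return target_path
```

Because each date's partition is fully overwritten, re-running the task for a range of dates after a logic change or a bug fix is exactly the re-computation the second principle asks for.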