TARDIS: Optimal Execution of Scientific Workflows in Apache Spark
The success of using workflows for modeling large-scale scientific applications has fostered research on the parallel execution of scientific workflows in shared-nothing clusters, in which large volumes of scientific data can be stored and processed in parallel on commodity machines.
However, most current scientific workflow management systems do not handle memory usage and data locality appropriately. Apache Spark addresses these issues by chaining activities that should be executed on a specific node, among other optimizations such as the in-memory storage of intermediate data in RDDs (Resilient Distributed Datasets).
Notwithstanding these optimizations, to take advantage of RDDs, Spark requires existing workflows to be described using its own API, which forces activities to be reimplemented in Python, Java, Scala, or R, demanding significant effort from workflow programmers.
In this work, we propose a parallel scientific workflow engine called TARDIS, whose objective is to run existing workflows inside a Spark cluster, using RDDs and smart caching, in a way that is completely transparent to the user, i.e., without requiring the workflows to be reimplemented in the Spark API. We evaluated our system through experiments and compared its performance with Swift/K. The results show that TARDIS outperforms Swift/K for parallel scientific workflow execution, with improvements of up to 138%.
Antônio Tadeu Azevedo Gomes - Laboratório Nacional de Computação Científica
Artur Ziviani - Laboratório Nacional de Computação Científica
Daniel Gaspar Gonçalves de Souza - Universidade Católica de Petrópolis
Fabio André Machado Porto - Laboratório Nacional de Computação Científica
Luiz M. R. Gadelha Jr. - Laboratório Nacional de Computação Científica