Robust workflows for science and engineering

Abramson, David, Bethwaite, Blair, Enticott, Colin, Garic, Slavisa, Peachey, Tom, Michailova, Anushka, Amirriazi, Saleh and Chitters, Ramya (2009). Robust workflows for science and engineering. In: Ioan Raicu, Ian Foster and Yong Zhao (Eds.), Proceedings of the 2nd ACM Workshop on Many-Task Computing on Grids and Supercomputers (MTAGS '09), Portland, OR, United States, 16 November 2009. doi:10.1145/1646468.1646469


Author Abramson, David
Bethwaite, Blair
Enticott, Colin
Garic, Slavisa
Peachey, Tom
Michailova, Anushka
Amirriazi, Saleh
Chitters, Ramya
Title of paper Robust workflows for science and engineering
Conference name 2nd ACM Workshop on Many-Task Computing on Grids and Supercomputers 2009, MTAGS '09
Conference location Portland, OR, United States
Conference dates 16 November 2009
Proceedings title Proceedings of the 2nd ACM Workshop on Many-Task Computing on Grids and Supercomputers
Place of Publication New York, NY, United States
Publisher ACM Press
Publication Year 2009
Year available 2009
Sub-type Fully published paper
DOI 10.1145/1646468.1646469
ISBN 9781605587141
Editor Ioan Raicu
Ian Foster
Yong Zhao
Total pages 9
Collection year 2010
Language eng
Abstract/Summary Scientific workflow tools allow users to specify complex computational experiments and provide a good framework for robust science and engineering. Workflows consist of pipelines of tasks that can be used to explore the behaviour of some system, involving computations that are performed either locally or on remote computers. Robust scientific methods require the exploration of the parameter space of a system (some of which can be run in parallel on distributed resources), and may involve complete state-space exploration, experimental design or numerical optimization techniques. Whilst workflow engines provide an overall framework, they have not been developed with these concepts in mind and, in general, do not provide the necessary components to implement robust workflows. In this paper we discuss Nimrod/K, a set of add-in components and a new run-time machine for a general workflow engine, Kepler. Nimrod/K provides an execution architecture based on the tagged-dataflow concepts developed in the 1980s for highly parallel machines. This is embodied in a new Kepler 'Director' that orchestrates the execution on clusters, Grids and Clouds using many-task computing. Nimrod/K also provides a set of 'Actors' that facilitate the various modes of parameter exploration discussed above. We demonstrate the power of Nimrod/K to solve real problems in cardiac science.
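The parameter-space exploration the abstract describes — treating each parameter combination as an independent task that can fan out over distributed resources — can be sketched as a full-factorial sweep. This is an illustrative sketch only: the `simulate` function and parameter names are hypothetical stand-ins, a local thread pool stands in for cluster/Grid/Cloud resources, and none of this is the Nimrod/K or Kepler API.

```python
# Minimal sketch of a full-factorial parameter sweep run as many
# independent tasks. In a real many-task system the points would be
# dispatched to remote resources; a thread pool stands in here.
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def simulate(alpha: float, beta: float) -> float:
    # Hypothetical stand-in for an expensive simulation of "some system".
    return alpha ** 2 + beta

def sweep(alphas, betas):
    # Enumerate the complete parameter space; each point is independent,
    # so the whole sweep can execute in parallel.
    points = list(product(alphas, betas))
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda p: simulate(*p), points))
    return dict(zip(points, results))

if __name__ == "__main__":
    table = sweep([0.0, 1.0, 2.0], [10.0, 20.0])
    # A complete state-space exploration can then feed optimization,
    # e.g. picking the point that minimises the response.
    best = min(table, key=table.get)
    print(best, table[best])
```

The same task table could equally drive an experimental-design or numerical-optimization loop, which is the role the Nimrod/K 'Actors' play in the paper.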
Subjects 1708 Hardware and Architecture
1712 Software
Q-Index Code E1
Q-Index Status Provisional Code
Institutional Status Non-UQ

Created: Tue, 26 Nov 2013, 11:05:26 EST by Ms Diana Cassidy on behalf of the School of Information Technology and Electrical Engineering