In recent years, workflows have emerged as a key technology enabling large-scale computations and service management on distributed resources. Workflows allow scientists to design complex analyses composed of individual application components or services. Often, these components and services are designed, developed, and tested collaboratively. The size of the data and the complexity of the analyses frequently require substantial shared resources, such as clusters and storage systems, to store the data sets and execute the workflows. The process of workflow design and execution in a distributed environment can be very complex, involving multiple stages: textual or graphical specification, mapping of the high-level workflow descriptions onto the available resources, and monitoring and debugging of the subsequent execution. Further, since computations and data access operations are performed on shared resources, there is increased interest in the fair allocation and management of those resources at the workflow level.
The list of accepted papers can be found here: