The Pegasus Workflow Manager and the Discovery of Gravitational Waves

We have all heard so much about the wonderful discovery of Gravitational Waves – and with just cause! In today’s post, I want to give a shout-out to the Pegasus Workflow Manager, one of the crucial pieces of software used in analyzing the LIGO data. Processing these data requires complex workflows that transfer and manage large data sets and perform thousands of tasks. Among other things, the software managing these workflows must be automated and portable across distributed platforms; it must manage dependencies between jobs; and it must be highly fault tolerant – if jobs fail, they must be restarted automatically without losing data already processed. The Pegasus Workflow Manager performs these functions on behalf of LIGO.
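To make the two core requirements concrete – respecting dependencies between jobs and retrying failed jobs automatically – here is a minimal, purely illustrative sketch. This is not the Pegasus API; the function and its arguments are hypothetical, invented only to show the idea:

```python
# Illustrative sketch only -- NOT the Pegasus API. It shows dependency-aware
# execution (a job runs only after its prerequisites finish) and simple
# fault tolerance (failed jobs are retried, and completed work is kept).

def run_workflow(jobs, deps, execute, max_retries=3):
    """jobs: list of job names; deps: {job: set of prerequisite jobs};
    execute: callable(job) -> bool, True on success."""
    done = set()
    while len(done) < len(jobs):
        progressed = False
        for job in jobs:
            if job in done or not deps.get(job, set()) <= done:
                continue  # already done, or prerequisites still pending
            for attempt in range(max_retries):
                if execute(job):  # retry on failure, up to max_retries
                    done.add(job)
                    progressed = True
                    break
            else:
                raise RuntimeError(f"{job} failed after {max_retries} attempts")
        if not progressed:
            raise RuntimeError("cycle detected in dependencies")
    return done
```

A real workflow manager like Pegasus does far more (planning, data staging, monitoring), but the scheduling loop above captures why a workflow of 60,000 tasks can survive individual job failures without redoing work already completed.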

Specifically, Pegasus managed the workflow for the Compact Binary Coalescence Group, which aims to find inspiral signals from compact binaries. The figure below shows the workflow:

[Figure: the Compact Binary Coalescence analysis workflow]

Each of these workflows has (to quote from the Pegasus web site):

  • 60,000 compute tasks
  • Input Data: 5000 files (10GB total)
  • Output Data: 60,000 files (60GB total)

and using Pegasus in the production pipeline gave LIGO the following capabilities (again, quoting from the website):

  • “Run analysis workflows across sites. Analysis workflows are launched to execute on XSEDE and OSG resources, with post-processing steps running on the LIGO Data Grid.
  • Monitor and share workflows using the Pegasus Workflow Dashboard.
  • Easier debugging of their workflows.
  • Separate their workflow log directories from the execution directories. Their earlier pipeline required the logs to be on the shared filesystem of the clusters. This caused scalability issues, as the load on the NFS increased drastically when large workflows were launched.
  • Ability to re-run an analysis later without running all the sub-workflows from the start. This leverages the data reuse capabilities of Pegasus. LIGO data may need to be analyzed several times due to changes in, e.g., detector calibration or data-quality flags. Complete re-analysis of the data is a very computationally intensive task. By using the workflow reduction capabilities of Pegasus, the LSC and Virgo have been able to re-use existing data products from previous runs, when those data products are suitable.”
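The workflow-reduction idea described above can be sketched very simply: before execution, prune every job whose outputs already exist as valid data products from a previous run. The helper below is a hypothetical illustration, not Pegasus's actual reduction algorithm:

```python
# Illustrative sketch only -- not Pegasus's actual reduction code.
# A job is pruned when all of its declared outputs are already present
# in the set of data products produced by earlier runs.

def reduce_workflow(jobs, outputs, existing):
    """jobs: list of job names; outputs: {job: set of output files};
    existing: set of files already produced by earlier runs.
    Returns only the jobs that still need to run."""
    return [job for job in jobs if not outputs[job] <= existing]
```

In a real re-analysis, only the jobs affected by the changed calibration or data-quality flags (and everything downstream of them) would need to run; the rest of the workflow is cut away.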

At-scale workflows have applicability across all disciplines these days, and Pegasus has been used successfully in many of them, including astronomy; learn more at the Pegasus applications showcase page.



