
Recent News in the CCL

Virtual Wind Tunnel in IEEE CiSE

Some of our recent work on a system for collaborative engineering design was featured in the September issue of IEEE Computing in Science and Engineering, which focused on "Open Simulation Laboratories".  The article, Open Sourcing the Design of Civil Infrastructure, grew out of a collaboration between faculty in the computer science and civil engineering departments.

CCL grad student Peter Sempolinski led the design and implementation of an online service enabling collaborative design and evaluation of structures, known as the "Virtual Wind Tunnel".  This service enables structural designs to be uploaded and shared, then evaluated for performance via the OpenFOAM CFD package.  The entire process is similar to that of collaborative code development, where the source (i.e. a building design) is kept in a versioned repository, automated builds (i.e. building simulation) are performed in a consistent and reproducible way, and test results (i.e. simulation metrics) are used to evaluate the initial design.  Designs and results can be shared, annotated, and re-used, making it easy for one engineer to build upon the work of another.

The prototype system has been used in a variety of contexts, most notably to demonstrate the feasibility of crowdsourcing design and evaluation work via Amazon Mechanical Turk.

Wed, 09 Sep 2015 13:27:00 +0000

Three Papers at IEEE Cluster in Chicago

This week, at the IEEE Cluster Computing conference in Chicago, Ben Tovar will present some of our work on automated application monitoring:

(PDF) Gideon Juve, Benjamin Tovar, Rafael Ferreira da Silva, Dariusz Krol, Douglas Thain, Ewa Deelman, William Allcock, and Miron Livny, Practical Resource Monitoring for Robust High Throughput Computing, Workshop on Monitoring and Analysis for High Performance Computing Systems Plus Applications at IEEE Cluster Computing, September, 2015.

Matthias Wolf will present our work on the Lobster large scale data management system:

(PDF) Anna Woodard, Matthias Wolf, Charles Mueller, Nil Valls, Ben Tovar, Patrick Donnelly, Peter Ivie, Kenyi Hurtado Anampa, Paul Brenner, Douglas Thain, Kevin Lannon and Michael Hildreth,
Scaling Data Intensive Physics Applications to 10k Cores on Non-Dedicated Clusters with Lobster, IEEE Conference on Cluster Computing, September, 2015.

Olivia Choudhury will present some work on modelling concurrent applications, trading off thread-level parallelism against task-level parallelism at scale:

(PDF) Olivia Choudhury, Dinesh Rajan, Nicholas Hazekamp, Sandra Gesing, Douglas Thain, and Scott Emrich,
Balancing Thread-level and Task-level Parallelism for Data-Intensive Workloads on Clusters and Clouds,
IEEE Conference on Cluster Computing, September, 2015.

Mon, 07 Sep 2015 19:07:00 +0000

CCTools 5.2.0 released

The Cooperative Computing Lab is pleased to announce the release of version 5.2.0 of the Cooperative Computing Tools including Parrot, Chirp, Makeflow, WorkQueue, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:

This minor release addresses the following issues from version 5.1.0:

  • [Chirp]     Fix mkdir python binding. (Ben Tovar)
  • [Chirp]     Adds 'ln' for file links. (Nate Kremer-Herman)
  • [Chirp/Confuga] Kill a job even on failure. (Patrick Donnelly)
  • [Debug]     Fix log rotation with multiple processes. (Patrick Donnelly)
  • [Makeflow]  Better support for Torque and SLURM for XSEDE. (Nick Hazekamp)
  • [Parrot]    Fix bug where cvmfs alien cache access was sequential. (Ben Tovar)
  • [Parrot]    Allow compilation with iRODS 4.1. (Ben Tovar)
  • [WorkQueue] Improvements to statistics when using foremen. (Ben Tovar)
  • [WorkQueue] Fix bug related to exporting environment variables. (Ben Tovar)
  • [WorkQueue] Task sandboxes were not being deleted at workers. (Ben Tovar)

Thanks goes to our contributors:

Patrick Donnelly
Nathaniel Kremer-Herman
Nicholas Hazekamp
Ben Tovar

Please send any feedback to the CCTools discussion mailing list:

Enjoy!

Wed, 19 Aug 2015 12:05:00 +0000

Recent CCL Grads Take Faculty Positions

Peter Bui is returning to Notre Dame this fall, where he will be a member of the teaching faculty and will be teaching undergraduate core classes like data structures, discrete math, and more.  Welcome back, Prof. Bui!

Hoang Bui completed a postdoc position at Rutgers University with Prof. Manish Parashar, and is starting as an assistant professor at Western Illinois University.  Congratulations, Prof. Bui!

Tue, 18 Aug 2015 15:16:00 +0000

CMS Analysis on 10K Cores Using Lobster

We have been working closely with the CMS physics group at Notre Dame for the last year to build Lobster, a data analysis system that runs on O(10K) cores to process data produced by the CMS experiment at the LHC.  At peak, Lobster at ND delivers capacity equal to that of a dedicated CMS Tier-2 facility!

Existing data analysis systems for CMS generally require that the user be running in a cluster that has been set up just so for the purpose: exactly the right operating system, certain software installed, various user identities present, and so on. This is fine for the various clusters dedicated to the CMS experiment, but it leaves unused the enormous amount of computing power that can be found at university computing centers (like the ND CRC), national computing resources (like XSEDE or the Open Science Grid), and public cloud systems.

Lobster is designed to harness clusters that are not dedicated to CMS.  This requires solving two problems:
  1. The required software and data are not available on every node.  Instead, Lobster must bring them in at runtime and create the necessary execution system on the fly.
  2. A given machine may only be available for a short interval of time before it is taken away and assigned to another user, so Lobster must be efficient at getting things set up, and handy at dealing with disconnections and failures.
To do this, we build upon a variety of technologies for distributed computing.  Lobster uses Work Queue to dispatch tasks to thousands of machines, Parrot with CVMFS to deliver the complex software stack from CERN, XRootD to deliver the LHC data, and Chirp and Hadoop to manage the output data.
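
To make the dispatch side concrete, here is a minimal sketch of a Work Queue master written with the Work Queue Python bindings. The port, project name, file names, and the parrot_run-wrapped command are hypothetical stand-ins, not Lobster's actual code; the real master is considerably more elaborate.

# Minimal sketch of a Work Queue master (hypothetical names throughout).
# Each task command is wrapped in parrot_run so that the worker can fetch
# software on demand, in the spirit of Lobster's design.
from work_queue import WorkQueue, Task, WORK_QUEUE_INPUT, WORK_QUEUE_OUTPUT

q = WorkQueue(port=9123)
q.specify_name("lobster-demo")   # workers attach with: work_queue_worker -N lobster-demo

for i in range(100):
    cmd = "./parrot_run sh analyze.sh %d > out.%d" % (i, i)
    t = Task(cmd)
    t.specify_file("parrot_run", "parrot_run", WORK_QUEUE_INPUT, cache=True)
    t.specify_file("analyze.sh", "analyze.sh", WORK_QUEUE_INPUT, cache=True)
    t.specify_file("out.%d" % i, "out.%d" % i, WORK_QUEUE_OUTPUT, cache=False)
    q.submit(t)

while not q.empty():
    t = q.wait(60)      # returns a completed task, or None on timeout
    if t:
        print("task %d exited with status %d" % (t.id, t.return_status))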

Lobster runs effectively on O(10K) cores so far, depending on the CPU/IO ratio of the jobs.  These two graphs show the behavior of a production run on top of HTCondor at Notre Dame hitting up to 10K cores over the course of a 48-hour run.  The top graph shows the number of tasks running simultaneously, while the bottom shows the number of tasks completed or failed in each 10-minute interval.  Note that about two thirds of the way through, there is a big hiccup, due to an external network outage.  Lobster accepts the failures and keeps on going.

Lobster has been a team effort between Physics, Computer Science, and the Center for Research Computing: Anna Woodard and Matthias Wolf have taken the lead in developing the core software; Ben Tovar, Patrick Donnelly, and Peter Ivie have improved and debugged Work Queue, Parrot, and Chirp along the way; Charles Mueller, Nil Valls, Kenyi Anampa, and Paul Brenner have all worked to deploy the system at scale in production; Kevin Lannon, Michael Hildreth, and Douglas Thain provide the project leadership.

Anna Woodard, Matthias Wolf, Charles Nicholas Mueller, Ben Tovar, Patrick Donnelly, Kenyi Hurtado Anampa, Paul Brenner, Kevin Lannon, and Michael Hildreth, Exploiting Volatile Opportunistic Computing Resources with Lobster, Computing in High Energy Physics, January, 2015.

Anna Woodard, Matthias Wolf, Charles Mueller, Nil Valls, Ben Tovar, Patrick Donnelly, Peter Ivie, Kenyi Hurtado Anampa, Paul Brenner, Douglas Thain, Kevin Lannon and Michael Hildreth, Scaling Data Intensive Physics Applications to 10k Cores on Non-Dedicated Clusters with Lobster, IEEE Conference on Cluster Computing, September, 2015.

Fri, 14 Aug 2015 15:19:00 +0000

Haipeng Cai Defends Ph.D.

Haipeng Cai successfully defended his dissertation, "Cost-effective Dependence Analyses for Reliable Software Evolution", which studied methods for efficiently determining the scope of a complex software system that is affected by a given change.

Haipeng will be taking a postdoctoral research position at Virginia Tech under the supervision of Prof. Barbara Ryder.

Congratulations to Dr. Haipeng Cai!

Thu, 16 Jul 2015 17:42:00 +0000

CCTools 5.1.0 released

The Cooperative Computing Lab is pleased to announce the release of version 5.1.0 of the Cooperative Computing Tools including Parrot, Chirp, Makeflow, WorkQueue, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:

This minor release adds a couple of small features and fixes the following issues from version 5.0.0:

  • [Prune]     Fix installation issue. (Haiyan Meng)
  • [Umbrella]  Fix installation issue. (Haiyan Meng)
  • [WorkQueue] Worker's --wall-time to specify maximum period of time a worker may be active. (Andrey Tovchigrechko, Ben Tovar)
  • [WorkQueue] work_queue_status's --M to show the status of masters by name. (Names may be regular expressions). (Ben Tovar)
  • [WorkQueue] Fix missing priority python binding.
  • [WorkQueue] Fix incorrect reset of workers when connecting to different masters. (Ben Tovar)
  • [WorkQueue] Fix segmentation fault when cloning tasks. (Ben Tovar)
  • [WQ_Maker]  Cleanup, and small fixes. (Nick Hazekamp)

Thanks goes to our contributors:

Nicholas Hazekamp
Haiyan Meng
Ben Tovar
Andrey Tovchigrechko

Please send any feedback to the CCTools discussion mailing list:

Thu, 16 Jul 2015 16:42:00 +0000

CCTools 5.0.0 released

The Cooperative Computing Lab is pleased to announce the release of version 5.0.0 of the Cooperative Computing Tools including Parrot, Chirp, Makeflow, WorkQueue, SAND, All-Pairs, Weaver, and other software.
The software may be downloaded here: CCTools download
This is a major release that incorporates the preview of three new tools:
  • [Confuga] An active storage cluster file system built on top of Chirp. It is used as a collaborative distributed file system and as a platform for execution of scientific workflows with full data locality for all job dependencies. (Patrick Donnelly)
  • [Umbrella] A tool for specifying and materializing comprehensive execution environments. Once a task is specified, Umbrella determines the minimum mechanism necessary to run it, such as direct execution, a system container, a local virtual machine, or submission to a cloud or grid environment. (Haiyan Meng)
  • [Prune] A system for executing and precisely preserving scientific workflows. Collaborators can verify research results and easily extend them at a granularity determined by the user. (Peter Ivie)
This release adds several features and several bug fixes. Among them:
  • [AllPairs] Support for symmetric matrices. (Haiyan Meng)
  • [Chirp] Perl and python bindings. (Ben Tovar)
  • [Chirp] Improvements to the job interface. (Patrick Donnelly)
  • [Makeflow] Improved Graphviz's dot output. (Nate Kremer-Herman)
  • [Makeflow] Support for command wrappers. (Douglas Thain)
  • [Parrot] Several bug fixes for CVMFS-based applications. (Jakob Blomer, Patrick Donnelly)
  • [Parrot] Valgrind support. (Patrick Donnelly)
  • [Resource Monitor] Library for polling resources. (Ben Tovar)
  • [WorkQueue] Signal handling bug fixes. (Andrey Tovchigrechko)
  • [WorkQueue] Log visualizer. (Ryan Boccabella)
  • [WorkQueue] work_queue_worker support for Docker. (Charles Zheng)
  • [WorkQueue] Improvements to perl bindings. (Ben Tovar)
  • [WorkQueue] Support to blacklist workers. (Nick Hazekamp)
Incompatibility warnings: Workers from 5.0 do not work with masters pre 5.0.
Thanks goes to the contributors for many features and bug fixes: Matthew Astley, Jakob Blomer, Ryan Boccabella, Peter Bui, Patrick Donnelly, Nathaniel Kremer-Herman, Victor Hawley, Nicholas Hazekamp, Peter Ivie, Kangkang Li, Haiyan Meng, Douglas Thain, Ben Tovar, Andrey Tovchigrechko, and Charles Zheng.
Please send any feedback to the CCTools discussion mailing list.

Tue, 07 Jul 2015 17:21:00 +0000

Preservation Framework for Computational Reproducibility at ICCS 2015

Haiyan Meng presented our work on Preservation Framework for Computational Reproducibility at the International Conference on Computational Science (ICCS) in Reykjavik, Iceland. This is collaborative work between the University of Notre Dame and the University of Chicago as part of the DASPOS project, in which both universities participate.

The preservation framework proposed in this paper includes three parts: 
  • First, how to use light-weight application-level virtualization techniques to create a reduced package which only includes all the necessary dependencies; 
  • Second, how to organize the data storage archive to preserve these packages; 
  • Third, how to distribute applications through standard software delivery mechanisms like Docker and deploy applications through flexible deployment mechanisms such as Parrot, PTU, Docker, and chroot.


Wed, 01 Jul 2015 16:04:00 +0000

Umbrella and Containers at VTDC 2015

Two CCL students presented their latest work at the Workshop on Virtualization Technologies in Distributed Computing (VTDC), held at the Symposium on High Performance Distributed Computing (HPDC) in Portland, Oregon.

Haiyan Meng presented her work on Umbrella, a system for specifying and materializing execution environments in a portable and reproducible way.  Umbrella accepts a declarative specification for an application, and then determines the minimum technology needed to deploy it.   The application will be run natively if the local execution environment is compatible, but if not, Umbrella will deploy a container, a virtual machine, or make use of a public cloud if necessary.

(PDF) Haiyan Meng and Douglas Thain,
Umbrella: A Portable Environment Creator for Reproducible Computing on Clusters, Clouds, and Grids,
Workshop on Virtualization Technologies in Distributed Computing (VTDC) at HPDC, June, 2015. DOI: 10.1145/2755979.2755982

Charles Zheng presented his work on integrating Docker containers into the Makeflow workflow engine and the Work Queue runtime system, each with different tradeoffs in performance and isolation.  These capabilities will be included in the upcoming 5.0 release of CCTools.

(PDF) Charles Zheng and Douglas Thain,
Integrating Containers into Workflows: A Case Study Using Makeflow, Work Queue, and Docker,
Workshop on Virtualization Technologies in Distributed Computing (VTDC), June, 2015. DOI: 10.1145/2755979.2755984
Fri, 19 Jun 2015 18:21:00 +0000

Lobster Talk at Condor Week 2015

Ben Tovar gave an overview of Lobster in the talk High-Energy Physics workloads on 10k non-dedicated opportunistic cores with Lobster. The talk was part of Condor Week 2015, at the University of Wisconsin-Madison.

Lobster is a system for deploying data intensive high-throughput science applications on non-dedicated resources. It is built on top of Work Queue, Parrot, and Chirp, which are part of CCTools.

Wed, 27 May 2015 18:47:00 +0000

Parrot and Lobster at CHEP 2015

CCL students gave two poster presentations at the annual Computing in High Energy Physics (CHEP) conference in Japan.  Both represent our close collaboration with the CMS HEP group at Notre Dame:

Haiyan Meng presented A Case Study in Preserving a High Energy Physics Application.  This poster describes the complexity of preserving a non-trivial application, then shows how Parrot packaging technology can be used to capture a program's dependencies and re-execute it using a variety of technologies.

Anna Woodard and Matthias Wolf won the best poster presentation award for Exploiting Volatile Opportunistic Computing Resources with Lobster, which was rewarded with a lightning plenary talk.  Lobster is an analysis workload management system which has been able to harness 10-20K opportunistic cores at a time for large workloads at Notre Dame, making the facility comparable in size to the dedicated Tier-2 facilities of the WLCG!

Tue, 19 May 2015 14:57:00 +0000

Peter Sempolinski Defends Ph.D.

Dr. Peter Sempolinski successfully defended his PhD thesis, titled "An Extensible System for Facilitating Collaboration for Structural Engineering Applications".

While at Notre Dame, Peter created a Virtual Wind Tunnel which enabled the crowdsourcing of structural design and evaluation by combining online building design with Google SketchUp and CFD simulation with OpenFOAM.  The system was used in a variety of contexts, ranging from virtual engineering classes to managing work crowdsourced via Mechanical Turk.  His work was recently accepted for publication in IEEE CiSE and PLOS ONE.

Congratulations to Dr. Sempolinski!

Mon, 04 May 2015 12:59:00 +0000

CMS Analysis on 10K Cores with Lobster

The CMS physics group at Notre Dame has created Lobster, a data analysis system that runs on O(10K) cores to process data produced by the CMS experiment at the LHC.  Lobster uses Work Queue to dispatch tasks to thousands of machines, Parrot with CVMFS to deliver the complex software stack from CERN, XRootD to deliver the LHC data, and Chirp and Hadoop to manage the output data.  By using these technologies, Lobster is able to harness arbitrary machines and bring along the CMS computing environment wherever it goes.  At peak, Lobster at ND delivers capacity equal to that of a dedicated CMS Tier-2 facility!  (read more here)

Fri, 01 May 2015 16:41:00 +0000

Dinesh Rajan Defends Ph.D.

Dr. Dinesh Rajan successfully defended his PhD thesis, titled "Principles for the Design and Operation of Elastic Scientific Applications on Distributed Systems".  He is currently an engineer at Amazon Web Services.

While at Notre Dame, he made significant contributions to the development of Work Queue and worked closely with scientists in biology and molecular dynamics to build highly scalable elastic applications such as the Accelerated Weighted Ensemble.  His most recent journal paper in IEEE TCC describes how to design self-tuning cloud applications.

Congratulations to Dr. Rajan!

Fri, 10 Apr 2015 19:30:00 +0000

Confuga: Scalable Data Intensive Computing for POSIX Workflows

Patrick Donnelly will present his work on the Confuga distributed filesystem at CCGrid 2015 in China:

Patrick Donnelly, Nicholas Hazekamp, Douglas Thain, Confuga: Scalable Data Intensive Computing for POSIX Workflows, IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May, 2015.
Confuga is a new active storage cluster file system designed for executing regular POSIX workflows. Users may store extremely large datasets on Confuga in a regular file system layout, with whole files replicated across the cluster. You may then operate on your dataset using regular POSIX applications, with defined inputs and outputs.

Confuga handles the details of placing jobs near data and minimizing network load so that the cluster's disk and network resources are used efficiently. Each job executes with all of its input file dependencies local to its execution, within a sandbox.

For those familiar with CCTools, Confuga operates as a cluster of Chirp servers with a single Chirp server operating as the head node. You may use the Chirp library, Chirp CLI toolset, FUSE, or even Parrot to upload and manipulate the data on Confuga.

For running a workflow on Confuga, we encourage you to use Makeflow. Makeflow will submit the jobs to Confuga using the Chirp job protocol and take care of ordering the jobs based on their dependencies.

Fri, 27 Mar 2015 20:46:00 +0000

Makeflow Visualization with Cytoscape

We have created a new Makeflow visualization module which exports a workflow into an XGMML file compatible with Cytoscape.  Cytoscape is a powerful network graphing application with support for custom styles, layouts, annotations, and more.  While this program is better known for visualizing molecular networks in biology, it can be used for any purpose, and we believe it is a powerful tool for visualizing Makeflow workflows.  Our visualization module was designed for and tested on Cytoscape 3.2.  The following picture is a Cytoscape visualization of the example makeflow script provided in the User's Manual.

To generate a Cytoscape graph from your makeflow script, simply run:

makeflow_viz -D cytoscape > workflow.xgmml
workflow.xgmml can then be opened in Cytoscape through File -> Import -> Network -> File.  We have created a clean style designed specifically for visualizing Makeflow tasks, named style.xml, which is generated in the current working directory when you run makeflow_viz.  To apply the style in Cytoscape, select File -> Import -> Style, and select the style.xml file.  Next, right-click the imported network and select “Apply Style…”.  Select “makeflow” from the dropdown menu and our style will be applied.  This will add the proper colors, edges, arrows, and shapes for processes and files.

Cytoscape also has a built-in layout function which can be used to automatically rearrange nodes according to their hierarchy.  To access this, select Layout -> Settings, and a new window will pop up.  Simply select “Hierarchical Layout” from the dropdown menu, change the settings for that layout to your liking, and select “Execute Layout.”  There is a caveat with this function: with larger Makeflow workflows, the auto-layout function can take a long time to complete.  This is because Cytoscape is designed for all types of graphs and does not appear to implement algorithms specifically for DAGs that would take advantage of faster time complexities.  We have tested the auto-layout function with the following test cases:

(Table: layout times for three test workflows of increasing node and edge counts were roughly 20-30 seconds, 2.5 hours, and 23 hours, respectively.)

After the layout completes, the graph should be displayed cleanly, and you can customize the display further to your liking with the various options available in Cytoscape.  For more information, visit the Cytoscape website.

Tue, 24 Mar 2015 20:28:00 +0000

Creating Better Force Fields on Distributed GPUs with Work Queue

ForceBalance is an open source software tool for creating accurate force fields for molecular mechanics simulation using flexible combinations of reference data from experimental measurements and theoretical calculations. These force fields are used to simulate the dynamics and physical properties of molecules in chemistry and biochemistry.

The Work Queue framework gives ForceBalance the ability to distribute computationally intensive components of a force field optimization calculation in a highly flexible way. For example, each optimization cycle launched by ForceBalance may require running 50 molecular dynamics simulations, each of which may take 10-20 hours on a high end NVIDIA GPU. While GPU computing resources are available, it is rare to find 50 available GPU nodes on any single supercomputer or HPC cluster. With Work Queue, it is possible to distribute the simulations across several HPC clusters, including the Certainty HPC cluster at Stanford, the Keeneland GPU cluster managed by Georgia Tech and Oak Ridge National Laboratories, and the Stampede supercomputer managed by the University of Texas. This makes it possible to run many simulations in parallel and complete the high level optimization in weeks instead of years.
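
As a rough illustration of this pattern (not ForceBalance's actual code), a master written with the Work Queue Python bindings could fan out one optimization cycle's simulations like this; the script, input files, port, and project name are hypothetical:

# Illustrative sketch only: fan out 50 MD simulations through Work Queue.
from work_queue import WorkQueue, Task, WORK_QUEUE_INPUT, WORK_QUEUE_OUTPUT

q = WorkQueue(port=9155)
q.specify_name("forcebalance-demo")   # workers on any cluster attach by this name

for i in range(50):
    t = Task("sh run_md.sh %d > energies.%d" % (i, i))
    t.specify_file("run_md.sh", "run_md.sh", WORK_QUEUE_INPUT, cache=True)
    t.specify_file("params.in", "params.in", WORK_QUEUE_INPUT, cache=True)
    t.specify_file("energies.%d" % i, "energies.%d" % i, WORK_QUEUE_OUTPUT, cache=False)
    q.submit(t)

while not q.empty():
    q.wait(60)   # block until the next simulation finishes (or 60 seconds pass)

Because workers connect to the master over the network, a single pool can draw GPU nodes from several clusters at once, which is what makes the cross-site fan-out described above possible.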

- Lee-Ping Wang, Stanford University

Wed, 10 Dec 2014 15:15:00 +0000

CCTools 4.3 released

The Cooperative Computing Lab is pleased to announce the release of version 4.3.0 of the Cooperative Computing Tools, including Parrot, Chirp, Makeflow, WorkQueue, Weaver, DeltaDB, SAND, All-Pairs, and other software. This release has some important changes:
  • Peter Bui's Weaver is included. Weaver is a high-level interface to Makeflow which allows workflows to be described in Python. For more information see cctools/doc/man/weaver.1 and cctools/weaver/examples in the distribution.
  • This is also the first release to include DeltaDB, written by Peter Ivie and Douglas Thain. DeltaDB implements a model for time-varying schema-free data and underlies the query engine for the CCTools catalog server.
  • Backwards compatibility of master and workers pre-4.3 is broken. Workers from 4.3 cannot connect to masters pre-4.3, and masters from 4.3 will not accept connection from workers pre-4.3. The API did not change, thus unless you want to take advantage of new features, you should not need to modify your code.
  • The interface to work_queue_pool has been simplified, and all options have to be specified at the command line. Please see cctools/doc/man/work_queue_pool.1 for more information.
  • Undefined environment variables used in Makeflow are no longer allowed by the parser.
  • Binaries for 32bit architectures are not being distributed as part of this release. Please let us know if you need them.

Other highlights

  • [WorkQueue] Perl object oriented bindings have been added. See perldoc Work_Queue::Queue [B. Tovar]
  • [WorkQueue] A priority per task can now be specified. [D. Thain, B. Tovar]
  • [WorkQueue] --single-shot option added to workers to exit quickly after the master disconnects [D. Thain].
  • [WorkQueue] Hierarchy statistics when using foremen are now available. [B. Tovar, M. Wolf]
  • [WorkQueue] work_queue_pool code cleanup. [D. Thain, B. Tovar]
  • [Makeflow] New lexer and parser with cleaner semantics and error reporting. [B. Tovar]
  • [Parrot] Bug fix that allows parrot's temp-dir to be on GPFS. [P. Donnelly]
  • [Parrot] Several fixes to better support executables with threads. [P. Donnelly]
  • [Parrot] Update to use the newer ptrace API. [P. Donnelly]
  • [Parrot] Several updates to parrot_package_run. See cctools/doc/man/parrot_package_run. [H. Meng]
  • [Parrot] iRODS 4.x support. [D. Thain]

You can download the software here: cctools download

Thanks goes to the contributors and testers for this release: Peter Bui, Patrick Donnelly, Nick Hazekamp, Peter Ivie, Kangkang Li, Haiyan Meng, Peter Sempolinski, Douglas Thain, Ben Tovar, Lee-Ping Wang, Matthias Wolf, Anna Woodard, and Charles Zheng

Enjoy!

Thu, 04 Dec 2014 19:37:00 +0000

Work Queue Powers Nanoreactor Simulations

Lee-Ping Wang at Stanford University recently published a paper in Nature Chemistry describing his work in fundamental molecular dynamics.

The paper demonstrates the "nanoreactor" technique, in which simple molecules are simulated over a long time scale to observe their reaction paths into more complex molecules.  For example, the picture below shows 39 acetylene molecules merging into a variety of hydrocarbons over the course of 500 ps of simulated time.  This technique can be used to computationally predict reaction networks in historical or inaccessible environments, such as the early Earth or the upper atmosphere.

To compute the final reaction network for this figure, the team used the Work Queue framework to harness over 300K node-hours of CPU time on the Blue Waters supercomputer at NCSA.

Mon, 17 Nov 2014 20:18:00 +0000

Open Sourcing Civil Engineering with a Virtual Wind Tunnel

In addition to the CCL tools themselves, members of the CCL lab often collaborate with other research groups to help them solve their scientific problems, using collaborative computing. Often, such collaborative projects drive the development and debugging of our tools.

An uploaded design in the Virtual Wind Tunnel

One such project is a Virtual Wind Tunnel, which was created in collaboration with the Notre Dame Civil Engineering Department, as part of a larger project to explore collaboration in civil design. On the surface, this is a fairly simple idea. A user uploads a building shape for analysis to a web portal. Then, the user can run wind flow simulations upon horizontal cross sections of the building. Once complete, the results of these simulations can be viewed and downloaded.

Making all of this work, however, requires a large number of interlocking components. For now, I would just like to describe how the CCL tools play a role in this system. When simulations are to be run, one very simple way to deliver simulation tasks to available computing resources is to run a Work Queue worker on those machines. The front-end of the system runs a Work Queue master, which queues up tasks.

Viewing Results of a Simulation
This has several advantages, but the most important is that we can be flexible about the resources which we use at any given time, even using computing resources from multiple sources at the same time. For example, we have a small private cloud which we use for experimental purposes. We also have access to an on-campus SGE grid, but must share it with many other users. Our current approach is to set up a handful of VMs on the private cloud, which run workers. If demand for simulations is high enough, we request more workers from the SGE grid.

By using Work Queue as a means of distributing tasks, we can be more flexible about the backend upon which those tasks are run. This allows us to tailor our resource usage to our actual needs and to adjust it when appropriate.

Mon, 01 Sep 2014 18:21:00 +0000

DeltaDB - A Scalable Database Design for Time-Varying Schema-Free Data

DeltaDB is a log-structured database and query model designed for time-varying and schema-free data. The following video gives a high-level overview of DeltaDB and describes how the model scales using MapReduce.

This database design is implemented within CCTools in two parts. Part 1 (data storage) has been available for over a year and is called the catalog server. Part 2 (data analysis) has recently been implemented and is not yet in a release, but is available in the following commit:

The data model is designed to handle schema-free status reports from various services. And while the reports can be schema-free, most of the fields will normally remain the same between subsequent reports from the same instance of a service.

The first status report is saved in its entirety, and then subsequent reports are saved as changes (or "deltas") on the original report. Snapshots of the status of all services and instances are stored on a daily basis. This allows a query over a given time frame to jump more quickly to the start of the time frame, rather than having to start at the very beginning of the life of the catalog server.
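
The delta idea itself is simple. The following Python sketch is illustrative only, with made-up field names, and is not DeltaDB's actual implementation; it shows a first report stored whole, later reports stored as field-level changes, and the deltas replayed to reconstruct the current state:

# Illustrative sketch of the delta idea (hypothetical field names).
def apply_delta(state, delta):
    new_state = dict(state)      # copy the previous report
    new_state.update(delta)      # overwrite only the fields that changed
    return new_state

first_report = {"type": "wq_master", "port": 9123, "tasks_running": 0}
deltas = [
    {"tasks_running": 250},                 # only changed fields are stored
    {"tasks_running": 900, "load": 0.75},   # schema-free: new fields may appear
]

state = first_report
for d in deltas:
    state = apply_delta(state, d)

print(state)   # {'type': 'wq_master', 'port': 9123, 'tasks_running': 900, 'load': 0.75}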

A query is performed by applying a series of operators to the data. For a distributed system, spatial distribution means that the data is partitioned such that all reports from a given instance always end up on the same node. In this situation, all but the last of the operators can be performed in the map stage of the MapReduce model. This allows for better scalability because less work has to be performed by a single node in the reduce stage.

Much more detail is provided in a paper which was published at IEEE BigData 2014, and is available at the following URL:

For further inquiries, please email

Mon, 18 Aug 2014 18:21:00 +0000

Packaging Applications with Parrot 4.2.0

CCTools 4.2.0 includes a new feature in Parrot that allows you to automatically observe all of the files used by a given application, and then collect them up into a self-contained package.  The package can then be moved to another machine -- even a different variant of Linux -- and then run correctly with all of its dependencies present. The created package does not depend upon Parrot and can be re-run in a variety of ways.
This article explains how to generate a self-contained package and then share it so that others can verify and repeat your applications. The whole process involves three steps: running the original application, creating the self-contained package, and then running the package itself.

Figure 1: Packaging Procedure
Step 1: Run the original program

Run your program under parrot_run and record the filename list and environment variables by using --name-list and --env-list parameters.

parrot_run --name-list namelist --env-list envlist /bin/bash
After executing this command, you can run your program inside the shell started by parrot_run.  At the end of step 1, one file named namelist containing all the accessed file names and one file named envlist containing the environment variables will be generated.  After everything is done, simply exit the shell.

Step 2: Generate a self-contained package

Use parrot_package_create to generate a package based on the namelist and envlist generated in step 1.

parrot_package_create --name-list namelist --env-path envlist --package-path /tmp/package
This command causes all of the files given in the name list to be copied into the package directory /tmp/package.  You may customize the contents of the package by editing the namelist or the package directory by hand.

Step 3: Repeat the program using the package

The newly created package is simply a complete filesystem tree that can be moved to any convenient location.  It can be re-run by any method that treats the package as a self-contained root filesystem.  This can be done by using Parrot again, by setting up a chroot environment, by setting up a Linux container, or by creating a virtual machine.

To run the package using Parrot, do this:

parrot_package_run --package-path /tmp/package /bin/bash 

To run the package using chroot, do this:

chroot_package_run --package-path /tmp/package /bin/bash

In both cases, you will be dropped into a shell in the preserved environment, where all the files used by the original command will be present.  You will definitely be able to run the original command -- whether you can run other programs depends upon the quantity of data preserved.

For more information, see these man pages:

Fri, 01 Aug 2014 19:18:00 +0000

CCTools 4.2.0 released

We are pleased to announce the release of version 4.2.0 of the Cooperative Computing Tools including Parrot, Chirp, Makeflow, WorkQueue, SAND, All-Pairs, and other software.
The software may be downloaded here: Download CCTools 4.2.0
This release is mostly a bug fix release, but introduces changes to the Work Queue protocol. Thus, workers from 4.2 do not work with masters pre 4.2.
Among the bug fixes and added capabilities are:
  • [General] Support for systemd log journal. (Patrick Donnelly)
  • [WorkQueue] Several bug fixes (Douglas Thain, Dinesh Rajan, Ben Tovar)
  • [WorkQueue] Improvements to resource accounting. (Ben Tovar)
  • [WorkQueue] work_queue_graph_log, a script to plot Work Queue's log. (Ben Tovar)
  • [WorkQueue] Autosize option for workers to fill Condor slots. (Douglas Thain)
  • [WorkQueue] Added several example applications in apps/ (Dinesh Rajan)
  • [Chirp] Several bug fixes. (Patrick Donnelly)
  • [Parrot] Package creation of accessed files for execution repeatability. (Haiyan Meng)
  • [Parrot] Correct mmap handling. (Patrick Donnelly)
  • [Parrot] Fix linking to iRODS. (Patrick Donnelly)
  • [Parrot] Option to disable CVMFS alien cache. (Ben Tovar)
  • [Parrot] Bug fixes targeting CVMFS. (Ben Tovar)
Thanks goes to the contributors for many features and bug fixes:
  • Jakob Blomer
  • Dan Bradley
  • Peter Bui
  • Patrick Donnelly
  • Nicholas Hazekamp
  • Peter Ivie
  • Haiyan Meng
  • Dinesh Rajan
  • Casey Robinson
  • Peter Sempolinski
  • Douglas Thain
  • Ben Tovar
  • Matthias Wolf
Please send any feedback to the CCTools discussion mailing list.

Thu, 31 Jul 2014 12:33:00 +0000

DeltaDB at IEEE BigData 2014

Peter Ivie will be presenting his work on the DeltaDB database model for time-varying schema-free data at the IEEE International Congress on Big Data in Anchorage. The DeltaDB concept is what underlies the query engine for the cctools catalog server.

Tue, 03 Jun 2014 16:50:00 +0000