Recent News in the CCL
Haiyan Meng presented A Case Study in Preserving a High Energy Physics Application. This poster describes the complexity of preserving a non-trivial application, then shows how Parrot packaging technology can be used to capture a program's dependencies and re-execute it using a variety of technologies.
Anna Woodard and Matthias Wolf won the best poster presentation award for Exploiting Volatile Opportunistic Computing Resources with Lobster, which also earned a lightning plenary talk. Lobster is an analysis workload management system which has been able to harness 10-20K opportunistic cores at a time for large workloads at Notre Dame, making the facility comparable in size to the dedicated Tier-2 facilities of the WLCG!
Tue, 19 May 2015 14:57:00 +0000
Dr. Peter Sempolinski successfully defended his PhD thesis titled "An Extensible System for Facilitating Collaboration for Structural Engineering Applications".
While at Notre Dame, Peter created a Virtual Wind Tunnel which enabled the crowdsourcing of structural design and evaluation by combining online building design with Google SketchUp and CFD simulation with OpenFOAM. The system was used in a variety of contexts, ranging from virtual engineering classes to managing work crowdsourced via Mechanical Turk. His work was recently accepted for publication in IEEE CiSE and PLOS ONE.
Congratulations to Dr. Sempolinski!
Mon, 04 May 2015 12:59:00 +0000
Dr. Dinesh Rajan successfully defended his PhD thesis titled "Principles for the Design and Operation of Elastic Scientific Applications on Distributed Systems". He is currently an engineer at Amazon Web Services.
While at Notre Dame, he made significant contributions to the development of Work Queue and worked closely with scientists in biology and molecular dynamics to build highly scalable elastic applications such as the Accelerated Weighted Ensemble. His most recent journal paper in IEEE TCC describes how to design self-tuning cloud applications.
Congratulations to Dr. Rajan!
Fri, 10 Apr 2015 19:30:00 +0000
CCGrid 2015 in China:
Patrick Donnelly, Nicholas Hazekamp, and Douglas Thain, Confuga: Scalable Data Intensive Computing for POSIX Workflows, IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May 2015. Confuga is a new active storage cluster file system designed for executing regular POSIX workflows. You may store extremely large datasets on Confuga in a regular file system layout, with whole files replicated across the cluster, and then operate on your dataset using regular POSIX applications with defined inputs and outputs.
Confuga handles the details of placing jobs near data and minimizing network load so that the cluster's disk and network resources are used efficiently. Each job executes with all of its input file dependencies local to its execution, within a sandbox.
For those familiar with CCTools, Confuga operates as a cluster of Chirp servers with a single Chirp server operating as the head node. You may use the Chirp library, Chirp CLI toolset, FUSE, or even Parrot to upload and manipulate the data on Confuga.
To run a workflow on Confuga, we encourage you to use Makeflow. Makeflow will submit the jobs to Confuga using the Chirp job protocol and take care of ordering the jobs based on their dependencies, as in the sketch below.
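As an illustration, a workflow destined for Confuga looks like any other Makeflow: each rule names its input and output files, which is exactly the information Confuga uses to place jobs near their data. The file names and commands here are made up for illustration:

# hypothetical two-step workflow: an analysis step followed by a summary step
result.dat: input.dat analyze.sh
	./analyze.sh input.dat > result.dat

summary.txt: result.dat summarize.sh
	./summarize.sh result.dat > summary.txt

The workflow is then pointed at the Confuga head node when Makeflow is invoked. The host name below is a placeholder, and the exact options may vary between CCTools releases, so consult the Confuga and Makeflow manuals:

makeflow -T chirp --working-dir chirp://confuga-head.example.edu/workdir example.makeflow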
Fri, 27 Mar 2015 20:46:00 +0000
We have created a new Makeflow visualization module which exports a workflow into an XGMML file compatible with Cytoscape. Cytoscape is a powerful network graphing application with support for custom styles, layouts, annotations, and more. While this program is best known for visualizing molecular networks in biology, it can be used for any purpose, and we believe it is a powerful tool for visualizing Makeflow workflows. Our visualization module was designed for and tested on Cytoscape 3.2. The following picture is a Cytoscape visualization of the example makeflow script provided in the User's Manual (http://ccl.cse.nd.edu/software/manuals/makeflow.html):
To generate a Cytoscape graph from your makeflow script, simply run:
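makeflow_viz -D cytoscape example.makeflow > workflow.xgmml

(Here example.makeflow stands in for your own workflow file; check makeflow_viz --help for the exact spelling of the display-type argument in your release.)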
workflow.xgmml can then be opened in Cytoscape through File -> Import -> Network -> File. We have created a clean style specifically for visualizing Makeflow tasks, named style.xml, which is generated in the current working directory when you run makeflow_viz. To apply the style in Cytoscape, select File -> Import -> Style, and select the style.xml file. Next, right-click the imported network and select “Apply Style…”. Select “makeflow” from the dropdown menu and our style will be applied. This will add the proper colors, edges, arrows, and shapes for processes and files.
Cytoscape also has a built-in layout function which can be used to automatically rearrange nodes according to their hierarchy. To access this, select Layout -> Settings, and a new window will pop up. Simply select “Hierarchical Layout” from the dropdown menu, change the settings for that layout to your liking, and select “Execute Layout.” There is a caveat with this function: on larger Makeflow workflows, the auto-layout can take a long time to complete. This is because Cytoscape is designed for general graphs and does not appear to implement algorithms specific to DAGs that could take advantage of their structure. We have tested the auto-layout function with the following test cases:
Tue, 24 Mar 2015 20:28:00 +0000
ForceBalance is an open source software tool for creating accurate force fields for molecular mechanics simulation using flexible combinations of reference data from experimental measurements and theoretical calculations. These force fields are used to simulate the dynamics and physical properties of molecules in chemistry and biochemistry.
The Work Queue framework gives ForceBalance the ability to distribute computationally intensive components of a force field optimization calculation in a highly flexible way. For example, each optimization cycle launched by ForceBalance may require running 50 molecular dynamics simulations, each of which may take 10-20 hours on a high end NVIDIA GPU. While GPU computing resources are available, it is rare to find 50 available GPU nodes on any single supercomputer or HPC cluster. With Work Queue, it is possible to distribute the simulations across several HPC clusters, including the Certainty HPC cluster at Stanford, the Keeneland GPU cluster managed by Georgia Tech and Oak Ridge National Laboratories, and the Stampede supercomputer managed by the University of Texas. This makes it possible to run many simulations in parallel and complete the high level optimization in weeks instead of years.
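For readers new to Work Queue, the master side of such an application follows a simple pattern: create a queue, submit one task per simulation, and wait for the results, while workers started on any cluster connect back and pull tasks. The following is a minimal sketch using the Work Queue Python API; the run_md.sh script and file names are hypothetical stand-ins, not ForceBalance's actual code.

# Minimal Work Queue master sketch (illustrative; script and file names are hypothetical).
from work_queue import WorkQueue, Task, WORK_QUEUE_INPUT, WORK_QUEUE_OUTPUT

q = WorkQueue(port=9123)    # workers on any cluster connect back to this port

for i in range(50):
    t = Task("./run_md.sh config.%d.in output.%d.dat" % (i, i))
    t.specify_file("run_md.sh", "run_md.sh", WORK_QUEUE_INPUT)
    t.specify_file("config.%d.in" % i, "config.%d.in" % i, WORK_QUEUE_INPUT)
    t.specify_file("output.%d.dat" % i, "output.%d.dat" % i, WORK_QUEUE_OUTPUT)
    q.submit(t)

while not q.empty():
    t = q.wait(60)          # returns a completed task, or None on timeout
    if t:
        print("task %d finished with return status %d" % (t.id, t.return_status))

Workers can then be started on Certainty, Keeneland, Stampede, or anywhere else that can reach the master, and the queue balances the simulations across whatever resources connect.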
- Lee-Ping Wang, Stanford University Wed, 10 Dec 2014 15:15:00 +0000
You can download the software here: cctools download
Thanks goes to the contributors and testers for this release: Peter Bui, Patrick Donnelly, Nick Hazekamp, Peter Ivie, Kangkang Li, Haiyan Meng, Peter Sempolinski, Douglas Thain, Ben Tovar, Lee-Ping Wang, Matthias Wolf, Anna Woodard, and Charles Zheng
Enjoy! Thu, 04 Dec 2014 19:37:00 +0000
Lee-Ping Wang of Stanford University recently published a paper in Nature Chemistry describing his work in fundamental molecular dynamics.
The paper demonstrates the "nanoreactor" technique in which simple molecules are simulated over a long time scale to observe their reaction paths into more complex molecules. For example, the picture below shows 39 acetylene molecules merging into a variety of hydrocarbons over the course of 500ps of simulated time. This technique can be used to computationally predict reaction networks in historical or inaccessible environments, such as the early Earth or the upper atmosphere.
To compute the final reaction network for this figure, the team used the Work Queue framework to harness over 300K node-hours of CPU time on the Blue Waters supercomputer at NCSA.
Mon, 17 Nov 2014 20:18:00 +0000
One such project is a Virtual Wind Tunnel, which was created in collaboration with the Notre Dame Civil Engineering Department, as part of a larger project to explore collaboration in civil design. On the surface, this is a fairly simple idea. A user uploads a building shape for analysis to a web portal. Then, the user can run wind flow simulations upon horizontal cross sections of the building. Once complete, the results of these simulations can be viewed and downloaded.
Making all of this work, however, requires a large number of interlocking components. For now, I would just like to describe how the CCL tools play a role in this system. When simulations are to be run, one very simple way to deliver simulation tasks to available computing resources is to run a Work Queue worker on those machines. The front-end of the system runs a Work Queue master, which queues up tasks.
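Starting a worker is a single command on any machine that can reach the front-end; the host name and port below are placeholders:

work_queue_worker portal.example.edu 9123

On a Condor pool, many such workers can be submitted at once with the condor_submit_workers script.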
By using Work Queue as a means of distributing tasks, we can be more flexible about the backend upon which those tasks are run. This allows us to tailor our resource usage to our actual needs and, as needed, to adjust our resource usage when appropriate. Mon, 01 Sep 2014 18:21:00 +0000
This database design is implemented within CCTools in two parts. Part 1 (data storage) has been available for over a year and is called the catalog server. Part 2 (data analysis) has recently been implemented and is not yet in a release, but is available in the following commit:
The data model is designed to handle schema-free status reports from various services. And while the reports can be schema-free, most of the fields will normally remain the same between subsequent reports from the same instance of a service.
The first status report is saved in its entirety, and then the subsequent reports are saved as changes (or "deltas") on the original report. Snapshots of the status of all services and instances are stored on a daily basis. This allows a query for analysis based on a given time frame to jump more quickly to the start of the time frame, rather than having to start at the very beginning of the life of the catalog server.
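The delta idea itself is simple. The following Python sketch is illustrative only (it is not the catalog server's actual code); it reduces a new status report to just the fields that changed since the previous report:

# Illustrative only: reduce a new status report to a delta against the previous one.
def compute_delta(previous, current):
    delta = {}
    for key, value in current.items():
        if previous.get(key) != value:
            delta[key] = value      # field added or changed
    for key in previous:
        if key not in current:
            delta[key] = None       # field removed
    return delta

# Example: only 'load' and 'tasks_running' change between two reports.
r1 = {"name": "wq_master", "load": 0.5, "tasks_running": 10}
r2 = {"name": "wq_master", "load": 0.9, "tasks_running": 12}
print(compute_delta(r1, r2))        # {'load': 0.9, 'tasks_running': 12}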
A query is performed by applying a series of operators to the data. For a distributed system, spatial distribution means the data is partitioned so that all reports from a given instance end up on the same node. In this situation, all but the last of the operators can be performed in the map stage of the MapReduce model. This allows for better scalability because less work has to be performed by a single node in the reduce stage.
Much more detail is provided in a paper which was published at IEEE Bigdata 2014, and is available at the following URL:
For further inquiries, please email firstname.lastname@example.org.
Mon, 18 Aug 2014 18:21:00 +0000
CCTools 4.2.0 includes a new feature in Parrot that allows you to automatically observe all of the files used by a given application, and then collect them up into a self-contained package. The package can then be moved to another machine -- even a different variant of Linux -- and then run correctly with all of its dependencies present. The created package does not depend upon Parrot and can be re-run in a variety of ways.
This article explains how to generate a self-contained package and then share it so that others can verify and repeat your applications. The whole process involves three steps: running the original application, creating the self-contained package, and then running the package itself.
Step 1: Run the original application
Run your program under parrot_run and record the filename list and environment variables using the --name-list and --env-list parameters.
parrot_run --name-list namelist --env-list envlist /bin/bash
After running this command, you will be inside a shell under parrot_run; run your program there as usual. At the end of step 1, a file named namelist containing all the accessed file names and a file named envlist containing the environment variables will be generated. After everything is done, simply exit the shell.
Step 2: Generate a self-contained package
Use parrot_package_create to generate a package based on the namelist and envlist generated in step 1.
parrot_package_create --name-list namelist --env-path envlist --package-path /tmp/package
This command causes all of the files given in the name list to be copied into the package directory /tmp/package. You may customize the contents of the package by editing the namelist or the package directory by hand.
Step 3: Repeat the program using the package
The newly created package is simply a complete filesystem tree that can be moved to any convenient location. It can be re-run by any method that treats the package as a self-contained root filesystem. This can be done by using Parrot again, by setting up a chroot environment, by setting up a Linux container, or by creating a virtual machine.
To run the package using Parrot, do this:
parrot_package_run --package-path /tmp/package /bin/bash
To run the package using chroot, do this:
chroot_package_run --package-path /tmp/package /bin/bash
In both cases, you will be dropped into a shell in the preserved environment, where all the files used by the original command will be present. You will definitely be able to run the original command -- whether you can run other programs depends upon the quantity of data preserved.
For more information, see these man pages:
Fri, 01 Aug 2014 19:18:00 +0000
The software may be downloaded here: Download CCTools 4.2.0
This release is mostly a bug fix release, but introduces changes to the Work Queue protocol. Thus, workers from 4.2 do not work with masters pre 4.2.
Among the bug fixes and added capabilities are:
Thu, 31 Jul 2014 12:33:00 +0000
The DeltaDB database model for time-varying schema-free data was presented at the IEEE International Congress on Big Data in Anchorage. The DeltaDB concept is what underlies the query engine for the cctools catalog server.
Tue, 03 Jun 2014 16:50:00 +0000
A paper on converting the MAKER bioinformatics analysis from MPI to Work Queue, done in collaboration with the Notre Dame Bioinformatics Laboratory, was recently accepted for publication in the International Journal of Bioinformatics Research and Applications.
Tue, 03 Jun 2014 16:47:00 +0000
(See the slides here.)
Historically, highly concurrent programming has been closely associated with high performance computing. Two programming models have been dominant: shared memory machines in which concurrency was expressed via multiple threads, and distributed memory machines in which concurrency was expressed via explicit message passing. It is widely agreed that both of these programming models are very challenging, even for the veteran programmer. In both cases, the programmer is directly responsible for designing the program from top to bottom and handling all of the issues of granularity, consistency, and locality necessary to achieve acceptable performance, with very little help from the runtime or operating systems.
However, a new approach to concurrent programming has been emerging over the last several years, in which the user programs in a much higher level language and relies upon the system to handle many of the challenging underlying details. To achieve this, the program is successively decomposed into simpler representations, such that each layer of the system can gradually adapt it to the hardware available.
The layers can be described as follows:
A (very incomplete) selection of systems that follow this model:
Mon, 19 May 2014 19:14:00 +0000
Each layer of the system fulfills a distinct need. The declarative language (DL) at the top is compact, expressive, and easy for end users, but is intractable to analyze in the general case because it may have a high order of complexity, possibly Turing-complete. The DL can be used to generate a (large) directed acyclic graph (DAG) that represents every single task to be executed. The DAG is not a great user-interface language, but it is much more suitable for a system to perform capacity management and optimization because it is a finite structure with discrete components. A DAG executor then plays the graph, dispatching individual tasks as their dependencies are satisfied. The bag of tasks (BOT) consists of all the tasks that are ready to run, and is then scheduled onto the underlying computer, using the data dependencies made available from the higher levels.
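To make the middle layers concrete, here is a toy sketch (illustrative only, not any particular CCL implementation) of a DAG executor: it repeatedly collects the tasks whose dependencies have all been satisfied into a bag of ready tasks, dispatches that bag, and marks the tasks finished.

# Toy DAG executor: each task maps to the set of tasks it depends on.
dag = {
    "split":    set(),
    "analyze1": {"split"},
    "analyze2": {"split"},
    "merge":    {"analyze1", "analyze2"},
}

def run(task):
    # Stand-in for handing one ready task to the underlying scheduler.
    print("running", task)

finished = set()
while len(finished) < len(dag):
    # The bag of tasks: everything not yet finished whose dependencies are all satisfied.
    bag = [t for t, deps in dag.items() if t not in finished and deps <= finished]
    for t in bag:   # a real system would dispatch these concurrently
        run(t)
    finished.update(bag)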
Why bother with this sort of model? It allows us to compare the fundamental capabilities and expressiveness of different kinds of systems. For example, in the realm of compilers, everyone knows that a proper compiler consists of a scanner, a parser, an optimizer, and a code generator. Through these stages, the input program is transformed from a series of tokens to an abstract syntax tree, an intermediate representation, and eventually to assembly code. Not every compiler uses all of these stages, much less the same code, but the common vocabulary makes it easier to understand, compare, and design new systems.
We have tried a number of approaches to visualizing this complex system over the years. Our latest tool, the Condor Matrix Display, started as a summer project by Nick Jaeger, a student from the University of Wisconsin at Eau Claire. The display shows a colored bar for each slot in the pool, where the width is proportional to the number of cores.
With a quick glance, you can see how many users are busy and whether they are running "thin" (1 core) or "fat" (many core) jobs. Sorting by machine name gives you a sense of how each sub-cluster in the pool is used:
While sorting by user gives you a sense of which users are dominating the pool:
The display is also a nice way of viewing the relatively new "dynamic slots" feature in Condor. A large multi-core machine is now represented as a single slot with multiple resources. For example, this bit of the display shows a cluster of 8-core machines where some of the machines are unclaimed (green), some are running 4-core jobs (blue), and some are running 1-core jobs (green):
Fri, 14 Feb 2014 16:20:00 +0000
Mon, 18 Nov 2013 18:34:00 +0000
The Annual CCL Workshop was held on October 10-11 at the University of Notre Dame. The CCL team ran beginning and advanced tutorials and gave highlights of the many new features and capabilities in the software. Community members spoke about the many ways in which Makeflow, Work Queue, Parrot, and Chirp are used to accelerate discoveries in fields such as bioinformatics, high energy physics, weather prediction, image analysis, and molecular dynamics. Everyone got together to share a meal, solve problems, and generate new ideas. Thanks to everyone who participated, and see you next year!
Mon, 14 Oct 2013 16:00:00 +0000
The workshop is an opportunity for beginners and experts alike to learn about the latest software release, get to know the CCL team, meet other people engaged in large scale scientific computing, and influence the direction of our research and development.
Thursday, October 10th
Afternoon: Introduction and Software Tutorials
Friday, October 11th
All Day: New Technology, Scientific Talks, and Discussion
There is no registration fee; however, space is limited, so please register in advance to reserve your place. We hope to see you there!
Fri, 11 Oct 2013 12:00:00 +0000
Making Work Queue Cluster Friendly for Data Intensive Scientific Applications.
In the original design of Work Queue, each worker was a sequential process that executed one task at a time. This paper describes the extension of Work Queue in two respects:
The effect of these two changes is to dramatically reduce the network footprint at the master process, and at each execution site. The resulting system is more 'friendly' to local clusters, and is capable of scaling to even greater sizes.
Wed, 21 Aug 2013 12:55:00 +0000
This is the first release of the 4.0 series, with some major changes:
Please refer to the doc/ directory in the distribution for the usage of these new features. You can download the software here:
Thanks goes to the contributors for this release: Michael Albrecht, DeVonte Applewhite, Peter Bui, Patrick Donnelly, Brenden Kokoszka, Kyle Mulholland, Francesco Prelz, Dinesh Rajan, Casey Robinson, Peter Sempolinski, Douglas Thain, Ben Tovar, and Li Yu.
Mon, 29 Jul 2013 12:11:00 +0000
Right-sizing Resource Allocations for Scientific Applications in Clusters, Grids, and Clouds!
Fri, 26 Jul 2013 12:05:00 +0000
A tutorial, Building Scalable Scientific Applications using Makeflow and Work Queue, was presented as part of XSEDE 2013 in San Diego on July 22. Mon, 22 Jul 2013 12:00:00 +0000
Computational protein folding has historically relied on long-running simulations of single molecules. Although many such simulations can be run at once, they are statistically likely to sample the same common configurations of the molecule, rather than exploring the many possible states it may have. To address this, a team of researchers from the University of Notre Dame and Stanford University designed a system based on the Adaptive Weighted Ensemble technique that runs thousands of short Gromacs and Protomol simulations in parallel with periodic resampling to explore the rich state space of a molecule. Using the Work Queue framework, these simulations were distributed across thousands of CPUs and GPUs drawn from Notre Dame, Stanford, and commercial cloud providers. The resulting system effectively simulates the behavior of a protein at 500 ns/hour, covering a wide range of behavior in days rather than years.
- Jesus Izaguirre, University of Notre Dame and Eric Darve, Stanford University Sat, 01 Jun 2013 18:07:00 +0000