Recent News in the CCL


2016 DISC Summer Session Wraps Up

Congratulations to our first class of summer students participating in the Data Intensive Scientific Computing research experience!  Twelve students from around the country came to Notre Dame to learn how computing drives research in high energy physics, climatology, bioinformatics, astrophysics, and molecular dynamics.

At our closing poster session in Jordan Hall (held jointly with several other REU programs), students presented their work and results to faculty and guests from across campus.

If you are excited to work at the intersection of scientific research and advanced computing, we invite you to apply to the 2017 DISC summer program at Notre Dame!

Fri, 29 Jul 2016 18:00:00 +0000

New Work Queue Visualization

Nate Kremer-Herman has created a new, convenient way to look up information about Work Queue masters. This new visualization tool provides real-time updates on the status of each Work Queue master that contacts our catalog server. We hope this tool will both help users understand what their Work Queue masters are doing and help them determine when it may be time to take corrective action.

A Comparative View


A Specific Master


Among other measurements, this tool reports the number of tasks currently running, the number of tasks waiting to run, and the total capacity of tasks that could be running. For example, a user might see a large number of waiting tasks, a small number of running tasks, and a task capacity somewhere in between; in that case, we would recommend requesting more workers. Our hope is that users will take advantage of this new way to view and manage their work.
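
The same numbers are available programmatically. Here is a minimal sketch using the Work Queue Python bindings, assuming the stats fields tasks_waiting, tasks_running, and capacity exposed by the bindings of this era (check the Work Queue manual for the exact names in your release):

    import work_queue as wq

    # Create a master that advertises itself to the catalog server.
    q = wq.WorkQueue(port=9123)
    q.specify_name("my-project")

    # ... submit tasks here ...

    s = q.stats
    print("waiting: %d  running: %d  capacity: %d" %
          (s.tasks_waiting, s.tasks_running, s.capacity))

    # Many waiting tasks but few running ones suggests starting more
    # workers, e.g.: work_queue_factory -T condor -N my-project
    if s.tasks_waiting > s.tasks_running:
        print("consider requesting more workers")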

Wed, 22 Jun 2016 18:57:00 +0000

Work Queue from Raspberry Pi to Azure at SPU

"At Seattle Pacific University we have used Work Queue in the CSC/CPE 4760 Advanced Computer Architecture course in Spring 2014 and Spring 2016.  Work Queue serves as our primary example of a distributed system in our “Distributed and Cloud Computing” unit for the course.  Work Queue was chosen because it is easy to deploy, and undergraduate students can quickly get started working on projects that harness the power of distributed resources."

The main project in this unit had the students obtain benchmark results for three systems: a high-performance workstation, a cluster of 12 Raspberry Pi 2 boards, and a cluster of A1 instances in Microsoft Azure.  The task for each benchmark used Dr. Peter Bui's Work Queue MapReduce framework; the students tested both a Word Count and an Inverted Index on the Linux kernel source. In testing the three systems the students were exposed to the principles of distributed computing and the MapReduce model as they investigated tradeoffs in price, performance, and overhead.
 - Prof. Aaron Dingler, Seattle Pacific University. 
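
A hypothetical sketch of such a word-count benchmark, written against the plain Work Queue Python API rather than Dr. Bui's MapReduce framework itself (the chunk file names are illustrative):

    import work_queue as wq

    q = wq.WorkQueue(port=9123)

    # Map phase: count words in each chunk of the kernel source on a worker.
    chunks = ["kernel-src.%02d.txt" % i for i in range(16)]
    for i, chunk in enumerate(chunks):
        t = wq.Task("tr -s ' ' '\\n' < %s | sort | uniq -c > count.%02d" % (chunk, i))
        t.specify_input_file(chunk)
        t.specify_output_file("count.%02d" % i)
        q.submit(t)

    # Reduce phase: wait for all map tasks, then merge the counts locally.
    while not q.empty():
        q.wait(5)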

Thu, 26 May 2016 12:52:00 +0000

Lifemapper analyzes biodiversity using Makeflow and Work Queue

Lifemapper is a high-throughput, web service-based, single- and multi-species modeling and analysis system designed at the Biodiversity Institute and Natural History Museum, University of Kansas. Lifemapper was created to compute species distribution models using available online species occurrence data, and to publish them to the web.  Using the Lifemapper platform, known species localities georeferenced from museum specimens are combined with climate models to predict a species’ “niche”, or potential habitat availability, under present-day and future climate change scenarios. By assembling large numbers of known or predicted species distributions, along with phylogenetic and biogeographic data, Lifemapper can analyze biodiversity, species communities, and evolutionary influences at the landscape level.

Lifemapper has had difficulty scaling recently, as our projects and analyses are growing exponentially.  For a large proof-of-concept project deployed on Stampede, an XSEDE resource at TACC, we integrated Makeflow and Work Queue into the job workflow.  Makeflow simplified job dependency management and reduced job-scheduling overhead, while Work Queue scaled our computation capacity from hundreds of simultaneous CPU cores to thousands.  This allowed us to perform a sweep of computations with various parameters and high-resolution inputs, producing a plethora of outputs to be analyzed and compared.  The experiment worked so well that we are now integrating Makeflow and Work Queue into our core infrastructure.  Lifemapper benefits not only from the increased speed and efficiency of computations, but also from the reduced complexity of the data management code, which allows developers to focus on new analyses and leave the logistics of job dependencies and resource allocation to these tools.
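
To give a flavor of the approach, here is a minimal sketch of how such a parameter sweep can be expressed for Makeflow: a small Python script emits one rule per parameter value (model.py, occurrences.csv, and the --rate parameter are illustrative stand-ins, not Lifemapper's actual code):

    # Emit a Makeflow rule for each parameter value in the sweep.
    params = [0.1, 0.5, 1.0, 2.0]

    with open("sweep.makeflow", "w") as mf:
        for p in params:
            out = "model.%s.out" % p
            mf.write("%s: model.py occurrences.csv\n" % out)
            mf.write("\tpython model.py --rate %s occurrences.csv > %s\n\n" % (p, out))

The resulting file can then be run with "makeflow -T wq sweep.makeflow", which dispatches each rule as a task to Work Queue workers.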


Information from C.J. Grady, Biodiversity Institute and Natural History Museum, University of Kansas.

Tue, 24 May 2016 15:30:00 +0000

Condor Week 2016 presentation

We presented at Condor Week 2016 our approach to creating a comprehensive resource feedback loop for executing tasks of unknown size. In this feedback loop, tasks are monitored and measured in user space as they run; their resource usage is collected into an online archive; and further instances are provisioned according to the application's historical data, to avoid resource starvation and minimize resource waste. We presented physics and bioinformatics case studies consisting of more than 600,000 tasks running on 26,000 cores (96% of them from opportunistic resources), where the proposed solution led to an overall increase in throughput (from 10% to 400% across different workflows) and a decrease in resource waste compared to workflow executions without the feedback loop.
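
The provisioning side of the loop is simple to state. Here is a schematic sketch in Python (the archive format, resource names, and safety margin are illustrative, not the actual implementation):

    import json

    def allocation_for(task_kind, archive="usage.json", margin=1.2):
        """Provision new instances of a task kind from the historical
        peak usage of completed instances of the same kind."""
        with open(archive) as f:
            history = json.load(f).get(task_kind, [])
        if not history:
            # No data yet: run one instance on a whole node to measure it.
            return None
        peak = {r: max(h[r] for h in history) for r in ("cores", "memory", "disk")}
        return {r: int(v * margin) for r, v in peak.items()}

Retrying a task whose measured usage exceeds its allocation, with a larger allocation, closes the loop.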

Condor Week 2016 slides

Tue, 24 May 2016 12:32:00 +0000

Containers, Workflows, and Reproducibility

The DASPOS project hosted a workshop on Container Strategies for Data and Software Preservation that Promote Open Science at Notre Dame on May 19-20, 2016.  We had a very interesting collection of researchers and practitioners, all working on problems related to reproducibility, but presenting different approaches and technologies.

Prof. Thain presented recent work by CCL grad students Haiyan Meng and Peter Ivie on Combining Containers and Workflow Systems for Reproducible Execution.


The Umbrella tool created by Haiyan Meng allows for a simple, compact, archival representation of a computation, taking into account hardware, operating system, software, and data dependencies.  This allows one to accurately perform computational experiments and give each one a DOI that can be shared, downloaded, and executed.
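
As a hedged illustration of what such a specification looks like, here is a small Python script that emits an Umbrella-style JSON document; the field names follow the Umbrella documentation, but all values here are invented placeholders:

    import json

    spec = {
        "hardware": {"arch": "x86_64", "cores": "1", "memory": "2GB", "disk": "10GB"},
        "kernel":   {"name": "linux", "version": ">=2.6.32"},
        "os":       {"name": "redhat", "version": "6.5"},
        "software": {
            "myapp": {   # hypothetical application, for illustration only
                "urls": ["http://example.org/myapp.tar.gz"],
                "format": "tgz",
                "checksum": "0123456789abcdef0123456789abcdef",
                "mountpoint": "/opt/myapp",
            }
        },
        "environ": {"PATH": "/opt/myapp/bin:/usr/bin:/bin"},
        "cmd": "myapp input.dat",
    }

    with open("experiment.umbrella", "w") as f:
        json.dump(spec, f, indent=4)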


The PRUNE tool created by Peter Ivie allows one to construct dynamic workflows of connected tasks, each one precisely specified by execution environment.  Provenance and data are tracked precisely, so that the specification of a workflow (and its results) can be exported, imported, and shared with other people.  Think of it like git for workflows.



Mon, 23 May 2016 18:44:00 +0000

Balancing Push and Pull in Confuga, an Active Storage Cluster File System for Scientific Workflows

Patrick Donnelly has published a journal article in Concurrency and Computation: Practice and Experience on the Confuga active cluster file system. The journal article presents the use of controlled transfers to distribute data within the cluster without destabilizing the resources of storage nodes:


Patrick Donnelly and Douglas Thain, Balancing Push and Pull in Confuga, an Active Storage Cluster File System for Scientific Workflows, Concurrency and Computation: Practice and Experience, May 2016.

Confuga is a new active storage cluster file system designed for executing regular POSIX workflows. Users may store extremely large datasets on Confuga in a regular file system layout, with whole files replicated across the cluster. They may then operate on their datasets using regular POSIX applications with defined inputs and outputs.


Jobs execute with full data locality, with all whole-file dependencies available in their own private sandboxes. To achieve this, Confuga first copies a job's missing data to the target storage node prior to dispatching the job. This journal article examines the two transfer mechanisms Confuga uses to manage this data movement: push and pull.

A push transfer is used to direct a storage node to copy a file to another storage node. Pushes are centrally managed by the head node which allows it to schedule transfers in a way that avoids destabilizing the cluster or individual storage nodes. To avoid some potential inefficiencies with centrally managed transfers, Confuga also uses pull transfers which resemble file access in a typical distributed file system. A job will pull its missing data dependencies from other storage nodes prior to execution.

This journal article examines the trade-offs of the two approaches and settles on a balanced approach, where pulls are used for transfers of smaller files and pushes for larger files. This results in significant performance improvements for scientific workflows with large data dependencies. For example, two bioinformatics workflows we studied, a Burrows-Wheeler Aligner (BWA) workflow and an Iterative Alignments of Long Reads (IALR) workflow, achieved 48% and 77% reductions in execution time compared to using either a push-only or a pull-only strategy.
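
In pseudocode terms, the balanced policy amounts to a size test on each missing dependency. A sketch (the threshold value is illustrative; the paper derives the actual balance point experimentally):

    PUSH_THRESHOLD = 64 * 1024 * 1024   # bytes; illustrative only

    def plan_transfers(dependencies):
        """dependencies: list of (filename, size_in_bytes) still missing
        from the target storage node."""
        pulls  = [f for f, size in dependencies if size <  PUSH_THRESHOLD]
        pushes = [f for f, size in dependencies if size >= PUSH_THRESHOLD]
        return pulls, pushes   # pulls happen at job startup; pushes are
                               # scheduled centrally by the head node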

For further details, please check out our journal article here. Confuga is available as part of the Cooperative Computing Tools, distributed under the GNU General Public License. For usage instructions, see the Confuga manual and man page.

See also our earlier blog post introducing Confuga.

Mon, 09 May 2016 14:20:00 +0000

Internships at Red Hat and CERN

Two CCL graduate students will be off on internships in summer 2016:

Haiyan Meng will be interning at Red Hat, working on container technologies.


Tim Shaffer will be a summer visitor at CERN, working with the CVMFS group on distributed filesystems for HPC and high energy physics.
Tue, 03 May 2016 14:36:00 +0000

Ph.D. Defense: Patrick Donnelly

Patrick Donnelly successfully defended his Ph.D. dissertation, titled "Data Locality Techniques in an Active Cluster File System Designed for Scientific Workflows".

Congratulations to Dr. Donnelly!


The continued exponential growth of storage capacity has catalyzed the broad acquisition of scientific data which must be processed. While today's large data analysis systems are highly effective at establishing data locality and eliminating inter-dependencies, they are not so easily incorporated into scientific workflows that are often complex and irregular graphs of sequential programs with multiple dependencies. To address the needs of scientific computing, I propose the design of an active storage cluster file system which allows for execution of regular unmodified applications with full data locality.

This dissertation analyzes the potential benefits of exploiting the structural information already available in scientific workflows -- the explicit dependencies -- to achieve a scalable and stable system. I begin with an outline of the design of the Confuga active storage cluster file system and its applicability to scientific computing. The remainder of the dissertation examines the techniques used to achieve a scalable and stable system. First, file system access by jobs is scoped to explicitly defined dependencies, resolved at job dispatch. Second, the workflow's structural information is harnessed to direct and control the necessary file transfers, enforcing cluster stability and maintaining performance. Third, control of transfers is selectively relaxed to improve performance by limiting any negative effects of centralized transfer management.

This work benefits users by providing a complete batch execution platform joined with a cluster file system. Users do not need to redesign their workflows or give additional consideration to the management of data dependencies. System stability and performance are managed by the cluster file system while providing jobs with complete data locality.
Wed, 06 Apr 2016 18:38:00 +0000

Searching for Exo-Planets with Makeflow and Work Queue

Students at the University of Arizona made use of Makeflow and Work Queue to build an image processing pipeline on the Chameleon cloud testbed at TACC.

The course project was to build an image processing pipeline to accelerate the research of astronomer Jared Males, who designs instruments to search for exo-planets by observing the changes in appearance of a star.  This results in hundreds of thousands of images of a single star, which must then be processed in batch to eliminate noise and align the images.

The students built a solution (Find-R) that consumed over 100K CPU-hours on Chameleon, distributed using Makeflow and Work Queue.
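
A pipeline of this shape maps naturally onto Work Queue. A minimal sketch using the Python API (the align program and file names are illustrative, not Find-R's actual code):

    import work_queue as wq

    q = wq.WorkQueue(port=9123)

    # One task per raw image: de-noise and align it.
    for i in range(100000):
        image = "star.%06d.fits" % i
        t = wq.Task("./align %s > aligned.%06d" % (image, i))
        t.specify_input_file("align")
        t.specify_input_file(image)
        t.specify_output_file("aligned.%06d" % i)
        q.submit(t)

    while not q.empty():
        q.wait(60)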

Read more here:
https://www.tacc.utexas.edu/-/in-search-of-a-planet
Tue, 22 Mar 2016 19:14:00 +0000

Parrot talk at OSG All-hands meeting 2016

Ben Tovar gave a talk on using Parrot to access CVMFS as part of the Open Science Grid (OSG) all-hands meeting in Clemson, SC.

Software access with Parrot and CVMFS

Fri, 18 Mar 2016 22:03:00 +0000

CCTools 5.4.0 released

The Cooperative Computing Lab is pleased to announce the release of version 5.4.0 of the Cooperative Computing Tools including Parrot, Chirp, Makeflow, WorkQueue, Umbrella, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:

http://www.cse.nd.edu/~ccl/software/download

This minor release adds several features and bug fixes. Among them:

  • [Catalog]  Catalog server communication is now done using JSON encoded queries and replies. (Douglas Thain)
  • [Makeflow] --skip-file-check added to mitigate overhead on network file systems. (Nick Hazekamp)
  • [Makeflow] Added amazon batch job interface. (Charles Shinaver)
  • [Resource Monitor] Network bandwidth, bytes received, and sent are now recorded. (Ben Tovar)
  • [Work Queue] Tasks may be grouped into categories, for resource control and fast abort. (Ben Tovar)
  • [Work Queue] work_queue_pool was renamed to work_queue_factory. (Douglas Thain)
  • [Work Queue] --condor-requirements to specify arbitrary HTCondor requirements in work_queue_factory. (Chao Zheng)
  • [Work Queue] --factory-timeout to terminate work_queue_factory when no master is active. (Neil Butcher)
  • [Work Queue] Compile-time option to specify default local settings in sge_submit_workers. (Ben Tovar)
  • [Umbrella]   Several bugfixes. (Haiyan Meng, Alexander Vyushkov)
  • [Umbrella]   Added OSF, and S3 communication. (Haiyan Meng)
  • [Umbrella]   Added EC2 execution engine. (Haiyan Meng)
  • [Parrot] Several bug-fixes for memory mappings. (Patrick Donnelly)
  • [Parrot] All compiled services are shown under /. (Tim Shaffer)
  • [Parrot] POSIX directory semantics. (Tim Shaffer)
  • [Parrot] Added new syscalls from Linux kernel 4.3. (Patrick Donnelly)

Thanks goes to the contributors for many features, bug fixes, and tests:

  • Jakob Blomer
  • Neil Butcher
  • Patrick Donnelly
  • Nathaniel Kremer-Herman
  • Nicholas Hazekamp
  • Peter Ivie
  • Kevin Lannon
  • Haiyan Meng
  • Tim Shaffer
  • Douglas Thain
  • Ben Tovar
  • Alexander Vyushkov
  • Rodney Walker
  • Mathias Wolf
  • Anna Woodard
  • Chao Zheng

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/software/help/

Enjoy!

Tue, 16 Feb 2016 20:24:00 +0000

Preservation Talk at Grid-5000

Prof. Thain gave a talk titled Preservation and Portability in Distributed Scientific Applications at the Grid-5000 Winter School on distributed computing in Grenoble, France.  The talk gave a broad overview of our recent efforts, including distributing software with Parrot and CVMFS, preserving high energy physics applications with Parrot, specifying environments with Umbrella, and preserving workflows with PRUNE.


Wed, 03 Feb 2016 11:31:00 +0000

Summer REU in DISC at Notre Dame

REU in Data Intensive Scientific Computing (DISC) at the University of Notre Dame
DISC combines big data, big science, and big computers at the University of Notre Dame. The only thing missing is you!


We invite outstanding undergraduates to apply for a summer research experience in DISC at the University of Notre Dame.  Students will spend ten weeks learning how to use high performance computing and big data technologies to attack scientific problems.  Majors in computer science, physics, biology, and related topics are encouraged to apply.

For more information:
http://disc.crc.nd.edu

Tue, 12 Jan 2016 19:21:00 +0000

CCTools 5.3.0 released

The Cooperative Computing Lab is pleased to announce the release of version 5.3.0 of the Cooperative Computing Tools including Parrot, Chirp, Makeflow, WorkQueue, Umbrella, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:
http://www.cse.nd.edu/~ccl/software/download

This minor release adds several features and bug fixes. Among them:

  • [Makeflow]  Several enhancements in garbage collection. (Nick Hazekamp)
  • [Makeflow]  Better task state handling when recovering execution log. (Nick Hazekamp)
  • [Parrot]    Correct handling of multi-threaded programs. (Patrick Donnelly)
  • [Parrot]    Adds parrot_mount, to set arbitrary mount points while parrot is executing. (Douglas Thain)
  • [Parrot]    Add --fake-setuid option, for executables that request setuid. (Tim Shaffer)
  • [Parrot]    Update cvmfs uri to new convention. (Jakob Blomer)
  • [Parrot]    Add --whitelist to restrict filesystem access. (Tim Shaffer)
  • [Parrot]    Make special file descriptors invisible to tracee. (Patrick Donnelly)
  • [Resource Monitor] Compute approximations of shared resident memory. (Ben Tovar)
  • [Resource Monitor] Remove resource_monitorv for static binaries. resource_monitor now handles all cases.  (Ben Tovar)
  • [Resource Monitor] Working directories are not tracked by default anymore. Use --follow-chdir instead.  (Ben Tovar)
  • [Umbrella]  Support for curateND. (Haiyan Meng)
  • [Umbrella]  Support for installing software from package managers. (Haiyan Meng)
  • [Work Queue] Adds option -C to read a JSON configuration file. (Ben Tovar)
  • [Work Queue] Several bug fixes regarding task/workflow statistics. (Ben Tovar)
  • [Work Queue] Master's shutdown option now correctly terminates workers. (Ben Tovar)
  • [Work Queue] Adds --sge-parameter to ./configure script to personalize the sge_submit_workers script. (Ben Tovar)
  • [Work Queue] Adds the executable disk_allocator, to restrict disk usage at the workers. (Nate Kremer-Herman)


Incompatibility warnings:

  • Workers from this release do not work correctly with masters from previous releases.


Thanks goes to the contributors for many features, bug fixes, and tests:

  • Jakob Blomer
  • Neil Butcher
  • Patrick Donnelly
  • Nathaniel Kremer-Herman
  • Nicholas Hazekamp
  • Peter Ivie
  • Kevin Lannon
  • Haiyan Meng
  • Tim Shaffer
  • Douglas Thain
  • Ben Tovar
  • Mathias Wolf
  • Anna Woodard

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/software/help/

Enjoy!
Mon, 23 Nov 2015 17:24:00 +0000

Analyzing LHC Data on 10K Cores with Lobster

Prof. Thain gave a talk titled Analyzing LHC Data on 10K Cores with Lobster at the Workshop on Data Intensive Computing in the Clouds at Supercomputing 2015.  The talk gave an overview of our collaboration with members of the CMS experiment at Notre Dame.  Together, we have built a data analysis system which can deploy the complex CMS computing environment on large clusters of non-dedicated machines.

Mon, 16 Nov 2015 14:47:00 +0000

Global Filesystems Paper in IEEE CiSE

Our latest paper, in collaboration with Jakob Blomer and the CVMFS team at CERN, describes the evolution of global-scale filesystems to serve the needs of the world-wide LHC experiment collaborations:

Delivering complex software across a worldwide distributed system is a major challenge in high-throughput scientific computing. The problem arises at different scales for many scientific communities that use grids, clouds, and distributed clusters to satisfy their computing needs. For high-energy physics (HEP) collaborations dealing with large amounts of data that rely on hundreds of thousands of cores spread around the world for data processing, the challenge is particularly acute. To serve the needs of the HEP community, several iterations were made to create a scalable, user-level filesystem that delivers software worldwide on a daily basis. The implementation was designed in 2006 to serve the needs of one experiment running on thousands of machines. Since that time, this idea evolved into a new production global-scale filesystem serving the needs of multiple science communities on hundreds of thousands of machines around the world. 

Mon, 09 Nov 2015 19:27:00 +0000

Preservation Talk at iPres 2015

Prof. Thain gave a talk titled "Preserving Scientific Software Executions: Preserve the Mess or Encourage Cleanliness" at iPres 2015, the Conference on Digital Preservation.  The talk gave a high-level overview of our work on preservation, encompassing packaging with Parrot, environment specification with Umbrella, and workflow preservation with PRUNE.

Tue, 03 Nov 2015 19:16:00 +0000

CMS Case Study Paper at CHEP

Our case study on how to preserve and reproduce a high energy physics (HEP) application with Parrot has been accepted by the Journal of Physics: Conference Series (JPCS, 2015).

The HEP application under investigation is called TauRoast, authored by our physics collaborator Matthias Wolf. TauRoast is a complex single-machine application with many implicit and explicit dependencies: CVS, GitHub, PyYAML websites, personal websites, CVMFS, AFS, HDFS, NFS, and PanFS. The total size of these dependencies is about 166.8TB.

To make TauRoast reproducible, we propose a fine-grained dependency management toolkit based on Parrot that tracks the data actually used and creates a reduced package omitting everything unused. By doing so, the original execution environment, 166.8TB in size, is reduced to a package of only 21GB. The correctness of the preserved package is demonstrated in three different environments: the original machine, a virtual machine on the Notre Dame cloud platform, and a virtual machine on Amazon EC2.
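
The packaging idea itself is easy to sketch: given the list of paths that a Parrot-traced run actually touched (a namelist), copy just those files into a self-contained package directory. CCTools ships tooling for this; the helper below is only an illustration of the concept:

    import os, shutil

    def build_package(namelist, pkgdir):
        """Copy every regular file named in the namelist into pkgdir,
        preserving its path relative to the filesystem root."""
        with open(namelist) as f:
            for path in (line.strip() for line in f):
                if not os.path.isfile(path):
                    continue                    # skip /proc, sockets, etc.
                dest = os.path.join(pkgdir, path.lstrip("/"))
                if not os.path.isdir(os.path.dirname(dest)):
                    os.makedirs(os.path.dirname(dest))
                shutil.copy2(path, dest)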

 

Tue, 20 Oct 2015 16:15:00 +0000

OpenMalaria Preservation with Umbrella

Haiyan Meng, working together with Alexander Vyushkov from the CRC, successfully preserved and reproduced a C++ application, OpenMalaria, using Umbrella. The dependencies of OpenMalaria include packages from yum repositories, OS images from the CCL website, and software and data dependencies from curateND. Through a JSON-format specification, Umbrella allows the user to specify the complete execution environment for an application: hardware, kernel, OS, software, data, packages supported by package managers, environment variables, and command. Each dependency in an Umbrella specification also comes with its metadata: size, format, checksum, download URLs, and mountpoints. At runtime, Umbrella constructs the execution environment given in the specification with the help of sandboxing techniques such as Parrot and Docker, and runs the user's task.

Mon, 19 Oct 2015 20:54:00 +0000

DAGViz Paper at Visual Performance Analysis Workshop

An Huynh will be presenting a paper on the visualization of task-parallel programs at the Visual Performance Analysis workshop at Supercomputing 2015.  (He is a student of Kenjiro Taura at the University of Tokyo who spent a semester working with us in the Cooperative Computing Lab.)

An developed DAGViz, a tool for exploring the rich structure of task-parallel programs, which requires associating DAG structure with performance measures at multiple levels of detail.  This allows the analyst to zoom in to trouble spots and view exactly where and how each basic block of a program ran on a multi-core machine.

Tue, 13 Oct 2015 15:01:00 +0000

Virtual Wind Tunnel in IEEE CiSE

Some of our recent work on a system for collaborative engineering design was featured in the September issue of IEEE Computing in Science and Engineering, focused on "Open Simulation Laboratories".  This project was part of a collaboration between faculty in the computer science and civil engineering departments: Open Sourcing the Design of Civil Infrastructure.

CCL grad student Peter Sempolinski led the design and implementation of an online service enabling collaborative design and evaluation of structures, known as the "Virtual Wind Tunnel".  This service enables structural designs to be uploaded and shared, then evaluated for performance via the OpenFOAM CFD package.  The entire process is similar to that of collaborative code development, where the source (i.e. a building design) is kept in a versioned repository, automated builds (i.e. building simulation) are performed in a consistent and reproducible way, and test results (i.e. simulation metrics) are used to evaluate the initial design.  Designs and results can be shared, annotated, and re-used, making it easy for one engineer to build upon the work of another.

The prototype system has been used in a variety of contexts, most notably to demonstrate the feasibility of crowdsourcing design and evaluation work via Amazon Mechanical Turk.

Wed, 09 Sep 2015 13:27:00 +0000

Three Papers at IEEE Cluster in Chicago

This week, at the IEEE Cluster Computing conference in Chicago, Ben Tovar will present some of our work on automated application monitoring:

(PDF) Gideon Juve, Benjamin Tovar, Rafael Ferreira da Silva, Dariusz Krol, Douglas Thain, Ewa Deelman, William Allcock, and Miron Livny, Practical Resource Monitoring for Robust High Throughput Computing, Workshop on Monitoring and Analysis for High Performance Computing Systems Plus Applications at IEEE Cluster Computing, September, 2015.

Matthias Wolf will present our work on the Lobster large scale data management system:

(PDF) Anna Woodard, Matthias Wolf, Charles Mueller, Nil Valls, Ben Tovar, Patrick Donnelly, Peter Ivie, Kenyi Hurtado Anampa, Paul Brenner, Douglas Thain, Kevin Lannon and Michael Hildreth, Scaling Data Intensive Physics Applications to 10k Cores on Non-Dedicated Clusters with Lobster, IEEE Conference on Cluster Computing, September, 2015.

Olivia Choudhury will present some work on modelling concurrent applications, trading off thread-level parallelism against task-level parallelism at scale:

(PDF) Olivia Choudhury, Dinesh Rajan, Nicholas Hazekamp, Sandra Gesing, Douglas Thain, and Scott Emrich, Balancing Thread-level and Task-level Parallelism for Data-Intensive Workloads on Clusters and Clouds, IEEE Conference on Cluster Computing, September, 2015.


Mon, 07 Sep 2015 19:07:00 +0000

CCTools 5.2.0 released

The Cooperative Computing Lab is pleased to announce the release of version 5.2.0 of the Cooperative Computing Tools including Parrot, Chirp, Makeflow, WorkQueue, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:

http://www.cse.nd.edu/~ccl/software/download

This minor release addresses the following issues from version 5.1.0:

  • [Chirp]     Fix mkdir python binding. (Ben Tovar)
  • [Chirp]     Adds 'ln' for file links. (Nate Kremer-Herman)
  • [Chirp/Confuga] Kill a job even on failure. (Patrick Donnelly)
  • [Debug]     Fix log rotation with multiple processes. (Patrick Donnelly)
  • [Makeflow]  Better support for Torque and SLURM for XSEDE. (Nick Hazekamp)
  • [Parrot]    Fix bug where cvmfs alien cache access was sequential. (Ben Tovar)
  • [Parrot]    Allow compilation with iRODS 4.1. (Ben Tovar)
  • [WorkQueue] Improvements to statistics when using foremen. (Ben Tovar)
  • [WorkQueue] Fix bug related to exporting environment variables. (Ben Tovar)
  • [WorkQueue] Fixed bug where task sandboxes were not being deleted at workers. (Ben Tovar)

Thanks goes to our contributors:

Patrick Donnelly
Nathaniel Kremer-Herman
Nicholas Hazekamp
Ben Tovar

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/software/help/

Enjoy!

Wed, 19 Aug 2015 12:05:00 +0000

Recent CCL Grads Take Faculty Positions

Peter Bui is returning to Notre Dame this fall, where he will be a member of the teaching faculty and will be teaching undergraduate core classes like data structures, discrete math, and more.  Welcome back, Prof. Bui!

Hoang Bui completed a postdoc position at Rutgers University with Prof. Manish Parashar, and is starting as an assistant professor at Western Illinois University.  Congratulations, Prof. Bui!

Tue, 18 Aug 2015 15:16:00 +0000