cctools
work_queue.WorkQueue Class Reference

Python Work Queue object. More...

Inheritance diagram for work_queue.WorkQueue:

Public Member Functions

def __init__
 Create a new work queue. More...
 
def name
 Get the project name of the queue. More...
 
def port
 Get the listening port of the queue. More...
 
def stats
 Get queue statistics. More...
 
def stats_hierarchy
 Get worker hierarchy statistics. More...
 
def stats_category
 Get the task statistics for the given category. More...
 
def status
 Get queue information as list of dictionaries. More...
 
def workers_summary
 Get resource statistics of workers connected. More...
 
def specify_category_mode
 Turn on or off first-allocation labeling for a given category. More...
 
def specify_category_autolabel_resource
 Turn on or off first-allocation labeling for a given category and resource. More...
 
def task_state
 Get current task state. More...
 
def enable_monitoring
 Enables resource monitoring of tasks in the queue, and writes a summary per task to the directory given. More...
 
def enable_monitoring_full
 As enable_monitoring, but it also generates a time series and a debug file. More...
 
def activate_fast_abort
 Turn on or off fast abort functionality for a given queue for tasks in the "default" category, and for tasks whose category does not set an explicit multiplier. More...
 
def activate_fast_abort_category
 Turn on or off fast abort functionality for a given queue. More...
 
def specify_draining_by_hostname
 Turn on or off draining mode for workers at hostname. More...
 
def empty
 Determine whether there are any known tasks queued, running, or waiting to be collected. More...
 
def hungry
 Determine whether the queue can support more tasks. More...
 
def specify_algorithm
 Set the worker selection algorithm for queue. More...
 
def specify_task_order
 Set the order for dispatching submitted tasks in the queue. More...
 
def specify_name
 Change the project name for the given queue. More...
 
def specify_manager_preferred_connection
 Set the preference for using hostname over IP address to connect. More...
 
def specify_master_preferred_connection
 See specify_manager_preferred_connection. More...
 
def specify_min_taskid
 Set the minimum taskid of future submitted tasks. More...
 
def specify_priority
 Change the project priority for the given queue. More...
 
def specify_num_tasks_left
 Specify the number of tasks not yet submitted to the queue. More...
 
def specify_manager_mode
 Specify the manager mode for the given queue. More...
 
def specify_master_mode
 See specify_manager_mode. More...
 
def specify_catalog_server
 Specify the catalog server the manager should report to. More...
 
def specify_log
 Specify a log file that records the cumulative stats of connected workers and submitted tasks. More...
 
def specify_transactions_log
 Specify a log file that records the states of tasks. More...
 
def specify_password
 Add a mandatory password that each worker must present. More...
 
def specify_password_file
 Add a mandatory password file that each worker must present. More...
 
def specify_max_resources
 Specifies the maximum resources allowed for the default category. More...
 
def specify_min_resources
 Specifies the minimum resources allowed for the default category. More...
 
def specify_category_max_resources
 Specifies the maximum resources allowed for the given category. More...
 
def specify_category_min_resources
 Specifies the minimum resources allowed for the given category. More...
 
def specify_category_first_allocation_guess
 Specifies the first-allocation guess for the given category. More...
 
def initialize_categories
 Initialize first value of categories. More...
 
def cancel_by_taskid
 Cancel task identified by its taskid and remove from the given queue. More...
 
def cancel_by_tasktag
 Cancel task identified by its tag and remove from the given queue. More...
 
def cancel_by_category
 Cancel all tasks of the given category and remove them from the queue. More...
 
def shutdown_workers
 Shutdown workers connected to queue. More...
 
def block_host
 Block workers running on host from working for the manager. More...
 
def blacklist
 Replaced by block_host. More...
 
def block_host_with_timeout
 Block workers running on host for the duration of the given timeout. More...
 
def blacklist_with_timeout
 See block_host_with_timeout. More...
 
def unblock_host
 Unblock the given host, or all hosts if host is not given. More...
 
def blacklist_clear
 See unblock_host. More...
 
def invalidate_cache_file
 Delete file from the workers' caches. More...
 
def specify_keepalive_interval
 Change keepalive interval for a given queue. More...
 
def specify_keepalive_timeout
 Change keepalive timeout for a given queue. More...
 
def estimate_capacity
 Turn on manager capacity measurements. More...
 
def tune
 Tune advanced parameters for work queue. More...
 
def submit
 Submit a task to the queue. More...
 
def wait
 Wait for tasks to complete. More...
 
def wait_for_tag
 Similar to wait, but guarantees that the returned task has the specified tag. More...
 
def map
 Maps a function to elements in a sequence using work_queue. More...
 
def pair
 Returns the values for a function of each pair from 2 sequences. More...
 
def tree_reduce
 Reduces a sequence until only one value is left, and then returns that value. More...
 
def application_info
 Should return a dictionary with information for the status display. More...
 

Detailed Description

Python Work Queue object.

This class uses a dictionary to map between the task pointer objects and the work_queue.Task.

Constructor & Destructor Documentation

def work_queue.WorkQueue.__init__ (   self,
  port = WORK_QUEUE_DEFAULT_PORT,
  name = None,
  shutdown = False,
  stats_log = None,
  transactions_log = None,
  debug_log = None,
  ssl = None,
  status_display_interval = None 
)

Create a new work queue.

Parameters
selfReference to the current work queue object.
portThe port number to listen on. If zero, then a random port is chosen. A range of possible ports (low, high) can also be specified instead of a single integer.
nameThe project name to use.
stats_logThe name of a file to write the queue's statistics log.
transactions_logThe name of a file to write the queue's transactions log.
debug_logThe name of a file to write the queue's debug log.
shutdownAutomatically shutdown workers when queue is finished. Disabled by default.
sslA tuple of filenames (ssl_key, ssl_cert) in PEM format, or True. If not given, then TLS is not activated. If True, a self-signed temporary key and cert are generated.
status_display_intervalNumber of seconds between updates to the jupyter status display. None, or less than 1 disables it.
See Also
work_queue_create - For more information about environment variables that affect the behavior of this method.

References work_queue.WorkQueue._free_queue(), work_queue.WorkQueue._info_widget, work_queue.WorkQueue._setup_ssl(), work_queue.WorkQueue._shutdown, work_queue.WorkQueue._stats, work_queue.WorkQueue._stats_hierarchy, work_queue.WorkQueue._task_table, work_queue.WorkQueue._update_status_display(), work_queue.WorkQueue._work_queue, work_queue.WorkQueue.shutdown_workers(), work_queue.WorkQueue.specify_log(), work_queue.WorkQueue.specify_transactions_log(), work_queue_delete(), work_queue_specify_name(), and work_queue_ssl_create().
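A minimal construction sketch may help here. The import is guarded so the sketch can be read (and executed) where cctools is not installed; the project name and log filenames are illustrative choices, not defaults of the library.

```python
# Hedged sketch: create a queue on a random port with stats and debug logs.
# Assumes the work_queue module from cctools; guarded so the sketch
# degrades gracefully where cctools is absent.
try:
    from work_queue import WorkQueue
except ImportError:
    WorkQueue = None  # cctools not available

def make_queue(project="my-project"):
    """Create a queue on a random port with stats and debug logs."""
    if WorkQueue is None:
        return None
    return WorkQueue(port=0,               # 0 selects a random port
                     name=project,         # project name reported to the catalog
                     stats_log="stats.log",
                     debug_log="debug.log")
```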

Member Function Documentation

def work_queue.WorkQueue.name (   self)

Get the project name of the queue.

>>> print(q.name)

References work_queue.WorkQueue._work_queue, and work_queue_name().

def work_queue.WorkQueue.port (   self)

Get the listening port of the queue.

>>> print(q.port)

References work_queue.WorkQueue._work_queue, and work_queue_port().

def work_queue.WorkQueue.stats (   self)

Get queue statistics.

>>> print(q.stats)

The fields in stats can also be individually accessed through this call. For example:

>>> print(q.stats.workers_busy)

References work_queue.WorkQueue._stats, work_queue.WorkQueue._work_queue, and work_queue_get_stats().

def work_queue.WorkQueue.stats_hierarchy (   self)

Get worker hierarchy statistics.

>>> print(q.stats_hierarchy)

The fields in stats_hierarchy can also be individually accessed through this call. For example:

>>> print(q.stats_hierarchy.workers_busy)

References work_queue.WorkQueue._stats_hierarchy, work_queue.WorkQueue._work_queue, and work_queue_get_stats_hierarchy().

def work_queue.WorkQueue.stats_category (   self,
  category 
)

Get the task statistics for the given category.

Parameters
selfReference to the current work queue object.
categoryA category name. For example:
>>> s = q.stats_category("my_category")
>>> print(s)
The fields in work_queue_stats can also be individually accessed through this call. For example:
>>> print(s.tasks_waiting)

References work_queue.WorkQueue._work_queue, and work_queue_get_stats_category().

def work_queue.WorkQueue.status (   self,
  request 
)

Get queue information as list of dictionaries.

Parameters
selfReference to the current work queue object
requestOne of: "queue", "tasks", "workers", or "categories" For example:
>>> import json
>>> tasks_info = q.status("tasks")

References work_queue.WorkQueue._work_queue, and work_queue_status().

def work_queue.WorkQueue.workers_summary (   self)

Get resource statistics of workers connected.

Parameters
selfReference to the current work queue object.
Returns
A list of dictionaries that indicate how many .workers connected with a certain number of .cores, .memory, and .disk. For example:
>>> workers = q.workers_summary()
>>> for w in workers:
...     print("{} workers with: {} cores, {} MB memory, {} MB disk".format(w.workers, w.cores, w.memory, w.disk))

References work_queue.WorkQueue._work_queue, and work_queue_workers_summary().

def work_queue.WorkQueue.specify_category_mode (   self,
  category,
  mode 
)

Turn on or off first-allocation labeling for a given category.

By default, only cores, memory, and disk are labeled, and gpus are unlabeled. NOTE: autolabeling is only meaningful when task monitoring is enabled (enable_monitoring). When monitoring is enabled and a task exhausts resources in a worker, mode dictates how work queue handles the exhaustion:

Parameters
selfReference to the current work queue object.
categoryA category name. If None, sets the mode by default for newly created categories.
modeOne of:
  • WORK_QUEUE_ALLOCATION_MODE_FIXED Task fails (default).
  • WORK_QUEUE_ALLOCATION_MODE_MAX If maximum values are specified for cores, memory, disk, and gpus (e.g. via specify_category_max_resources or Task.specify_memory), and one of those resources is exceeded, the task fails. Otherwise it is retried until a large enough worker connects to the manager, using the maximum values specified, and the maximum values so far seen for resources not specified. Use Task.specify_max_retries to set a limit on the number of times work queue attempts to complete the task.
  • WORK_QUEUE_ALLOCATION_MODE_MIN_WASTE As above, but work queue tries allocations to minimize resource waste.
  • WORK_QUEUE_ALLOCATION_MODE_MAX_THROUGHPUT As above, but work queue tries allocations to maximize throughput.

References work_queue.WorkQueue._work_queue, and work_queue_specify_category_mode().
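The failure-versus-retry behavior above can be summarized in a small conceptual sketch. This is plain Python illustrating the decision rule as described, not the library's code; the constants and the helper name are illustrative only.

```python
# Conceptual sketch (not cctools code) of what happens when a monitored
# task exhausts a resource, depending on the category's allocation mode.
FIXED, MAX = "fixed", "max"  # stand-ins for WORK_QUEUE_ALLOCATION_MODE_*

def on_exhaustion(mode, exceeded, declared_max, retries_left):
    """Return 'fail' or 'retry' for a task that exhausted a resource.

    exceeded:     name of the exhausted resource (e.g. 'memory')
    declared_max: dict of explicitly declared maximums, if any
    retries_left: remaining retries per Task.specify_max_retries
    """
    if mode == FIXED:
        return "fail"              # default mode: the task simply fails
    if mode == MAX:
        if exceeded in declared_max:
            return "fail"          # exceeded an explicitly declared maximum
        if retries_left <= 0:
            return "fail"          # retry budget exhausted
        return "retry"             # retried on a larger worker
    return "retry"                 # MIN_WASTE / MAX_THROUGHPUT also retry
```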

def work_queue.WorkQueue.specify_category_autolabel_resource (   self,
  category,
  resource,
  autolabel 
)

Turn on or off first-allocation labeling for a given category and resource.

This function should be used to fine-tune the defaults from specify_category_mode.

Parameters
selfReference to the current work queue object.
categoryA category name.
resourceA resource name.
autolabelTrue/False for on/off.
Returns
1 if resource is valid, 0 otherwise.

References work_queue.WorkQueue._work_queue, and work_queue_enable_category_resource().

def work_queue.WorkQueue.task_state (   self,
  taskid 
)

Get current task state.

See work_queue_task_state_t for possible values.

>>> print(q.task_state(taskid))

References work_queue.WorkQueue._work_queue, and work_queue_task_state().

def work_queue.WorkQueue.enable_monitoring (   self,
  dirname = None,
  watchdog = True 
)

Enables resource monitoring of tasks in the queue, and writes a summary per task to the directory given.

Additionally, all summaries are consolidated into the file all_summaries-PID.log.

Returns 1 on success, 0 on failure (i.e., monitoring was not enabled).

Parameters
selfReference to the current work queue object.
dirnameDirectory name for the monitor output.
watchdogIf True (default), kill tasks that exhaust their declared resources.

References work_queue.WorkQueue._work_queue, and work_queue_enable_monitoring().
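A small usage sketch, assuming an existing WorkQueue `q`; the helper name and the directory name are illustrative. Monitoring should be enabled before tasks are submitted so every task is measured.

```python
# Hedged sketch: turn on per-task resource monitoring and fail loudly
# if it could not be enabled. `q` is any WorkQueue instance; "wq-monitor"
# is an illustrative output directory.
def start_monitoring(q, dirname="wq-monitor"):
    # enable_monitoring returns 1 on success, 0 on failure
    if not q.enable_monitoring(dirname, watchdog=True):
        raise RuntimeError("could not enable resource monitoring")
```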

def work_queue.WorkQueue.enable_monitoring_full (   self,
  dirname = None,
  watchdog = True 
)

As enable_monitoring, but it also generates a time series and a debug file.

WARNING: Such files may reach gigabyte sizes for long running tasks.

Returns 1 on success, 0 on failure (i.e., monitoring was not enabled).

Parameters
selfReference to the current work queue object.
dirnameDirectory name for the monitor output.
watchdogIf True (default), kill tasks that exhaust their declared resources.

References work_queue.WorkQueue._work_queue, and work_queue_enable_monitoring_full().

def work_queue.WorkQueue.activate_fast_abort (   self,
  multiplier 
)

Turn on or off fast abort functionality for a given queue for tasks in the "default" category, and for tasks whose category does not set an explicit multiplier.

Parameters
selfReference to the current work queue object.
multiplierThe multiplier of the average task time at which point to abort; if negative (the default) fast_abort is deactivated.

References work_queue.WorkQueue._work_queue, and work_queue_activate_fast_abort().

def work_queue.WorkQueue.activate_fast_abort_category (   self,
  name,
  multiplier 
)

Turn on or off fast abort functionality for a given queue.

Parameters
selfReference to the current work queue object.
nameName of the category.
multiplierThe multiplier of the average task time at which point to abort; if zero, deactivate fast abort for the category; if negative (the default), use the multiplier of the "default" category (see activate_fast_abort).

References work_queue.WorkQueue._work_queue, and work_queue_activate_fast_abort_category().

def work_queue.WorkQueue.specify_draining_by_hostname (   self,
  hostname,
  drain_mode = True 
)

Turn on or off draining mode for workers at hostname.

Parameters
selfReference to the current work queue object.
hostnameThe hostname of the host running the workers.
drain_modeIf True, no new tasks are dispatched to workers at hostname, and empty workers are shut down. Otherwise, workers work as usual.

References work_queue.WorkQueue._work_queue, and work_queue_specify_draining_by_hostname().

def work_queue.WorkQueue.empty (   self)

Determine whether there are any known tasks queued, running, or waiting to be collected.

Returns 0 if there are tasks remaining in the system, 1 if the system is "empty".

Parameters
selfReference to the current work queue object.

References work_queue.WorkQueue._work_queue, and work_queue_empty().

Referenced by work_queue.WorkQueue.map(), work_queue.WorkQueue.pair(), work_queue.WorkQueue.tree_reduce(), and work_queue.WorkQueue.wait_for_tag().

def work_queue.WorkQueue.hungry (   self)

Determine whether the queue can support more tasks.

Returns the number of additional tasks it can support if "hungry" and 0 if "sated".

Parameters
selfReference to the current work queue object.

References work_queue.WorkQueue._work_queue, and work_queue_hungry().
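A hedged sketch of using hungry() to throttle submission: submit tasks only while the queue reports spare capacity. The helper name is illustrative; `q` is assumed to be a WorkQueue and `pending_tasks` a list of work_queue.Task objects.

```python
# Illustrative helper: submit from a pending list only while hungry()
# reports that the queue can support more tasks (0 means "sated").
def submit_when_hungry(q, pending_tasks):
    submitted = 0
    while pending_tasks and q.hungry():   # hungry() > 0 means capacity left
        q.submit(pending_tasks.pop(0))
        submitted += 1
    return submitted
```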

def work_queue.WorkQueue.specify_algorithm (   self,
  algorithm 
)

Set the worker selection algorithm for queue.

Parameters
selfReference to the current work queue object.
algorithmOne of the following algorithms to use in assigning a task to a worker. See work_queue_schedule_t for possible values.

References work_queue.WorkQueue._work_queue, and work_queue_specify_algorithm().

def work_queue.WorkQueue.specify_task_order (   self,
  order 
)

Set the order for dispatching submitted tasks in the queue.

Parameters
selfReference to the current work queue object.
orderOne of the following algorithms to use in dispatching submitted tasks to workers:

References work_queue.WorkQueue._work_queue, and work_queue_specify_task_order().

def work_queue.WorkQueue.specify_name (   self,
  name 
)

Change the project name for the given queue.

Parameters
selfReference to the current work queue object.
nameThe new project name.

References work_queue.WorkQueue._work_queue, and work_queue_specify_name().

def work_queue.WorkQueue.specify_manager_preferred_connection (   self,
  mode 
)

Set the preference for using hostname over IP address to connect.

'by_ip' uses IP addresses from the network interfaces of the manager (standard behavior), 'by_hostname' uses the hostname at the manager, and 'by_apparent_ip' uses the address of the manager as seen by the catalog server.

Parameters
selfReference to the current work queue object.
modeA string indicating one of 'by_ip', 'by_hostname', or 'by_apparent_ip'.

References work_queue.WorkQueue._work_queue, and work_queue_manager_preferred_connection().

def work_queue.WorkQueue.specify_master_preferred_connection (   self,
  mode 
)

See specify_manager_preferred_connection.

References work_queue.WorkQueue._work_queue, and work_queue_manager_preferred_connection().

def work_queue.WorkQueue.specify_min_taskid (   self,
  minid 
)

Set the minimum taskid of future submitted tasks.

Further submitted tasks are guaranteed to have a taskid larger than or equal to minid. This function is useful to make taskids consistent in a workflow that consists of sequential managers. (Note: This function is rarely used.) If the minimum id provided is smaller than the last taskid computed, the minimum id provided is ignored.

Parameters
selfReference to the current work queue object.
minidMinimum desired taskid
Returns
Returns the actual minimum taskid for future tasks.

References work_queue.WorkQueue._work_queue, and work_queue_specify_min_taskid().

def work_queue.WorkQueue.specify_priority (   self,
  priority 
)

Change the project priority for the given queue.

Parameters
selfReference to the current work queue object.
priorityAn integer that represents the priority of this work queue manager. The higher the value, the higher the priority.

References work_queue.WorkQueue._work_queue, and work_queue_specify_priority().

def work_queue.WorkQueue.specify_num_tasks_left (   self,
  ntasks 
)

Specify the number of tasks not yet submitted to the queue.

It is used by work_queue_factory to determine the number of workers to launch. If not specified, it defaults to 0. work_queue_factory considers the number of tasks as: num tasks left + num tasks running + num tasks read.

Parameters
selfReference to the current work queue object.
ntasksNumber of tasks yet to be submitted.

References work_queue.WorkQueue._work_queue, and work_queue_specify_num_tasks_left().

def work_queue.WorkQueue.specify_manager_mode (   self,
  mode 
)

Specify the manager mode for the given queue.

(Kept for compatibility. It is a no-op.)

Parameters
selfReference to the current work queue object.
modeThis may be one of the following values: WORK_QUEUE_MASTER_MODE_STANDALONE or WORK_QUEUE_MASTER_MODE_CATALOG.

References work_queue.WorkQueue._work_queue, and work_queue_specify_manager_mode().

def work_queue.WorkQueue.specify_master_mode (   self,
  mode 
)

See specify_manager_mode.

References work_queue.WorkQueue._work_queue, and work_queue_specify_manager_mode().

def work_queue.WorkQueue.specify_catalog_server (   self,
  hostname,
  port 
)

Specify the catalog server the manager should report to.

Parameters
selfReference to the current work queue object.
hostnameThe hostname of the catalog server.
portThe port the catalog server is listening on.

References work_queue.WorkQueue._work_queue, and work_queue_specify_catalog_server().

def work_queue.WorkQueue.specify_log (   self,
  logfile 
)

Specify a log file that records the cumulative stats of connected workers and submitted tasks.

Parameters
selfReference to the current work queue object.
logfileFilename.

References work_queue.WorkQueue._work_queue, and work_queue_specify_log().

Referenced by work_queue.WorkQueue.__init__().

def work_queue.WorkQueue.specify_transactions_log (   self,
  logfile 
)

Specify a log file that records the states of tasks.

Parameters
selfReference to the current work queue object.
logfileFilename.

References work_queue.WorkQueue._work_queue, and work_queue_specify_transactions_log().

Referenced by work_queue.WorkQueue.__init__().

def work_queue.WorkQueue.specify_password (   self,
  password 
)

Add a mandatory password that each worker must present.

Parameters
selfReference to the current work queue object.
passwordThe password.

References work_queue.WorkQueue._work_queue, and work_queue_specify_password().

def work_queue.WorkQueue.specify_password_file (   self,
  file 
)

Add a mandatory password file that each worker must present.

Parameters
selfReference to the current work queue object.
fileName of the file containing the password.

References work_queue.WorkQueue._work_queue, and work_queue_specify_password_file().

def work_queue.WorkQueue.specify_max_resources (   self,
  rmd 
)

Specifies the maximum resources allowed for the default category.

Parameters
selfReference to the current work queue object.
rmdDictionary indicating maximum values. See Task.resources_measured for possible fields. For example:
>>> # A maximum of 4 cores is found on any worker:
>>> q.specify_max_resources({'cores': 4})
>>> # A maximum of 8 cores, 1GB of memory, and 10GB disk are found on any worker:
>>> q.specify_max_resources({'cores': 8, 'memory': 1024, 'disk': 10240})

References work_queue.WorkQueue._work_queue, and work_queue_specify_max_resources().

def work_queue.WorkQueue.specify_min_resources (   self,
  rmd 
)

Specifies the minimum resources allowed for the default category.

Parameters
selfReference to the current work queue object.
rmdDictionary indicating minimum values. See Task.resources_measured for possible fields. For example:
>>> # A minimum of 2 cores is found on any worker:
>>> q.specify_min_resources({'cores': 2})
>>> # A minimum of 4 cores, 512MB of memory, and 1GB disk are found on any worker:
>>> q.specify_min_resources({'cores': 4, 'memory': 512, 'disk': 1024})

References work_queue.WorkQueue._work_queue, and work_queue_specify_min_resources().

def work_queue.WorkQueue.specify_category_max_resources (   self,
  category,
  rmd 
)

Specifies the maximum resources allowed for the given category.

Parameters
selfReference to the current work queue object.
categoryName of the category.
rmdDictionary indicating maximum values. See Task.resources_measured for possible fields. For example:
>>> # A maximum of 4 cores may be used by a task in the category:
>>> q.specify_category_max_resources("my_category", {'cores': 4})
>>> # A maximum of 8 cores, 1GB of memory, and 10GB of disk may be used by a task:
>>> q.specify_category_max_resources("my_category", {'cores': 8, 'memory': 1024, 'disk': 10240})

References work_queue.WorkQueue._work_queue, and work_queue_specify_category_max_resources().

def work_queue.WorkQueue.specify_category_min_resources (   self,
  category,
  rmd 
)

Specifies the minimum resources allowed for the given category.

Parameters
selfReference to the current work queue object.
categoryName of the category.
rmdDictionary indicating minimum values. See Task.resources_measured for possible fields. For example:
>>> # A minimum of 2 cores is found on any worker:
>>> q.specify_category_min_resources("my_category", {'cores': 2})
>>> # A minimum of 4 cores, 512MB of memory, and 1GB disk are found on any worker:
>>> q.specify_category_min_resources("my_category", {'cores': 4, 'memory': 512, 'disk': 1024})

References work_queue.WorkQueue._work_queue, and work_queue_specify_category_min_resources().

def work_queue.WorkQueue.specify_category_first_allocation_guess (   self,
  category,
  rmd 
)

Specifies the first-allocation guess for the given category.

Parameters
selfReference to the current work queue object.
categoryName of the category.
rmdDictionary indicating maximum values. See Task.resources_measured for possible fields. For example:
>>> # Tasks are first tried with 4 cores:
>>> q.specify_category_first_allocation_guess("my_category", {'cores': 4})
>>> # Tasks are first tried with 8 cores, 1GB of memory, and 10GB of disk:
>>> q.specify_category_first_allocation_guess("my_category", {'cores': 8, 'memory': 1024, 'disk': 10240})

References work_queue.WorkQueue._work_queue, and work_queue_specify_category_first_allocation_guess().

def work_queue.WorkQueue.initialize_categories (   self,
  filename,
  rm 
)

Initialize first value of categories.

Parameters
selfReference to the current work queue object.
filenameJSON file with resource summaries.
rmDictionary indicating maximum values. See Task.resources_measured for possible fields.

References work_queue.WorkQueue._work_queue, and work_queue_initialize_categories().

def work_queue.WorkQueue.cancel_by_taskid (   self,
  id 
)

Cancel task identified by its taskid and remove from the given queue.

Parameters
selfReference to the current work queue object.
idThe taskid returned from submit.

References work_queue.WorkQueue._work_queue, and work_queue_cancel_by_taskid().

Referenced by work_queue.WorkQueue.cancel_by_category().

def work_queue.WorkQueue.cancel_by_tasktag (   self,
  tag 
)

Cancel task identified by its tag and remove from the given queue.

Parameters
selfReference to the current work queue object.
tagThe tag assigned to task using specify_tag.

References work_queue.WorkQueue._work_queue, and work_queue_cancel_by_tasktag().

def work_queue.WorkQueue.cancel_by_category (   self,
  category 
)

Cancel all tasks of the given category and remove them from the queue.

Parameters
selfReference to the current work queue object.
categoryThe name of the category of tasks to cancel.

References work_queue.WorkQueue.cancel_by_taskid().

def work_queue.WorkQueue.shutdown_workers (   self,
  n 
)

Shutdown workers connected to queue.

Gives a best effort and then returns the number of workers that were given the shutdown order.

Parameters
selfReference to the current work queue object.
nThe number to shutdown. To shut down all workers, specify "0".

References work_queue.WorkQueue._work_queue, and work_queue_shut_down_workers().

Referenced by work_queue.WorkQueue.__init__().

def work_queue.WorkQueue.block_host (   self,
  host 
)

Block workers running on host from working for the manager.

Parameters
selfReference to the current work queue object.
hostThe hostname of the host running the workers.

References work_queue.WorkQueue._work_queue, and work_queue_block_host().

Referenced by work_queue.WorkQueue.blacklist().

def work_queue.WorkQueue.blacklist (   self,
  host 
)
def work_queue.WorkQueue.block_host_with_timeout (   self,
  host,
  timeout 
)

Block workers running on host for the duration of the given timeout.

Parameters
selfReference to the current work queue object.
hostThe hostname of the host running the workers.
timeoutHow long this block entry lasts (in seconds). If less than 1, block indefinitely.

References work_queue.WorkQueue._work_queue, and work_queue_block_host_with_timeout().

Referenced by work_queue.WorkQueue.blacklist_with_timeout().

def work_queue.WorkQueue.blacklist_with_timeout (   self,
  host,
  timeout 
)
def work_queue.WorkQueue.unblock_host (   self,
  host = None 
)

Unblock the given host, or all hosts if host is not given.

Parameters
selfReference to the current work queue object.
hostThe hostname of the host to unblock. If None, unblock all hosts.

References work_queue.WorkQueue._work_queue, work_queue_unblock_all(), and work_queue_unblock_host().

Referenced by work_queue.WorkQueue.blacklist_clear().

def work_queue.WorkQueue.blacklist_clear (   self,
  host = None 
)
def work_queue.WorkQueue.invalidate_cache_file (   self,
  local_name 
)

Delete file from the workers' caches.

Parameters
selfReference to the current work queue object.
local_nameName of the file as seen by the manager.

References work_queue.WorkQueue._work_queue, and work_queue_invalidate_cached_file().

def work_queue.WorkQueue.specify_keepalive_interval (   self,
  interval 
)

Change keepalive interval for a given queue.

Parameters
selfReference to the current work queue object.
intervalMinimum number of seconds to wait before sending new keepalive checks to workers.

References work_queue.WorkQueue._work_queue, and work_queue_specify_keepalive_interval().

def work_queue.WorkQueue.specify_keepalive_timeout (   self,
  timeout 
)

Change keepalive timeout for a given queue.

Parameters
selfReference to the current work queue object.
timeoutMinimum number of seconds to wait for a keepalive response from worker before marking it as dead.

References work_queue.WorkQueue._work_queue, and work_queue_specify_keepalive_timeout().

def work_queue.WorkQueue.estimate_capacity (   self)

Turn on manager capacity measurements.

Parameters
selfReference to the current work queue object.

References work_queue.WorkQueue._work_queue, and work_queue_specify_estimate_capacity_on().

def work_queue.WorkQueue.tune (   self,
  name,
  value 
)

Tune advanced parameters for work queue.

Parameters
selfReference to the current work queue object.
nameThe name of the parameter to tune. Can be one of the following:
  • "resource-submit-multiplier" Treat each worker as having ({cores,memory,gpus} * multiplier) when submitting tasks. This allows for tasks to wait at a worker rather than the manager. (default = 1.0)
  • "min-transfer-timeout" Set the minimum number of seconds to wait for files to be transferred to or from a worker. (default=10)
  • "foreman-transfer-timeout" Set the minimum number of seconds to wait for files to be transferred to or from a foreman. (default=3600)
  • "transfer-outlier-factor" Transfer that are this many times slower than the average will be aborted. (default=10x)
  • "default-transfer-rate" The assumed network bandwidth used until sufficient data has been collected. (1MB/s)
  • "fast-abort-multiplier" Set the multiplier of the average task time at which point to abort; if negative or zero fast_abort is deactivated. (default=0)
  • "keepalive-interval" Set the minimum number of seconds to wait before sending new keepalive checks to workers. (default=300)
  • "keepalive-timeout" Set the minimum number of seconds to wait for a keepalive response from worker before marking it as dead. (default=30)
  • "short-timeout" Set the minimum timeout when sending a brief message to a single worker. (default=5s)
  • "long-timeout" Set the minimum timeout when sending a brief message to a foreman. (default=1h)
  • "category-steady-n-tasks" Set the number of tasks considered when computing category buckets.
  • "hungry-minimum" Mimimum number of tasks to consider queue not hungry. (default=10)
  • "wait-for-workers" Mimimum number of workers to connect before starting dispatching tasks. (default=0)
  • "wait_retrieve_many" Parameter to alter how work_queue_wait works. If set to 0, work_queue_wait breaks out of the while loop whenever a task changes to WORK_QUEUE_TASK_DONE (wait_retrieve_one mode). If set to 1, work_queue_wait does not break, but continues recieving and dispatching tasks. This occurs until no task is sent or recieved, at which case it breaks out of the while loop (wait_retrieve_many mode). (default=0)
valueThe value to set the parameter to.
Returns
0 on success, -1 on failure.

References work_queue.WorkQueue._work_queue, and work_queue_tune().
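A usage sketch for tune(): the parameter names come from the list above, but the values and the helper are illustrative choices for an assumed WorkQueue `q`.

```python
# Hedged sketch: apply a few tuning parameters and check each call.
# Parameter names are from the documented list; values are examples only.
def apply_tuning(q):
    settings = {
        "hungry-minimum": 20,       # require 20 waiting tasks before "hungry"
        "keepalive-interval": 120,  # probe workers every 2 minutes
        "wait-for-workers": 5,      # hold dispatch until 5 workers connect
    }
    for name, value in settings.items():
        if q.tune(name, value) != 0:   # tune() returns 0 on success
            raise RuntimeError("failed to tune %s" % name)
```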

def work_queue.WorkQueue.submit (   self,
  task 
)

Submit a task to the queue.

It is safe to re-submit a task returned by wait.

Parameters
self Reference to the current work queue object.
task A task description created from work_queue.Task.

References work_queue.WorkQueue._task_table, work_queue.WorkQueue._work_queue, and work_queue_submit().

Referenced by work_queue.WorkQueue.map(), work_queue.WorkQueue.pair(), and work_queue.WorkQueue.tree_reduce().
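A typical submission builds a work_queue.Task, declares its files, and hands it to submit. A minimal sketch; the port, command, and file names are hypothetical:

```python
def submit_example():
    # Deferred import; requires the cctools work_queue bindings.
    import work_queue

    q = work_queue.WorkQueue(port=9123)            # example port
    t = work_queue.Task("gzip < infile > outfile") # example command
    t.specify_input_file("infile")                 # ship this file to the worker
    t.specify_output_file("outfile")               # fetch this file back
    taskid = q.submit(t)                           # returns the assigned task id
    return taskid
```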

def work_queue.WorkQueue.wait (   self,
  timeout = WORK_QUEUE_WAITFORTASK 
)

Wait for tasks to complete.

This call will block until a task completes or the timeout has elapsed.

Parameters
self Reference to the current work queue object.
timeout The number of seconds to wait for a completed task before returning. Use an integer to set the timeout or the constant WORK_QUEUE_WAITFORTASK to block until a task has completed.

References work_queue.WorkQueue.wait_for_tag().
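The usual pattern is a drain loop that calls wait with a short timeout until the queue is empty. A sketch of that loop; the 5-second timeout and the attributes collected are illustrative:

```python
def wait_loop(q):
    # Collect (tag, return_status) for each task as it completes.
    # A finite timeout keeps the loop responsive; wait returns None
    # when no task finished within the timeout.
    results = []
    while not q.empty():
        t = q.wait(5)
        if t:
            results.append((t.tag, t.return_status))
    return results
```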

def work_queue.WorkQueue.wait_for_tag (   self,
  tag,
  timeout = WORK_QUEUE_WAITFORTASK 
)

Similar to wait, but guarantees that the returned task has the specified tag.

This call will block until the timeout has elapsed.

Parameters
self Reference to the current work queue object.
tag Desired tag. If None, then it is equivalent to self.wait(timeout).
timeout The number of seconds to wait for a completed task before returning.

References work_queue.WorkQueue._task_table, work_queue.WorkQueue._update_status_display(), work_queue.WorkQueue._work_queue, work_queue.WorkQueue.empty(), and work_queue_wait_for_tag().

Referenced by work_queue.WorkQueue.map(), work_queue.WorkQueue.pair(), work_queue.WorkQueue.tree_reduce(), and work_queue.WorkQueue.wait().
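wait_for_tag lets an application collect only the tasks belonging to one stage of a pipeline. A sketch, assuming tasks were submitted with t.tag set; the tag name and 30-second timeout are hypothetical:

```python
def collect_stage(q, tag="stage-1"):
    # Gather every completed task carrying the given tag; tasks with
    # other tags are left for later wait calls to retrieve.
    done = []
    while not q.empty():
        t = q.wait_for_tag(tag, 30)
        if t:
            done.append(t)
    return done
```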

def work_queue.WorkQueue.map (   self,
  fn,
  array,
  chunk_size = 1 
)

Maps a function to elements in a sequence using work_queue.

Similar to the built-in map function in Python.

Parameters
self Reference to the current work queue object.
fn The function to call on each element.
array The sequence of elements the function will be applied to.
chunk_size The number of elements to process at once.

References work_queue.WorkQueue.empty(), batch_queue_module.submit, batch_queue_module::@3.submit, work_queue.WorkQueue.submit(), and work_queue.WorkQueue.wait_for_tag().
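A sketch of driving map with a simple function; the port and inputs are illustrative, and the function must be importable/serializable since it runs in remote tasks:

```python
def square(x):
    # The per-element function; plain Python, executed in remote tasks.
    return x * x

def map_example():
    # Deferred import; requires the cctools work_queue bindings.
    import work_queue

    q = work_queue.WorkQueue(port=9123)  # example port
    # chunk_size=2 groups two elements per remote task.
    return q.map(square, [1, 2, 3, 4, 5, 6], chunk_size=2)
```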

def work_queue.WorkQueue.pair (   self,
  fn,
  seq1,
  seq2,
  chunk_size = 1 
)

Returns the values of a function applied to each pair drawn from two sequences.

The pairs that are passed into the function are generated by itertools.

Parameters
self Reference to the current work queue object.
fn The function to call on each pair.
seq1 The first sequence used to generate pairs.
seq2 The second sequence used to generate pairs.
chunk_size The number of elements to process at once.

References work_queue.WorkQueue.empty(), batch_queue_module.submit, batch_queue_module::@3.submit, work_queue.WorkQueue.submit(), and work_queue.WorkQueue.wait_for_tag().
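A sketch of pair with a small combining function. The documentation above does not spell out the calling convention, so this sketch assumes fn receives each generated pair as a single tuple; the port and inputs are illustrative:

```python
def concat(pair):
    # Assumed convention: fn receives one (a, b) tuple per generated pair.
    a, b = pair
    return "%s-%s" % (a, b)

def pair_example():
    # Deferred import; requires the cctools work_queue bindings.
    import work_queue

    q = work_queue.WorkQueue(port=9123)  # example port
    # Applies concat to every pair drawn from the two sequences.
    return q.pair(concat, ["a", "b"], [1, 2])
```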

def work_queue.WorkQueue.tree_reduce (   self,
  fn,
  seq,
  chunk_size = 2 
)

Reduces a sequence until only one value is left, and then returns that value.

The sequence is reduced by passing pairs of elements into a function and storing the results. A new sequence is then formed from the results and reduced again, until only one value is left.

If the sequence has an odd length, the last element gets reduced at the end.

Parameters
self Reference to the current work queue object.
fn The function to call on each group of elements.
seq The sequence that will be reduced.
chunk_size The number of elements per Task (for tree reduce, must be greater than 1).

References work_queue.WorkQueue.empty(), batch_queue_module.submit, batch_queue_module::@3.submit, work_queue.WorkQueue.submit(), and work_queue.WorkQueue.wait_for_tag().
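A sketch of tree_reduce with an associative reducer. Since chunks may hold two or more elements, the function here accepts any sequence; the port and inputs are illustrative:

```python
def largest(seq):
    # Associative reducer: works whether a chunk holds 2 or more elements,
    # so the tree of partial results converges to the same answer.
    return max(seq)

def tree_reduce_example():
    # Deferred import; requires the cctools work_queue bindings.
    import work_queue

    q = work_queue.WorkQueue(port=9123)  # example port
    # chunk_size must be greater than 1 for a tree reduction.
    return q.tree_reduce(largest, [3, 1, 4, 1, 5, 9, 2, 6], chunk_size=2)
```

Associativity matters here: because partial results are re-reduced in an order the queue chooses, a non-associative fn would give order-dependent answers.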

def work_queue.WorkQueue.application_info (   self)

Should return a dictionary with information for the status display.

This method is meant to be overridden by custom applications.

The dictionary should be of the form:

{ "application_info" : {"values" : dict, "units" : dict} }

where "units" is an optional dictionary that indicates the units of the corresponding key in "values".

Parameters
self Reference to the current work queue object.

For example:

>>> myapp.application_info()
{'application_info': {'values': {'size_max_output': 0.361962, 'current_chunksize': 65536}, 'units': {'size_max_output': 'MB'}}}
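A sketch of overriding this method in a subclass. The metric names, values, and units below are hypothetical; only the outer dictionary shape is mandated by the documentation above:

```python
def make_status_queue_class():
    # Deferred import; requires the cctools work_queue bindings.
    import work_queue

    class MyApp(work_queue.WorkQueue):
        # Hypothetical subclass reporting custom metrics to the status display.
        def application_info(self):
            return {
                "application_info": {
                    "values": {"frames_rendered": 128, "current_chunksize": 65536},
                    # "units" is optional; keys mirror those in "values".
                    "units": {"frames_rendered": "frames"},
                }
            }

    return MyApp
```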

The documentation for this class was generated from the following file: