WorkflowJDLAdaptor

=Overview=

This adaptor, as part of the adaptors offered by the [[WorkflowEngine]], constructs an [[ExecutionPlan]] based on the description of a job expressed in the JDL syntax. The description can be of a single job or it can include a DAG of jobs. The JDL description is parsed using the JDLParser, and the adaptor then processes the retrieved ParsedJDLInfo to create the ExecutionPlan.
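
For reference, a minimal JDL description of a single Normal job of the kind the adaptor consumes could look like the sketch below; the executable and sandbox entries are purely illustrative.

<pre>
[
  Type = "Job";
  JobType = "Normal";
  Executable = "/bin/hostname";
  StdOutput = "std.out";
  StdError = "std.err";
  OutputSandbox = {"std.out", "std.err"};
]
</pre>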
  
The AdaptorJDLResources provided include all the AttachedJDLResource items that are expected as input by the defined jobs. The output resources of the workflow, retrievable as OutputSandboxJDLResource instances, are constructed from the elements found in the Output Sandbox of every job.
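
As an illustrative sketch of where these resources come from, consider the sandbox declarations below (file names are hypothetical): each InputSandbox entry is expected to be supplied as an AttachedJDLResource, while every OutputSandbox entry becomes retrievable as an OutputSandboxJDLResource once the execution has completed.

<pre>
InputSandbox  = {"config.xml", "input.dat"};
OutputSandbox = {"results.tar.gz", "std.out", "std.err"};
</pre>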
  
Depending on the configuration, the adaptor will create an ExecutionPlan that orchestrates the execution of a DAG of jobs either as a series of SequencePlanElement and FlowPlanElement elements or as a single BagPlanElement. The first case yields a well defined series of operations, but constructing such a series is an exercise in graph topological sorting; that problem can have multiple valid answers and, depending on the shape of the original graph, the chosen ordering may restrict the parallelization factor of the overall DAG, so for complex graphs this option can hurt the parallelization capabilities of the execution plan. The second case is much more dynamic: it allows the set of nodes to execute to be decided at execution time. This comes at the cost of increased runtime complexity compared to the well defined plan, but it can provide the optimal parallelization capabilities.
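
For illustration, a DAG of jobs could be declared with the standard JDL DAG attributes as in the sketch below; the node names, referenced files and dependencies are hypothetical, and the exact attribute set accepted by the adaptor is not detailed here.

<pre>
[
  Type = "DAG";
  Nodes = [
    nodeA = [ file = "prepare.jdl"; ];
    nodeB = [ file = "fetch.jdl"; ];
    nodeC = [ file = "merge.jdl"; ];
    nodeD = [ file = "report.jdl"; ];
    Dependencies = { {nodeA, nodeC}, {nodeB, nodeC}, {nodeB, nodeD} };
  ];
]
</pre>

With a layered sequence/flow plan, such a graph would typically become a FlowPlanElement running nodeA and nodeB followed by a FlowPlanElement running nodeC and nodeD, which forces nodeD to wait for nodeA even though it only depends on nodeB; a BagPlanElement can instead launch nodeD as soon as nodeB completes, which is where its better parallelization comes from.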
  
Staging of input files for the executables is performed at the level of the Input Sandbox defined for each job. The resources that are attached to the adaptor are stored in the Storage System and are retrieved on the node that hosts the job whose Input Sandbox needs them. The files declared in the Output Sandbox of a job are stored in the Storage System, and information on how to retrieve the output is provided through the OutputSandboxJDLResource, which is valid after the completion of the execution.
  
 
=Known limitations=
 
*Supported jobs are only those of type Normal.
*The case of node collocation is not handled correctly, because multiple BoundaryPlanElement instances are created. The node used is still a single one, but it is contacted multiple times and data locality is not exploited correctly.
*The arguments defined for an executable in the respective JDL attribute are split using the space character (' ') as a delimiter when they are passed to the ShellPlanElement. As a result, a phrase containing spaces cannot be passed as a single argument (see the sketch after this list).
*The Retry and Shallow Retry attributes of the JDL are treated equally, and they are applied at the level of the ShellPlanElement rather than at the level of the BoundaryPlanElement.
*After the execution completes, no cleanup is done in the Storage System.
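
To illustrate the argument-splitting limitation, consider the hypothetical attribute below, where the intent is to pass the single file name "my data.txt" to the executable.

<pre>
Arguments = "-input my data.txt";
</pre>

Because the value is split on the space character before it reaches the ShellPlanElement, the executable receives three separate arguments (-input, my and data.txt) instead of the intended two.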
