= Statistical Algorithms Importer: Docker Support =

This page explains how to create and run Docker images on the D4Science infrastructure through the DataMiner Manager service and the algorithms developed with the Statistical Algorithms Importer (SAI). Currently, the Docker Image Executor (DIE) algorithm is provided for this purpose. More information on Docker can be found on the official Docker website.

=== The Algorithm Docker Image Executor (DIE) ===

The Docker Image Executor (DIE) algorithm is already present and accessible on the D4Science infrastructure:

[[File:DockerImageExecutor.png|thumb|center|Docker Image Executor (DIE), Docker Support]]

This algorithm allows you to retrieve the image that you intend to run on the D4Science Swarm cluster from a Docker Hub repository. To run the algorithm the user must enter:

* Image, the name of the repository (e.g. d4science/sortapp)
* CommandName, the name of the command to invoke when the service is started (e.g. sortapp)
* FileParam, a file present in the user's workspace to be passed as an input parameter along with the run command (e.g. sortableelements.txt)

This algorithm will take care of retrieving the user token and passing the parameters to the docker service in this format:

<pre>
<command-name> <token> <file-item-id> <temp-dir-item-id>
</pre>
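
As an illustration of this contract, the following minimal sketch shows how a Python entry point (as in the sortapp example) might read the arguments passed by the DIE; it is a hypothetical example, not the actual sortapp code:

<pre>
import sys

def main():
    # The DIE starts the service as:
    #   <command-name> <token> <file-item-id> <temp-dir-item-id>
    # so the values arrive as positional arguments.
    if len(sys.argv) != 4:
        sys.exit("usage: <command-name> <token> <file-item-id> <temp-dir-item-id>")

    token = sys.argv[1]             # gCube user token retrieved by the DIE
    file_item_id = sys.argv[2]      # StorageHub item id of the input file (FileParam)
    temp_dir_item_id = sys.argv[3]  # StorageHub item id of the temporary results folder

    print("token:", token)
    print("input file item id:", file_item_id)
    print("temporary folder item id:", temp_dir_item_id)

if __name__ == "__main__":
    main()
</pre>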

In addition to passing the token and the input file, the algorithm also passes the id of the temporary folder that was created on the [[StorageHub_REST_API|StorageHub]] service to contain the computation results. The service created from the chosen image is responsible for saving the data of its own computation in the folder indicated by the algorithm, by interacting with the StorageHub service. When the execution of the docker service is finished, that is, when it has created the results and saved them in the temporary folder, the [https://services.d4science.org/group/rprototypinglab/data-miner?OperatorId=org.gcube.dataanalysis.wps.statisticalmanager.synchserver.mappedclasses.transducerers.DOCKER_IMAGE_EXECUTOR Docker Image Executor (DIE)] algorithm takes care of returning the content of the temporary folder as a zip file. It is therefore important that the docker image is written with these constraints in mind.
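
To make this constraint concrete, the sketch below shows one possible way the containerized application could download the input file and upload its result into the temporary folder over HTTP with the requests library. The base URL, the /items/{id}/download and /items/{id}/create/FILE paths and the gcube-token query parameter are assumptions to be verified against the [[StorageHub_REST_API|StorageHub REST API]] documentation, and the sorting step only stands in for the real computation:

<pre>
import sys
import requests

# Assumed StorageHub REST API base URL; verify it for the target infrastructure.
STORAGEHUB = "https://api.d4science.org/workspace"

def main():
    token, file_item_id, temp_dir_item_id = sys.argv[1:4]
    auth = {"gcube-token": token}  # assumed authentication query parameter

    # Download the input file (FileParam) from the user's workspace.
    r = requests.get(f"{STORAGEHUB}/items/{file_item_id}/download", params=auth)
    r.raise_for_status()
    lines = r.text.splitlines()

    # Placeholder computation: sort the elements of the input file.
    result = "\n".join(sorted(lines))

    # Save the result in the temporary folder created by the DIE, so that
    # the algorithm can return the folder content as a zip file.
    r = requests.post(
        f"{STORAGEHUB}/items/{temp_dir_item_id}/create/FILE",
        params=auth,
        data={"name": "sorted.txt", "description": "computation result"},
        files={"file": ("sorted.txt", result.encode("utf-8"))},
    )
    r.raise_for_status()

if __name__ == "__main__":
    main()
</pre>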

=== How to Create a Docker Image ===

An example of how to create a docker image suitable for running via the DIE algorithm is shown here: this image is built starting from the base python:3.6-alpine image and installing the sortapp application, written in Python 3.6 (see the Dockerfile).

The image is available on Docker Hub as d4science/sortapp.
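
For reference, a Dockerfile for an image of this kind could look roughly like the sketch below. It is an assumption-based example, not the actual d4science/sortapp Dockerfile: the sortapp.py file name, the shebang requirement and the installed packages are hypothetical; what matters is that the CommandName (here sortapp) is resolvable on the PATH and accepts the arguments passed by the DIE:

<pre>
FROM python:3.6-alpine

# Dependency used by the application to call the StorageHub REST API (assumed).
RUN pip install --no-cache-dir requests

# Install the application as the command that the DIE will invoke.
# sortapp.py is a hypothetical script assumed to start with "#!/usr/bin/env python3".
COPY sortapp.py /usr/local/bin/sortapp
RUN chmod +x /usr/local/bin/sortapp

# The DIE starts the service as:
#   sortapp <token> <file-item-id> <temp-dir-item-id>
# so no ENTRYPOINT/CMD is strictly required here.
</pre>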

So, in general, the image could also be built using other languages and different base images. What remains binding is that the image accepts the parameters as passed by the DIE and respects the constraint of saving the results in the temporary folder on [[StorageHub_REST_API|StorageHub]] as indicated.