How to use the DataMiner Pool Manager

The DataMiner Pool Manager service (DMPM) is a REST service that rationalizes and automates the process of publishing SAI algorithms on DataMiner nodes and keeps the DataMiner cluster up to date.

Maven coordinates

The second version of the service was released in gCube 4.6.1. The Maven artifact coordinates are:

<dependency>
   <groupId>org.gcube.dataanalysis</groupId>
   <artifactId>dataminer-pool-manager</artifactId>
   <version>2.0.0-SNAPSHOT</version> 
   <packaging>war</packaging>
</dependency>

Overview

The service accepts an algorithm descriptor, including its dependencies; generates (via templating) the Ansible playbook, inventory, and roles for the relevant components (algorithm installer, algorithms, dependencies); executes the playbook on a staging DataMiner; and finally updates the lists of dependencies and algorithms that a cron job later uses for the installation.

In other words, the service accepts as input the URL of an algorithm package (including the jar and metadata), extracts the information needed for installation, installs the script, updates the list of dependencies, publishes the new algorithm in the Information System, and asynchronously returns the execution outcome to the caller.

Architecture

The following main entities are involved in the integration between SAI and the production environment:

  • SAI: the component through which the user uploads the package of the algorithm to deploy and chooses the VRE on which to deploy it
  • Dataminer Pool Manager: a SmartGears REST service in charge of managing the installation of algorithms on the infrastructure dataminers
  • The Staging DataMiner: a dedicated dataminer machine, usable only by the Dataminer Pool Manager, used to test the installation of an algorithm and of its dependencies. Two dataminers in the d4science infrastructure are staging-oriented (this choice can be set in the configuration file):
    • dataminer1-devnext.d4science.org for the development environment
    • dataminer-proto-ghost.d4science.org for the production environment
  • SVN Dependencies Lists: lists (files on SVN) of dependencies that must be installed on Dataminer machines. There is one list per type of dependency for each of the Dev, RProto, and Production environments.
  • SVN Algorithms Lists: lists (files on SVN) of algorithms that must be installed on Dataminer machines. The service uses three different lists: one for the Dev environment, one for RProto, and one for Production.
  • The Cron job: runs on every Dataminer and periodically (every minute) aligns the packages and algorithms installed on the machine with the SVN Dependencies Lists and the SVN Algorithms Lists. For algorithms, the cron job runs the command line stored in each record of the SVN list; for dependencies, it simply reads the whole set of dependency lists (a sketch of this alignment logic follows the list of URLs below). The lists to consider are the following:
    • Production Algorithms:
   http://svn.research-infrastructures.eu/public/d4science/gcube/trunk/data-analysis/DataMinerConfiguration/algorithms/prod/algorithms
    • RProto Algorithms:
   http://svn.research-infrastructures.eu/public/d4science/gcube/trunk/data-analysis/DataMinerConfiguration/algorithms/proto/algorithms
    • Dev Algorithms:
   http://svn.research-infrastructures.eu/public/d4science/gcube/trunk/data-analysis/DataMinerConfiguration/algorithms/dev/algorithms
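
The cron script itself is not published on this page; the following Python fragment is only an illustrative sketch of the alignment logic it performs for algorithms, assuming the pipe-delimited record format shown later in the Usage and APIs section (the field positions are inferred from that example, not from a published schema):

import subprocess
import urllib.request

# Illustrative sketch only: the real cron job is a script on each DataMiner.
# It fetches the SVN algorithms list of its environment and runs the install
# command of any algorithm not yet installed on the machine.
ALGO_LIST_URL = ("http://svn.research-infrastructures.eu/public/d4science/gcube"
                 "/trunk/data-analysis/DataMinerConfiguration/algorithms/dev/algorithms")

def fetch_records(url):
    # Assumed record layout: | NAME | author | CATEGORY | Env | command | notes | timestamp |
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            fields = [f.strip() for f in line.strip().strip("|").split("|")]
            if len(fields) >= 5:
                yield fields

def align(installed_names):
    # installed_names: set of algorithm names already present on this machine
    for fields in fetch_records(ALGO_LIST_URL):
        name, command = fields[0], fields[4]
        if name not in installed_names:
            subprocess.run(command, shell=True, check=False)  # run the recorded install command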

Process (From SAI to Production VRE)

Until now, SAI has been deployed in several scopes, and the user could deploy an algorithm only in the current VRE. The idea is to have different instances of SAI in many VREs and to allow the user to specify the VRE to consider for the deployment.

The process is composed of two main phases:

  • STAGING Phase: the installation of an algorithm on the staging dataminer; it ends with the publishing of the algorithm in the pool of dataminers of the target VRE
    • The DMPM contacts the Staging DataMiner and installs the algorithm and its dependencies
    • The output is retrieved. If there are errors in the installation (e.g. a dependency that does not exist or is misspelled), the process stops and the log is returned to the user (a mail notification is sent to the user and to the VRE administrators).
    • The DMPM updates the SVN Algorithms list
    • A mail notification is sent to the user and to the VRE administrators
    • The cron job reads the SVN lists (both Dependencies and Algorithms) and installs the algorithm in the pool of dataminers for the current VRE.
    • The script publishes the new algorithm in the Information System.

[Figure: Staging.png — the staging phase workflow]

  • RELEASE Phase
    • SAI invokes the service in RELEASE phase to install the algorithm in a particular production VRE (chosen by the user); SAI passes the target VRE to the DMPM.
    • The DMPM updates the SVN Production Algorithms list
    • The cron job installs the algorithm on the production dataminers
    • A mail notification is sent to the user and to the VRE administrators
    • The script publishes the algorithm in the VRE

[Figure: Release.png — the release phase workflow]

Configuration and Testing

DMPM is a SmartGears-compliant service. On the hosting node it is deployed under the following path:

/home/gcube/tomcat/webapps/dataminer-pool-manager-2.0.0-SNAPSHOT

In this regard, an instance has been deployed and configured at the Development, Preproduction, and Production levels; the development instance, for example, answers at:

http://node2-d-d4s.d4science.org:8080/dataminer-pool-manager-2.0.0-SNAPSHOT/rest/

In order to use the service, two manual configurations are needed:

    • modify the parameter <param-value>Dev</param-value> in the /home/gcube/tomcat/webapps/dataminer-pool-manager/WEB-INF/web.xml file according to the scope where the service runs (Dev, RProto or Prod); this information is read dynamically by the service to switch among the algorithm and dependency lists to consider and to select the staging dataminer.
    • edit the file /home/gcube/dataminer-pool-manager/dpmConfig/service.properties. This file contains, among other things, the staging dataminer to consider (automatically selected based on the environment set at the previous point) and the SVN repositories holding, for each environment, the algorithm lists and the lists for all the typologies of dependencies generated from SAI and available in the metadata file (e.g., the service checks the correctness of a dependency name by reading the corresponding file according to the language defined in the info.txt file available in the algorithm package).
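
For instance, the dependency-name check described in the second point could look like the following Python sketch. This is illustrative only, not the service's code, and it assumes the R dependency list is a plain list of package names, one per line (the actual file format is not documented on this page):

import urllib.request

SVN_BASE = "https://svn.d4science.research-infrastructures.eu/gcube"
R_DEPS = "/trunk/data-analysis/RConfiguration/RPackagesManagement/test_r_cran_pkgs.txt"

def known_r_packages():
    # Assumption: one package name per line in the SVN file.
    with urllib.request.urlopen(SVN_BASE + R_DEPS) as resp:
        return {line.strip() for line in resp.read().decode("utf-8").splitlines() if line.strip()}

def check(packages):
    # Raise an error echoing the wording used in the failure notifications below.
    known = known_r_packages()
    missing = [p for p in packages if p not in known]
    if missing:
        raise ValueError("Following dependencies are not defined: " + ", ".join(missing))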

An example of an info.txt file is the following:

Username: giancarlo.panichi
Full Name: Giancarlo Panichi
Email: g.panichi@isti.cnr.it
 
Language: R
Algorithm Name: RBLACKBOX
Class Name: org.gcube.dataanalysis.executor.rscripts.RBlackBox
Algorithm Description: RBlackBox
Algorithm Category: BLACK_BOX
 
Interpreter Version: 3.2.1
 
Packages:
Package Name: DBI
Package Name: RPostgreSQL
Package Name: raster
Package Name: maptools
Package Name: sqldf
Package Name: RJSONIO
Package Name: data.table
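
The service reads this metadata to decide which dependency list to consult (via the Language field) and which package names to verify. The following Python fragment is only an illustrative sketch of such parsing, not the service's actual code:

def parse_info(text):
    # Collect simple "Key: Value" pairs; "Package Name" may repeat.
    meta, packages = {}, []
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if not sep or not value:
            continue
        if key == "Package Name":
            packages.append(value)
        else:
            meta[key] = value
    return meta, packages

# Minimal usage example with an inline sample:
sample = "Language: R\nPackage Name: DBI\nPackage Name: raster\n"
meta, packages = parse_info(sample)
assert meta["Language"] == "R" and packages == ["DBI", "raster"]
# meta["Language"] == "R" would select the CRAN list (svn.<env>.deps-r), and
# every name in `packages` would be checked against it, as sketched earlier.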


An example of a service.properties file is the following:

#YML node file
DEV_STAGING_HOST: dataminer1-devnext.d4science.org 
PROTO_PROD_STAGING_HOST: dataminer-proto-ghost.d4science.org
SVN_REPO: https://svn.d4science.research-infrastructures.eu/gcube/trunk/data-analysis/RConfiguration/RPackagesManagement/
#HAPROXY_CSV:  http://data.d4science.org/Yk4zSFF6V3JOSytNd3JkRDlnRFpDUUR5TnRJZEw2QjRHbWJQNStIS0N6Yz0
 
 
svn.repository = https://svn.d4science.research-infrastructures.eu/gcube
 
svn.algo.main.repo = /trunk/data-analysis/DataMinerConfiguration/algorithms
 
svn.rproto.algorithms-list = /trunk/data-analysis/DataMinerConfiguration/algorithms/proto/algorithms
 
svn.rproto.deps-linux-compiled = 
svn.rproto.deps-pre-installed = 
svn.rproto.deps-r-blackbox = 
svn.rproto.deps-r = /trunk/data-analysis/RConfiguration/RPackagesManagement/test_r_cran_pkgs.txt
svn.rproto.deps-java =
svn.rproto.deps-knime-workflow = 
svn.rproto.deps-octave = 
svn.rproto.deps-python = 
svn.rproto.deps-windows-compiled = 
 
 
svn.prod.algorithms-list = /trunk/data-analysis/DataMinerConfiguration/algorithms/prod/algorithms
 
svn.prod.deps-linux-compiled = 
svn.prod.deps-pre-installed = 
svn.prod.deps-r-blackbox = 
svn.prod.deps-r = /trunk/data-analysis/RConfiguration/RPackagesManagement/r_cran_pkgs.txt
svn.prod.deps-java = 
svn.prod.deps-knime-workflow =
svn.prod.deps-octave = 
svn.prod.deps-python = 
svn.prod.deps-windows-compiled = 
 
 
 
svn.dev.algorithms-list = /trunk/data-analysis/DataMinerConfiguration/algorithms/dev/algorithms
 
svn.dev.deps-linux-compiled = 
svn.dev.deps-pre-installed = 
svn.dev.deps-r-blackbox = 
svn.dev.deps-r = /trunk/data-analysis/RConfiguration/RPackagesManagement/r_cran_pkgs.txt
svn.dev.deps-java = 
svn.dev.deps-knime-workflow =
svn.dev.deps-octave = 
svn.dev.deps-python = 
svn.dev.deps-windows-compiled =
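
As noted above, the service switches among the Dev, RProto, and Prod entries of this file according to the scope set in web.xml. The following Python fragment is a minimal, assumption-level sketch of that selection logic (not the service's code; it relies only on the property names shown above):

def load_properties(path):
    # Parse the simple "key = value" / "KEY: value" lines shown above.
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            for sep in ("=", ":"):
                if sep in line:
                    key, _, value = line.partition(sep)
                    props[key.strip()] = value.strip()
                    break
    return props

props = load_properties("service.properties")
env = "Dev"  # in the real service, read from <param-value> in web.xml
staging_host = props["DEV_STAGING_HOST"] if env == "Dev" else props["PROTO_PROD_STAGING_HOST"]
r_deps_list = props["svn." + env.lower() + ".deps-r"]  # e.g. svn.dev.deps-r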

Usage and APIs

The DMPM REST service exposes five main functionalities (three for the staging phase and two for the release phase). The result of an execution can be monitored asynchronously by means of a REST call to a log, passing the ID of the operation as a parameter. This can be done in both the STAGING and RELEASE phases.

1. STAGING PHASE: a method that immediately returns the log ID useful to monitor the execution and that is able to:

    • test the installation of the algorithm on a staging dataminer
    • update the algorithms SVN list

The parameters to consider are the following:

  • the algorithm (URL to package containing the dependencies and the script to install)
  • the targetVRE (in the staging phase, this is the current VRE)
  • the category to which the algorithm belongs
  • the algorithm_type

An example of a REST call for the installation is the following:

http://node2-d-d4s.d4science.org:8080/dataminer-pool-manager-2.0.0-SNAPSHOT/api/algorithm/stage?gcube-token=*****
&algorithmPackageURL=http://data-d.d4science.org/TSt3cUpDTG1teUJMemxpcXplVXYzV1lBelVHTTdsYjlHbWJQNStIS0N6Yz0
&category=BLACK_BOX
&algorithm_type=transducerers
&targetVRE=/gcube/devNext/NextNext

An example of a REST call to retrieve the log is the following:

http://node2-d-d4s.d4science.org:8080/dataminer-pool-manager-2.0.0-SNAPSHOT/api/log?gcube-token=*****
&logUrl=id_from_previous_call

An example of a REST call to monitor the execution is the following (three different statuses are available: COMPLETED, IN PROGRESS, FAILED):

http://node2-d-d4s.d4science.org:8080/dataminer-pool-manager-2.0.0-SNAPSHOT/api/monitor?gcube-token=*****
&logUrl=id_from_previous_first_call
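
Putting the three calls together, a client can submit an algorithm and poll until the operation finishes. The following Python sketch is illustrative only: the token and the package URL are placeholders, the endpoints are those shown above, and it assumes the stage call returns the log ID as plain text:

import time
import urllib.parse
import urllib.request

BASE = "http://node2-d-d4s.d4science.org:8080/dataminer-pool-manager-2.0.0-SNAPSHOT/api"
TOKEN = "*****"  # placeholder: a valid gcube-token

def call(path, **params):
    params["gcube-token"] = TOKEN
    url = BASE + path + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8").strip()

# Submit the algorithm to the staging dataminer; the response is the log ID.
log_id = call("/algorithm/stage",
              algorithmPackageURL="http://data-d.d4science.org/...",  # placeholder
              category="BLACK_BOX",
              algorithm_type="transducerers",
              targetVRE="/gcube/devNext/NextNext")

# Poll the monitor until the operation leaves the IN PROGRESS state.
status = "IN PROGRESS"
while status == "IN PROGRESS":
    time.sleep(30)
    status = call("/monitor", logUrl=log_id)

print(status)                       # COMPLETED or FAILED
print(call("/log", logUrl=log_id))  # full installation log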


2. RELEASE PHASE: a method invoked from SAI after the staging phase has successfully finished, able to:

    • update the production SVN list with the new algorithms (this list is the input for the cron job); many attributes are extracted from the metadata file, others are generated dynamically (e.g., the VRE, the type of algorithm, the URL of the package, the timestamp of the last modification of the package, the current environment, and so on). An example record is the following:

| OCTAVEBLACKBOX | Giancarlo Panichi | BLACK_BOX | Dev | ./addAlgorithm.sh OCTAVEBLACKBOX BLACK_BOX org.gcube.dataanalysis.executor.rscripts.OctaveBlackBox /gcube/devNext/NextNext transducerers N http://data-d.d4science.org/TSt3cUpDTG1teUJMemxpcXplVXYzV1lBelVHTTdsYjlHbWJQNStIS0N6Yz0 "OctaveBlackBox" | none | Fri Sep 01 16:58:47 UTC 2017 |


The parameters to consider are the following:

  • the algorithm (URL to package containing the dependencies and the script to install)
  • the targetVRE
  • the category to which the algorithm belongs
  • the algorithm_type

An example of a REST call for the publishing is the following:

http://node2-d-d4s.d4science.org:8080/dataminer-pool-manager-2.0.0-SNAPSHOT/api/algorithm/add?gcube-token=*****
&algorithmPackageURL=http://data-d.d4science.org/TSt3cUpDTG1teUJMemxpcXplVXYzV1lBelVHTTdsYjlHbWJQNStIS0N6Yz0
&category=BLACK_BOX
&algorithm_type=transducerers
&targetVRE=/gcube/devNext/NextNext

An example of a REST call for the monitoring is the following:

http://node2-d-d4s.d4science.org:8080/dataminer-pool-manager-2.0.0-SNAPSHOT/api/monitor?gcube-token=*****
&logUrl=id_from_previous_call

Notification

For both the staging and release phases, the user and the VRE administrators are notified of the outcome of the execution. Some examples of notifications are the following:


Subject: [DataMinerGhostInstallationRequestReport] is FAILED for OCTAVEBLACKBOX algorithm
 
Message: 
 
Dear Nunzio,
 
DataMiner sent you a message:
An error occurred while deploying your algorithm
 
Here are the error details:
 
Installation failed with return code = 2
 
 
Algorithm details:
 
User: Giancarlo Panichi
Algorithm name: OCTAVEBLACKBOX
Staging DataMiner Host: dataminer1-devnext.d4science.org
Caller VRE: /gcube/devNext
Target VRE: /gcube/devNext



Subject: [DataMinerGhostInstallationRequestReport] is FAILED for OCTAVEBLACKBOX algorithm
 
Message: 
 
Dear Nunzio,
 
DataMiner sent you a message:
An error occurred while deploying your algorithm
 
Here are the error details:
 
Following dependencies are not defined:
 
pippo
 
 
Algorithm details:
 
User: Giancarlo Panichi
Algorithm name: OCTAVEBLACKBOX
Staging DataMiner Host: dataminer1-devnext.d4science.org
Caller VRE: /gcube/devNext
Target VRE: /gcube/devNext



Subject: [DataMinerGhostInstallationRequestReport] is FAILED for OCTAVEBLACKBOX algorithm
 
Message: 
 
Dear Nunzio,
 
DataMiner sent you a message:
An error occurred while deploying your algorithm
 
Here are the error details:
 
Installation completed but DataMiner Interface not working correctly or files OCTAVEBLACKBOX.jar and OCTAVEBLACKBOX_interface.jar not availables at the expected path
 
 
Algorithm details:
 
User: Giancarlo Panichi
Algorithm name: OCTAVEBLACKBOX
Staging DataMiner Host: dataminer1-devnext.d4science.org
Caller VRE: /gcube/devNext
Target VRE: /gcube/devNext


Subject: [DataMinerGhostInstallationRequestReport] is SUCCESS for SAI_INHERITANCE algorithm
 
Message: 
Dear Nunzio,
 
DataMiner sent you a message:
The installation of the algorithm in the ghost dataminer is completed successfully.
 
You can retrieve experiment results under the '/DataMiner' e-Infrastructure Workspace folder or from the DataMiner interface.
 
 
Algorithm details:
 
User: Gianpaolo Coro
Algorithm name: SAI_INHERITANCE
Staging DataMiner Host: dataminer1-devnext.d4science.org
Caller VRE: /gcube/devNext/NextNext
Target VRE: /gcube/devNext/NextNext
 
- This message was also sent to:
 
    Lucio Lelii
    Gianpaolo Coro
    Giancarlo Panichi
    Paolo Scarponi
    Gianpaolo Coro


Subject: [DataMinerReleaseInstallationRequestReport] is SUCCESS for OCTAVEBLACKBOX algorithm
 
Message: 
Dear Nunzio,
 
DataMiner sent you a message:
SVN REPOSITORY CORRECTLY UPDATED.
 
  The CRON job will install the algorithm in the target VRE  
 
 
 
 
Algorithm details:
 
User: Giancarlo Panichi
Algorithm name: OCTAVEBLACKBOX
Caller VRE: /gcube/devNext/NextNext
Target VRE: /gcube/devNext/NextNext
 
- This message was also sent to:
 
    Nunzio Andrea Galante
    Lucio Lelii
    Gianpaolo Coro
    Giancarlo Panichi
    Paolo Scarponi



DataMinerPoolManager Portlet

Please refer to https://next.d4science.org/group/nextnext/dataminerdeployer for a graphical representation of the service.