How to use the Data Miner Pool Manager


DataMiner Pool Manager

The DataMiner Pool Manager service (DMPM) is a REST service that rationalizes and automates the process of publishing SAI algorithms on DataMiner nodes and keeps the DataMiner cluster up to date.

Overview

The service accepts an algorithm descriptor, including its dependencies (OS, R, and custom packages); queries the IS for the DataMiners available in the current scope; generates, via templating, the Ansible playbook, inventory, and roles for the relevant components (algorithm installer, algorithms, dependencies); and executes the playbook on a DataMiner.

More precisely, the service accepts as input, among other parameters, the URL of an algorithm package (including the jar and the metadata), extracts the information needed for the installation, installs the script, and asynchronously returns the execution outcome to the caller.


Testing

DMPM is a SmartGears-compliant service. An instance has already been deployed and configured at the Development level:

http://node2-d-d4s.d4science.org:8080/dataminer-pool-manager-1.0.0-SNAPSHOT/rest/

To allow Ansible to access the DataMiner, the SSH key of the host where the service is deployed must be correctly configured on the DataMiner host.
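
In practice, this usually amounts to authorizing the public SSH key of the service host on each target DataMiner, for example by running the following from the service host (the user account shown is a placeholder and depends on how Ansible is configured to connect):

  ssh-copy-id root@TARGET_DATAMINER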

Requirements

The dependencies listed in the metadata file inside the algorithm package must respect the following guidelines:

  • R dependencies must have the prefix cran:
  • OS dependencies must have the prefix os:
  • Custom dependencies must have the prefix github:

If no prefix is specified, the service treats the dependency as an OS one.
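
For example, the dependency section of the metadata file could contain entries of the following form (package names are purely illustrative):

  cran:data.table
  os:sqlite3
  github:johndoe/mypackage

Here data.table would be installed from CRAN, sqlite3 as an OS package, and johndoe/mypackage from GitHub.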

Usage and APIs

Currently the service exposes the following REST methods:

  • Adding an Algorithm to a DataMiner

This functionality installs the Algorithm on the specified DataMiner and immediately returns a log ID that can be used to monitor the execution.

<source lang="java">
addAlgorithmToHost(algorithm, hostname, name, description, category, algorithmType, skipJava, publish);
</source>


<source lang="java">
@GET
@Path("/hosts/add")
@Produces("text/plain")
public String addAlgorithmToHost(
        @QueryParam("algorithm") String algorithm,
        @QueryParam("hostname") String hostname,
        @QueryParam("name") String name,
        @QueryParam("description") String description,
        @QueryParam("category") String category,
        @DefaultValue("transducerers") @QueryParam("algorithmType") String algorithmType,
        @DefaultValue("N") @QueryParam("skipJava") String skipJava,
        @DefaultValue("false") @QueryParam("publish") boolean publish)
        throws IOException, InterruptedException, SVNException {
    Algorithm algo = this.getAlgorithm(algorithm, null, hostname, name, description, category, algorithmType,
            skipJava);

    // Optionally register the algorithm in the IS
    if (publish) {
        service.addAlgToIs(algo);
    }

    // Add each dependency to the SVN list matching its type
    for (Dependency d : algo.getDependencies()) {
        if (d.getType().equals("os")) {
            List<String> ls = new LinkedList<String>();
            ls.add(d.getName());
            service.updateSVN("r_deb_pkgs.txt", ls);
        }
        if (d.getType().equals("cran")) {
            List<String> ls = new LinkedList<String>();
            ls.add(d.getName());
            service.updateSVN("r_cran_pkgs.txt", ls);
        }
        if (d.getType().equals("github")) {
            List<String> ls = new LinkedList<String>();
            ls.add(d.getName());
            service.updateSVN("r_github_pkgs.txt", ls);
        }
    }
    return service.addAlgorithmToHost(algo, hostname);
}
</source>


It is possible to distinguish between mandatory parameters and optional ones:

  • Mandatory:
    • algorithm: the URL of the Algorithm package.
    • hostname: the hostname of the DataMiner on which to deploy the script.
  • Optional (the remaining parameters can be extracted from the metadata file, where available, or overridden by the caller):
    • name: the name of the Algorithm (e.g., ICHTHYOP_MODEL_ONE_BY_ONE)
    • description: the description of the Algorithm
    • category: the category to which the Algorithm belongs (e.g., ICHTHYOP_MODEL)
    • algorithmType: by default set to "transducerers"
    • skipJava: by default set to "N"
    • publish: by default set to "false"; when set to true, the Algorithm is also registered in the IS


An example of the usage is the following:

http://node2-d-d4s.d4science.org:8080/dataminer-pool-manager-1.0.0-SNAPSHOT/rest/hosts/add?gcube-token=TOKEN_ID&algorithm=URL_TO_ALGORITHM&hostname=TARGET_DATAMINER
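
The same request can be issued programmatically. The following is a minimal client sketch (the class name is illustrative; TOKEN_ID, URL_TO_ALGORITHM and TARGET_DATAMINER are placeholders to be replaced with real values):

<source lang="java">
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Minimal client sketch: submits the algorithm and prints the returned log ID.
public class AddAlgorithmExample {
    public static void main(String[] args) throws Exception {
        // Placeholders: a valid gcube-token, the URL of the algorithm package,
        // and the hostname of the target DataMiner.
        String endpoint = "http://node2-d-d4s.d4science.org:8080"
                + "/dataminer-pool-manager-1.0.0-SNAPSHOT/rest/hosts/add"
                + "?gcube-token=TOKEN_ID"
                + "&algorithm=URL_TO_ALGORITHM"
                + "&hostname=TARGET_DATAMINER";
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(endpoint).openStream()))) {
            // The service answers with the log ID as plain text.
            String logId = in.readLine();
            System.out.println("Log ID: " + logId);
        }
    }
}
</source>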


  • Monitoring the execution

This functionality allows the caller to asynchronously monitor the execution by using the log ID obtained when an algorithm is deployed.

<source lang="java">
getLogById(logID);
</source>


<source lang="java">
@GET
@Path("/log")
@Produces("text/plain")
public String getLogById(@QueryParam("logUrl") String logUrl) throws IOException {
    LOGGER.debug("Returning Log =" + logUrl);
    return service.getScriptFromURL(service.getURLfromWorkerLog(logUrl));
}
</source>


An example of the usage is the following:

http://node2-d-d4s.d4science.org:8080/dataminer-pool-manager-1.0.0-SNAPSHOT/rest/log?gcube-token=TOKEN_ID&logUrl=LOG_ID
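
Since the installation runs asynchronously, a caller will typically poll this method until the log reports completion. The following minimal sketch assumes the returned log is an Ansible execution log and uses its final "PLAY RECAP" section as a completion marker; this check is an assumption and should be adapted to the actual log content:

<source lang="java">
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.stream.Collectors;

// Minimal polling sketch: fetches the log every 10 seconds until it
// contains a completion marker.
public class LogPollingExample {
    public static void main(String[] args) throws Exception {
        // Placeholders: replace TOKEN_ID and LOG_ID with real values.
        String endpoint = "http://node2-d-d4s.d4science.org:8080"
                + "/dataminer-pool-manager-1.0.0-SNAPSHOT/rest/log"
                + "?gcube-token=TOKEN_ID&logUrl=LOG_ID";
        while (true) {
            String log;
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(new URL(endpoint).openStream()))) {
                log = in.lines().collect(Collectors.joining("\n"));
            }
            // Assumption: Ansible prints a "PLAY RECAP" section at the end
            // of a playbook run; treat its presence as completion.
            if (log.contains("PLAY RECAP")) {
                System.out.println(log);
                break;
            }
            Thread.sleep(10_000); // wait 10 seconds between polls
        }
    }
}
</source>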

Next Steps

  • Add a functionality to automatically retrieve the set of DataMiners in a cluster from the HAProxy, in order to allow the deployment of an algorithm to all the DataMiners available in a particular VRE (e.g., RProtoLab) --> waiting for HAProxy normalization (to update documentation)
  • Add a functionality to update the SVN dependency lists (using the caller's default SVN credentials): for each dependency type, if a dependency is present in the package but missing from the corresponding SVN list, it is added to the list. --> done (to update documentation)
  • Add an optional functionality to register an algorithm in the IS (VRE scope) --> done (to update documentation)