Legacy applications integration

Context

The goals of the Geospatial Cluster are:

  • Data discovery of internal/external geospatial data repositories
  • Data access to discovered data
  • Data processing of discovered/accessed data
  • Data visualization of discovered/accessed/processed data

This page focuses on the geospatial data processing of discovered/accessed data with Legacy Applications.

Objectives

The main objectives for the geospatial data processing are:

  • Define the enrichment needs of bio-ecological or activity occurrences with environmental data (OBIS Ocean Physics, VTI, VME)
  • Design and plan the implementation of the enrichment capacity

Advanced geospatial analytical and modelling features (e.g. R geospatial processing, reallocation, aggregation) are also targeted:

  • Define the advanced geospatial processes required for reallocation, aggregation and interpolation
  • Design and plan the implementation of the geospatial processing capacity

What are Legacy Applications

Legacy Applications are existing software applications written in third-party languages such as R, IDL, MatLab or Python. Legacy Applications cannot realistically be re-written in Java because:

  • legacy applications come from a very specific knowledge domain that is hard to transfer to developers
  • re-writing them would be time- and resource-consuming
  • converted applications would have poor maintainability
  • the time-to-market would be too long
  • only a limited number of applications could be supported

Examples of legacy applications are those written in R, IDL and MatLab, common software packages used for scientific application development.

The computing resources and interfaces for Legacy Applications

The OGC Web Processing Service (WPS) allows exposing processing services over geospatial data. 52North has implemented a WPS Java framework where processing algorithms (e.g. spatial resampling, temporal aggregation, etc.) are implemented as WPS processes that can be invoked by clients. This implementation does not provide underlying computing resources beyond the server hosting the WPS implementation: scalability is not ensured and QoS/SLA cannot be guaranteed.

The Hadoop Map/Reduce model is used to provide the processing resources where:

  • Processes can be pure map/reduce implementations using Java libraries packed at runtime and deployed by Hadoop Map/Reduce. This approach is not applicable to Legacy Applications
  • Processes can be written in third-party or other languages (bash, Python, etc.) using Map/Reduce Streaming (pipes)

Coupling both allows exposing geospatial processing services through the OGC WPS interface while exploiting scalable processing resources.

Legacy Applications thus exploit Hadoop Map/Reduce streaming, a utility that allows users to create and run jobs with any executable (e.g. shell utilities) as the mapper and/or the reducer.
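As an illustration of the streaming model (a minimal sketch; the HDFS paths are assumptions and the streaming jar location depends on the Hadoop installation), standard shell utilities can serve as mapper and reducer:

# Minimal streaming job sketch: /bin/cat as mapper, /usr/bin/wc as reducer
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming.jar \
  -input /input/sample \
  -output /output/sample \
  -mapper /bin/cat \
  -reducer /usr/bin/wc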

Types of input parameters in WPS-Hadoop

In order to exploit the parallel computing model offered by Hadoop Map/Reduce, we define two types of parameters:

  • Fixed input parameters - parameters whose values are the same for all inputs to process (e.g. PI=3.14, a=2, b=3). These parameters are managed as environment variables at processing runtime
  • Mapped input parameters - parameters used as input to the Hadoop Map/Reduce mapper. If the mapped input parameter contains more than one value, Hadoop Map/Reduce will try to use more task trackers at runtime, as illustrated below.
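For instance (hypothetical file names), the values of a mapped input parameter can be materialised one per line in an HDFS input file, so that Hadoop can split them across task trackers:

# Three values for the mapped parameter, one per line
printf "hello\nworld\nwps\n" > words.txt
# Each line of the HDFS input can be handled by a separate mapper
hadoop fs -put words.txt /input/words.txt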

Architectural Design

The figure below depicts the overall approach to support Legacy Applications with WPS-Hadoop.

(Figure: Legacy applications.png)

The sub-sections below provide a bottom-up description of the architectural design elements.

The Legacy Application: R Hello World!

We use a very simple R application that takes two fixed parameters (a=2, b=3) and a few words as the mapped input parameter.

The R Hello World! code is sketched below.

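This is a minimal illustrative reconstruction (the output format and parameter handling are assumptions): the script reads the fixed parameters a and b from environment variables and greets each word received on standard input.

#!/usr/bin/env Rscript
# Illustrative sketch only: read the fixed parameters from the environment
a <- as.integer(Sys.getenv("a", "0"))
b <- as.integer(Sys.getenv("b", "0"))

# Read the mapped input values (one word per line) from standard input
con <- file("stdin", open = "r")
words <- readLines(con)
close(con)

# Greet each word, reporting the fixed parameters
for (w in words) {
  cat(sprintf("Hello %s! a=%d, b=%d, a+b=%d\n", w, a, b, a + b))
}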


The Legacy Application processing trigger script

The Legacy Application processing trigger script is a bash script (any other scripting language would do) that manages the inputs and outputs:

  • Deal with the inputs that come from Hadoop Map/Reduce as:
    • environment variables for the fixed input parameters
    • piped arguments for the mapped input parameter values
  • Publish the outputs generated by the Legacy Application

The R Hello World! processing trigger script is sketched below.

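The following is an illustrative sketch only (the script and jar layout, output handling and HDFS paths are assumptions):

#!/bin/bash
# Illustrative sketch of a processing trigger script.

# Fixed input parameters arrive as environment variables (set via -cmdenv)
echo "fixed parameters: a=$a b=$b" >&2

# Mapped input parameter values arrive on standard input (Hadoop streaming pipe)
# and are forwarded to the Legacy Application; the R code is assumed to be
# unpacked next to this script from the cached archive
Rscript ./RHelloWorld.R > result.txt

# Publish the output generated by the Legacy Application (target path is an assumption;
# Hadoop streaming exposes the task id as the mapred_task_id environment variable)
hadoop fs -put result.txt /output/result-${mapred_task_id:-local}.txt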

The Legacy Application WPS-Hadoop process

The Legacy Application WPS-Hadoop process is a Java class that transforms the WPS process inputs (see next sub-section) into Hadoop Map/Reduce streaming arguments where:

  • Fixed input parameters are passed as environment variables using the -cmdenv name=value option (the option can be repeated to pass several parameters). In our simple example, the Java class defines:

hadoop ... -cmdenv a=2 -cmdenv b=3

The Legacy Application files (R code) and the processing trigger script are packaged as a jar on the WPS-Hadoop server and deployed on the Hadoop Map/Reduce cluster at runtime. This eases the Legacy Application maintenance activities. The Java class then has to:

  • copy the jar package to the HDFS filesystem of the target Hadoop Map/Reduce cluster
  • define the Hadoop streaming option -cacheArchive with the HDFS path of the jar (copied in the previous step)

-cacheArchive hdfs://<path to RHelloWorld.jar>
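Putting the pieces together, the invocation assembled by the Java class could look as follows (a sketch; the jar paths, HDFS locations and archive symlink name are assumptions):

# Step 1 (sketch): copy the Legacy Application package to HDFS
hadoop fs -put RHelloWorld.jar /apps/RHelloWorld.jar

# Step 2 (sketch): run the streaming job with the fixed parameters and the cached archive
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming.jar \
  -cmdenv a=2 -cmdenv b=3 \
  -cacheArchive hdfs:///apps/RHelloWorld.jar#rhello \
  -input /input/words.txt \
  -output /output/rhelloworld \
  -mapper rhello/run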


The Legacy Application WPS interface

Legacy Application package

The structure we propose is depicted below (for a processing step named "align", provided as an example).

The application directory follows a set of best practices

  • for its folder and file structure
  • for its descriptive metadata

so as to ease the subsequent deployment of the application to the WPS-hadoop environment.

(Figure: Application folder.png)

The application.xml file has two main blocks:

  • the job template section
  • and the workflow template section.

The first section defines the job templates in the XML application definition file. The second is not used yet; it is only there to pave the road, if needed, for supporting workflows with Oozie.

Our single processing block of the workflow needs a job template.

A proposed example contains the XML lines below:

<jobTemplate id="align">
	<streamingExecutable>/application/align/run</streamingExecutable> <!-- processing trigger -->
	<defaultParameters> <!-- default parameters of the job -->
		<!-- Default values are specified here, for testing purposes only! -->
		<parameter id="param1">2</parameter>
		<parameter id="param2">4</parameter>
	</defaultParameters>
	<defaultJobconf>
		<property id="app.job.max.tasks">1</property> <!-- Maximum number of parallel tasks -->
	</defaultJobconf>
</jobTemplate>

We could provide tools to test a job on the local workstation.
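For example (hypothetical; no such tool is defined yet), a job could be simulated locally by exporting the fixed parameters and piping sample mapped values through the trigger script:

# Hypothetical local simulation of a streaming task
export a=2 b=3                         # fixed input parameters as environment variables
echo "world" | /application/align/run  # mapped value piped to the processing trigger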

Once done, the application is packaged as a jar file and stored in a repository accessible from the WPS-hadoop server. When a processing request is triggered, WPS-hadoop deploys that jar file via Hadoop streaming and the legacy application is invoked.

Hadoop clusters serving legacy applications only need to have R (and/or IDL, MatLab, Octave, etc.) installed.

The procedure is applied and validated for the iMarine partners through:

  • SimpleTestNono application (IRD, Norbert Billet)
  • ...