== Overview ==

<!-- CATEGORIES -->
[[Category: gCube Spatial Data Infrastructure]]
<!-- CATEGORIES -->

{| align="right"
||__TOC__
|}

Geospatial Data Processing takes advantage of the OGC Web Processing Service (WPS) as a web interface that allows for the dynamic deployment of user processes. The WPS implementation chosen is the [http://52north.org/communities/geoprocessing/wps/index.html 52° North WPS], which allows the development and deployment of user “algorithms”. It has been demonstrated that such “algorithms” can be developed to be processed by exploiting the powerful and distributed framework offered by [http://hadoop.apache.org/mapreduce/ Apache™ Hadoop™ MapReduce].

Thus was born '''''WPS-hadoop'''''.
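The MapReduce model that such “algorithms” target can be illustrated with a toy, Hadoop-free word count. This is only a sketch of the programming model: the class and method names below are illustrative stand-ins, not part of WPS-hadoop or Hadoop itself.

```java
import java.util.*;
import java.util.stream.*;

/** Toy illustration of the MapReduce model (no Hadoop dependency). */
public class MapReduceSketch {

    /** Map phase: emit a (word, 1) pair for every word of an input line. */
    static Stream<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                     .filter(w -> !w.isEmpty())
                     .map(w -> Map.entry(w, 1));
    }

    /** Reduce phase: sum the counts emitted for each distinct key. */
    static Map<String, Integer> reduce(Stream<Map.Entry<String, Integer>> pairs) {
        return pairs.collect(Collectors.toMap(Map.Entry::getKey,
                                              Map.Entry::getValue,
                                              Integer::sum));
    }

    static Map<String, Integer> wordCount(List<String> lines) {
        return reduce(lines.stream().flatMap(MapReduceSketch::map));
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
            wordCount(List.of("wps hadoop", "hadoop mapreduce"));
        // "hadoop" appears twice across the two lines, so counts.get("hadoop") == 2
        System.out.println(counts);
    }
}
```

In the real framework the map and reduce phases run distributed over many nodes, with Hadoop handling the shuffling of intermediate pairs between them.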
  
== Key Features ==

gCube Spatial Data Processing offers a rich array of data analytics methods via the OGC Web Processing Service (WPS). It is implemented by relying on the [[Data Mining Facilities | gCube platform for data analytics]].

[[File:Spatial_Data_Processing.png|400px|Overall Architecture]]

WPS-hadoop offers a web interface to access the algorithms from external HTTP clients through three different kinds of requests, made available by the 52° North WPS:

* The '''GetCapabilities''' operation provides access to general information about a live WPS implementation, and lists the operations and access methods supported by that implementation. 52N WPS supports the GetCapabilities operation via HTTP GET and POST.

* The '''DescribeProcess''' operation allows WPS clients to request a full description of one or more processes that can be executed by the service. This description includes the input and output parameters and formats, and can be used to automatically build a user interface to capture the parameter values to be used to execute a process.

* The '''Execute''' operation allows WPS clients to run a specified process implemented by the server, using the input parameter values provided and returning the output values produced. Inputs can be included directly in the Execute request, or reference web-accessible resources.
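As a sketch of how a client issues the first two operations in their HTTP GET (key-value-pair) form, the snippet below builds the request URLs following the WPS 1.0.0 KVP encoding; the endpoint URL and process identifier used here are hypothetical, not actual service addresses.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

/** Builds WPS 1.0.0 key-value-pair (KVP) request URLs for HTTP GET. */
public class WpsRequests {

    static String url(String endpoint, String request, String identifier) {
        StringBuilder sb = new StringBuilder(endpoint)
            .append("?service=WPS&version=1.0.0&request=").append(request);
        if (identifier != null) {
            // DescribeProcess (and Execute) take the process identifier as a parameter
            sb.append("&identifier=")
              .append(URLEncoder.encode(identifier, StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String wps = "http://example.org/wps/WebProcessingService"; // hypothetical endpoint
        // Discover the service and the list of its processes:
        System.out.println(url(wps, "GetCapabilities", null));
        // Request the full description of one process (hypothetical identifier):
        System.out.println(url(wps, "DescribeProcess", "org.example.MyAlgorithm"));
    }
}
```

Execute requests, which must carry input parameter values, are typically sent instead as an XML document via HTTP POST to the same endpoint.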
== Design ==

By extending the AbstractAlgorithm class, we have created a new abstract class called HadoopAbstractAlgorithm, where the business logic, hidden from the developer, executes the process by creating a Job for the Hadoop framework.

[[Image:blocks.png]]
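The hidden business logic follows the template-method pattern: the abstract class runs the job and calls back into hooks that the custom process fills in (those hooks are detailed in the next section). Below is a minimal, self-contained sketch of that design; the Hadoop types (Mapper, Reducer, Path, JobConf, IData) are replaced by plain stand-ins and all class names here are hypothetical — only the hook names mirror the documented contract.

```java
import java.util.*;

/** Sketch of the template-method design behind HadoopAbstractAlgorithm. */
abstract class HadoopAbstractAlgorithmSketch {

    // Hooks the custom process must implement (simplified signatures):
    protected abstract String getMapper();
    protected abstract String getReducer();
    protected abstract String[] getInputPaths(Map<String, List<Object>> inputData);
    protected abstract String getOutputPath();
    protected abstract Map<String, String> getJobConf();
    protected abstract Map<String, Object> buildResults();
    public abstract void prepareToRun(Map<String, List<Object>> inputData);

    /** Hidden business logic: wires the hooks into a (pretend) Hadoop job. */
    public final Map<String, Object> run(Map<String, List<Object>> inputData) {
        prepareToRun(inputData);                  // e.g. WPS input validation
        Map<String, String> conf = getJobConf();  // configuration resources
        String[] in = getInputPaths(inputData);   // where the job reads from
        String out = getOutputPath();             // where the job writes to
        System.out.printf("submitting job: mapper=%s reducer=%s conf=%s in=%s out=%s%n",
                getMapper(), getReducer(), conf, Arrays.toString(in), out);
        return buildResults();                    // what the WPS returns
    }
}

/** A toy custom process, showing how the hooks are filled. */
class EchoProcess extends HadoopAbstractAlgorithmSketch {
    protected String getMapper()  { return "EchoMapper"; }
    protected String getReducer() { return "EchoReducer"; }
    protected String[] getInputPaths(Map<String, List<Object>> in) { return new String[]{"/tmp/in"}; }
    protected String getOutputPath() { return "/tmp/out"; }
    protected Map<String, String> getJobConf() { return Map.of("job.name", "echo"); }
    protected Map<String, Object> buildResults() { return Map.of("status", "done"); }
    public void prepareToRun(Map<String, List<Object>> in) { /* validate WPS inputs here */ }
}

public class DesignSketch {
    public static void main(String[] args) {
        System.out.println(new EchoProcess().run(Map.of())); // prints {status=done}
    }
}
```

Because run() is final, the developer never touches the job-submission logic; each custom process only supplies the pieces the framework asks for.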
=== Develop a custom process ===

The custom process class has to extend HadoopAbstractAlgorithm, which allows you to specify the Hadoop configuration parameters (e.g. from XML files), the Mapper and Reducer classes, the input paths, the output path, all the operations needed before running the process, and the way to retrieve the results.
When extending HadoopAbstractAlgorithm, you need to implement these simple methods:

* protected Class<? extends Mapper<?, ?, LongWritable, Text>> getMapper()

This method returns the class to be used as Mapper;
* protected Class<? extends Reducer<LongWritable, Text, ?, ?>> getReducer()

This method returns the class to be used as Reducer (if one exists);

* protected Path[] getInputPaths(Map<String, List<IData>> inputData)

This method lets the business logic know the exact input path(s) to pass to the Hadoop framework;

* protected String getOutputPath()

This method lets the business logic know the exact output path to pass to the Hadoop framework;

* protected Map buildResults()

This method is called by the business logic to build the output that the WPS expects;

* public void prepareToRun(Map<String, List<IData>> inputData)

This method has to contain all the operations to perform before running the Hadoop Job (e.g. WPS input validation);

* protected JobConf getJobConf()

This method allows the user to specify all the configuration resources for the Hadoop framework (e.g. XML configuration files).

gCube Spatial Data Processing distinguishing features include:

; WPS-based access to an open and extensible set of processes
: all the processes hosted by the system are exposed via a RESTful protocol, enabling clients to be informed of the list of available processes (GetCapabilities), to get the specification of every process (DescribeProcess) and to execute a selected process (Execute);
; reliance on a hybrid and distributed computing infrastructure
: every process can be designed to be executed on diverse and many 'computing nodes' (e.g. R engines, Java);
; easy integration of user-defined processes
: the system enables users to easily add their own algorithms to the set offered by the system, e.g. via the [[Statistical Algorithms Importer]];
; rich array of ready-to-use processes
: the system is equipped with a [[Statistical Manager Algorithms | large set of ready to use algorithms]];
; open science support
: the system automatically provides for process repeatability and provenance by recording a comprehensive research object in the [[Workspace]];

== Subsystems ==

;[[Statistical Manager|DataMiner / Statistical Manager]]
: ...
;[[Ecological Modeling]]
: ...
;[[Signal Processing]]
: ...
; [[Geospatial Data Mining]]
: ...

Latest revision as of 19:17, 6 July 2016

