Search Planning and Execution Specification
Overview
A fundamental part of the gCube Information Retrieval framework consists of the Search Planning and Execution components. The Search Planner enables the on-the-fly integration of CQL-compliant Data Sources. The key concept in this process is the publication of CQL capabilities by the integrated Sources. The Search Planner will involve any of the Sources that have published their capabilities on a given infrastructure, as long as they contribute to the result of a query.
The optimization mechanisms of the Planner detect the smallest set of Sources required to answer a query. Moreover, a probabilistic approach is used to find a near-optimal execution plan. The algorithms of the Planning and Optimization stages allow the IR framework to scale in the number of Data Sources that are integrated in an infrastructure.
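As an illustration of this capability-based selection, the following minimal sketch, written in plain Java, keeps only the Sources whose published capabilities overlap with the indexes referenced by a query. The class and method names (Source, selectSources) are hypothetical and do not correspond to the actual Planner API.

  import java.util.*;

  // Hypothetical sketch of capability-based source selection; the names are
  // illustrative and do not reflect the actual gCube Planner API.
  public class SourceSelectionSketch {

      // A published capability set: the CQL indexes (searchable fields) a Source supports.
      record Source(String id, Set<String> supportedIndexes) {}

      // Keep only the Sources whose published capabilities overlap with the
      // indexes referenced by the query, i.e. the Sources that can contribute
      // to the result.
      static List<Source> selectSources(Set<String> queryIndexes, List<Source> published) {
          List<Source> selected = new ArrayList<>();
          for (Source s : published) {
              if (!Collections.disjoint(s.supportedIndexes(), queryIndexes)) {
                  selected.add(s);
              }
          }
          return selected;
      }

      public static void main(String[] args) {
          List<Source> published = List.of(
              new Source("indexSource-1", Set.of("title", "abstract")),
              new Source("openSearchSource-1", Set.of("fulltext")),
              new Source("indexSource-2", Set.of("author")));

          // Indexes referenced by a CQL query such as: title = "ocean" and author = "smith"
          Set<String> queryIndexes = Set.of("title", "author");

          // Prints indexSource-1 and indexSource-2; openSearchSource-1 is not involved.
          System.out.println(selectSources(queryIndexes, published));
      }
  }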
The Search Planner produces a plan that combines Data Sources and a selection of Search Operators. A distributed execution environment ensures the efficient execution of the plan. Information travels from Data Sources to Search Operators and, in turn, to the Search clients through a pipelining data transfer mechanism that provides low latency, high throughput and a flow control facility.
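The pipelining behaviour can be pictured, without assuming anything about the gRS2 API, as a bounded producer/consumer channel: the bounded buffer provides flow control by blocking a fast producer, and each record is forwarded as soon as it is produced instead of waiting for the full result set. The sketch below is a generic illustration of this idea, not the gRS2 mechanism itself.

  import java.util.concurrent.*;

  // Generic producer/consumer pipeline over a bounded buffer; an illustration
  // of the pipelining idea, not the gRS2 API.
  public class PipelineSketch {
      public static void main(String[] args) throws InterruptedException {
          // The bounded capacity gives flow control: a fast Data Source blocks
          // when the consumer (a Search Operator or client) lags behind.
          BlockingQueue<String> channel = new ArrayBlockingQueue<>(16);
          final String EOS = "<end-of-stream>";

          Thread dataSource = new Thread(() -> {
              try {
                  for (int i = 0; i < 100; i++) {
                      channel.put("record-" + i);   // blocks while the buffer is full
                  }
                  channel.put(EOS);
              } catch (InterruptedException e) {
                  Thread.currentThread().interrupt();
              }
          });

          Thread operator = new Thread(() -> {
              try {
                  String record;
                  while (!(record = channel.take()).equals(EOS)) {
                      // Records are processed as they arrive (low latency), without
                      // materializing the whole result set first.
                      System.out.println("processing " + record);
                  }
              } catch (InterruptedException e) {
                  Thread.currentThread().interrupt();
              }
          });

          dataSource.start();
          operator.start();
          dataSource.join();
          operator.join();
      }
  }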
Key features
- On-the-fly Integration of New Data Sources: CQL-compliant Sources that publish their capabilities are dynamically involved in the IR process.
- Involvement of the Minimum Number of Data Sources: detection of all the Sources that contribute to the result of a query.
- Scalability in the Number of Sources Integrated in an Infrastructure: Planning and Execution components are designed to scale.
- Dynamic Integration of New Search Operators: Operators with new functionality can be dynamically integrated in the IR process.
- Pipelining Mechanism: offers flow control, low latency and high throughput.
Design
Philosophy
Search Planning and Execution components are designed in order to:
- allow the efficient and flexible integration of new Sources and Operators.
- exploit the IR capabilities and functionality of various information providers.
- scale in environments with a large number of heterogeneous Sources.
- decouple and eliminate the dependencies among the Planner, the Execution environment and the information providers.
Architecture
The main components of the Search system are the Planner, the Search Operators and the distributed Execution engine. The architecture is shown in the following figure:
[Figure: Search System Architecture (Search_system_architecture.jpg)]
The Planner composes the capabilities published by the integrated Data Sources into an execution plan for each incoming query. The Index Data Source, one of the internal gCube Sources, consists of the following components; a brief interface sketch follows the list:
- Updater: An Updater instance enables on-the-fly updates of an Index partition. It applies the preprocessing steps required to transform the data to be indexed into an appropriate format.
- Manager: A Manager instance ensures the correct synchronization and application of update actions on all the Replicas of a specific Index partition. Moreover, it handles abnormal conditions that affect the operation of the related Index partition.
- Replica: A Replica hosts the actual data being indexed for an Index partition. It dynamically applies update actions on the index structure it maintains, without ceasing its operation.
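The responsibilities listed above can be summarized with the following hypothetical Java interfaces. The names and signatures are illustrative only and are not the actual API of the gCube Index System.

  // Hypothetical interfaces summarizing the component roles described above;
  // names and signatures are illustrative, not the actual Index System API.
  public class IndexComponentsSketch {

      // Placeholder types, assumed for the sake of the sketch.
      interface IndexRecord {}
      interface UpdateAction {}

      interface Updater {
          // Transform a raw record into the format expected by the index.
          IndexRecord preprocess(byte[] rawRecord);
      }

      interface Manager {
          // Apply an update action, in the same order, on every Replica of the partition.
          void propagate(UpdateAction action);

          // React to an abnormal condition, e.g. an unreachable Replica.
          void handleFailure(String replicaId);
      }

      interface Replica {
          // Apply an update on the local index structure without interrupting lookups.
          void apply(UpdateAction action);
      }
  }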
The OpenSearch framework uses the OpenSearch specification in order to connect to external IR providers and exploit their information. A different OpenSearch instance is used to connect to each provider. In this way, the IR capabilities of external providers are published to the gCube infrastructure and can be utilized by the gCube IR framework.
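Concretely, an OpenSearch description document advertises a URL template containing placeholders such as {searchTerms}, {startIndex} and {count}; querying a provider amounts to expanding that template and parsing the returned Atom/RSS feed. The sketch below expands such a template; the endpoint is made up, while the placeholder names come from the OpenSearch 1.1 specification.

  import java.net.URI;
  import java.net.URLEncoder;
  import java.nio.charset.StandardCharsets;

  // Expanding an OpenSearch URL template; the endpoint is a made-up example,
  // not a real provider.
  public class OpenSearchTemplateSketch {
      public static void main(String[] args) {
          String template =
              "http://provider.example.org/search?q={searchTerms}&start={startIndex?}&n={count?}";

          String query = URLEncoder.encode("coastal upwelling", StandardCharsets.UTF_8);
          String url = template
              .replace("{searchTerms}", query)
              .replace("{startIndex?}", "1")
              .replace("{count?}", "20");

          // The resulting URL is fetched and the Atom/RSS response is parsed
          // into result records for the gCube IR framework.
          System.out.println(URI.create(url));
      }
  }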
On top of the Index and OpenSearch Sources, the CQL standard provides the link to the gCube Search System. While only the Index and OpenSearch Sources are internal parts of gCube, other IR providers can be wrapped as Data Sources, as long as they support CQL.
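As a sketch of what such wrapping involves, a provider-specific Data Source mainly has to translate the CQL clauses it has published as capabilities into the provider's native query syntax. The provider syntax and the translation rules below are entirely hypothetical.

  // Hypothetical translation of CQL clauses into a provider's native syntax;
  // the field prefixes and supported relations are invented for illustration.
  public class CqlWrapperSketch {

      // Translate a single CQL search clause (index, relation, term) into the
      // assumed native query syntax of the wrapped provider.
      static String translate(String index, String relation, String term) {
          if (index.equals("title") && relation.equals("=")) {
              return "ti:" + term;   // assumed native field prefix for titles
          }
          if (index.equals("author") && relation.equals("=")) {
              return "au:" + term;   // assumed native field prefix for authors
          }
          // Clauses outside the published capabilities are rejected.
          throw new UnsupportedOperationException(
              "Capability not published for: " + index + " " + relation);
      }

      public static void main(String[] args) {
          // CQL clause: title = "ocean"
          System.out.println(translate("title", "=", "ocean"));   // prints ti:ocean
      }
  }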
Deployment
Data Sources are deployed over gCore containers. The gRS2 pipelining mechanism must also be available on each node. Index Replicas and OpenSearch instances can be spread over a large number of nodes when load balancing is required in large-scale infrastructures. For better synchronization, Index Managers and Updaters can be co-deployed, while it is preferable to deploy Lookups on different nodes. Note that Data Source instances are most commonly deployed at the VO level.
Use Cases
The suitability of the gCube Data Source specifications for IR components is strongly related to the two standards adopted:
- CQL: IR providers that support functionality which can be directly mapped onto the CQL standard are good candidates for being wrapped into Data Sources.
- OpenSearch: IR providers that implement the OpenSearch API can be directly wrapped into Data Sources.
Well suited Use Cases
Components that provide IR functionality are well suited for forming Data Sources, depending on their relation to the above standards. Integration of an IR provider through the OpenSearch Data Source is preferable when there is no direct mapping of the provider's functionality to the CQL standard. However, if CQL can accurately express the provided IR capabilities, directly integrating the corresponding IR component as a separate Data Source can be advantageous, mainly because it exploits the component's IR functionality more fully; for example, a full-text engine that offers fielded term and proximity search maps naturally onto CQL indexes and relations. Note that CQL was chosen as the standard of our framework because it is a highly expressive query language that suits the IR functionality of most general-case IR systems.
Less well suited Use Cases
If a data provider cannot be associated with either of the two standards, the alternative is to apply an intermediate step: the provider's data is inserted into an Index partition, and its information is then exploited through the Index System functionality. However, this alternative implies a significant overhead when the provider's content is updated frequently.