GCore Based Information System Installation
The Information System is composed of four services (IS-Collector, IS-Registry, IS-Notifier, IS-gLiteBridge) and a set of client libraries that ease the access to the IS services. The libraries are automatically shipped with the gHN distribution and do not require any configuration other than the usual gHN configuration.
Deployment Scenario
In the current release, the IS services are mainly intended to serve a specific infrastructure or VO scope. This means that there must be one and only one instance of these services for each scope of this kind. At the time being, the instances of the IS services have to be manually installed, both for the infrastructure scope and for all the VO scopes.
For each infrastructure or VO scope, the typical deployment scenario requires at least 2 gHNs dedicated to the IS:
- one to deploy an instance of the IS-IC service
- one to deploy the IS-Registry, the IS-Notifier and the IS-gLiteBridge
IS-Registry and IS-Publisher have limited support for replication. Hence, an alternative scenario could be the following:
- one gHN for the IS-IC instance
- N gHNs for N instances of the IS-Registry service
- one gHN for the IS-Notifier and IS-gLiteBridge
Service Map
A Service Map must be prepared and distributed to all the gHNs aiming at joining the infrastructure and/or a VO scope. A Service Map is an XML file. The minimal Service Map reports the EPRs of the IS-Collector instance deployed in the scope, including one for queries and one for resource registration.
Such a map has to be placed in each gHN in a file named $GLOBUS_LOCATION/config/ServiceMap_<scope name>.xml, where the scope name is:
- the VO name in case of VO scope
- the infrastructure name in case of infrastructure scope
For a detailed explanation of the scope hierarchy, see here.
The EPRs reported in the map refer to the IS-Collector instance in the scope:
<ServiceMap>
    <Service name="ISICAllQueryPT" endpoint="http://host:port/wsrf/services/gcube/informationsystem/collector/XQueryAccess"/>
    <Service name="ISICAllRegistrationPT" endpoint="http://host:port/wsrf/services/gcube/informationsystem/collector/Sink"/>
    <Service name="ISICAllCollectionPT" endpoint="http://host:port/wsrf/services/gcube/informationsystem/collector/wsdaix/XMLCollectionAccess"/>
    <Service name="ISICAllStoragePT" endpoint="http://host:port/wsrf/services/gcube/informationsystem/collector/XMLStorageAccess"/>
</ServiceMap>
where host and port must be replaced with the ones of the IC instance.
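As a sketch of this step, the map can be installed as follows (the scope name "vo1", the demo folder, and the endpoints are placeholders, not part of a real installation):

```shell
# Hypothetical example: install a minimal Service Map for a VO named "vo1".
# GLOBUS_LOCATION falls back to a demo folder here; on a real gHN it is already set.
GLOBUS_LOCATION=${GLOBUS_LOCATION:-/tmp/ghn-demo}
SCOPE_NAME=vo1
mkdir -p "$GLOBUS_LOCATION/config"
cat > "$GLOBUS_LOCATION/config/ServiceMap_${SCOPE_NAME}.xml" <<'EOF'
<ServiceMap>
    <Service name="ISICAllQueryPT" endpoint="http://host:port/wsrf/services/gcube/informationsystem/collector/XQueryAccess"/>
    <Service name="ISICAllRegistrationPT" endpoint="http://host:port/wsrf/services/gcube/informationsystem/collector/Sink"/>
</ServiceMap>
EOF
echo "wrote $GLOBUS_LOCATION/config/ServiceMap_${SCOPE_NAME}.xml"
```
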
IS-Collector
Installation
The static installation of the IS-Collector service is a two-step process:
- first, the back-end XML database (eXist) has to be installed and the environment properly configured
- then, the service is installed on the gHN
eXist 1.2
After downloading the package, the following steps have to be performed:
- enter the following Java command in the shell command prompt:
java -jar eXist-setup-1.2.6-rev9165.jar -p <target folder>
- This will launch the installer using the specified target folder as destination. Simply follow the steps to complete the installation process;
- configure the EXIST_HOME environment variable to point to the target folder as follows:
export EXIST_HOME=<target folder>
- the command above must be placed in the $HOME/.bashrc (if the bash shell is used) or any other script loaded at login time;
- ensure that you have "write" permissions set for the data directory located in $EXIST_HOME/webapp/WEB-INF/.
- copy some eXist JARs in the $GLOBUS_LOCATION's library folder by typing:
cp $EXIST_HOME/exist.jar $GLOBUS_LOCATION/lib
cp $EXIST_HOME/lib/core/quartz-1.6.0.jar $GLOBUS_LOCATION/lib
cp $EXIST_HOME/lib/core/xmlrpc-* $GLOBUS_LOCATION/lib
cp $EXIST_HOME/lib/core/xmldb.jar $GLOBUS_LOCATION/lib
cp $EXIST_HOME/lib/core/jta.jar $GLOBUS_LOCATION/lib
cp $EXIST_HOME/lib/core/commons-pool-1.4.jar $GLOBUS_LOCATION/lib
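The write-permission requirement above can be quickly sanity-checked with a sketch like the following (EXIST_HOME falls back to a demo folder purely for illustration; on a real installation it is already set and the directory already exists):

```shell
# Verify that the eXist data directory exists and is writable,
# as required by the installation steps above.
EXIST_HOME=${EXIST_HOME:-/tmp/exist-demo}   # demo fallback, not a real install
mkdir -p "$EXIST_HOME/webapp/WEB-INF/data"  # created here only for the demo
if [ -w "$EXIST_HOME/webapp/WEB-INF/data" ]; then
  echo "data directory is writable"
else
  echo "fix permissions on $EXIST_HOME/webapp/WEB-INF/data" >&2
fi
```
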
Service
The IS-Collector can be deployed on a gHN as any other gCube Service by typing:
gcore-deploy-service <path>/org.gcube.informationsystem.collector.gar
The command above assumes that the $GLOBUS_LOCATION/bin folder is in your PATH
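If the command is not found, the gCore bin folder can be added to the PATH of the current shell (a sketch; the demo fallback value is an assumption, on a real gHN GLOBUS_LOCATION is already set):

```shell
# Make the gCore command-line tools available in the current shell.
GLOBUS_LOCATION=${GLOBUS_LOCATION:-/opt/gcore}   # demo fallback
export PATH="$GLOBUS_LOCATION/bin:$PATH"
```
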
gHN Configuration
The gHN where an IS-Collector instance is running has to be configured in a special way.
First, set the GLOBUS_OPTIONS as follows:
export GLOBUS_OPTIONS="-Xms2000M -Xmx2000M -Dexist.home=$EXIST_HOME"
the command above must be placed in the $HOME/.bashrc (if the bash shell is used) or any other script loaded at login time;
Then, the $GLOBUS_LOCATION/config/GHNConfig.xml file must be configured in order to:
- be declared a STATIC node (GHNtype parameter), i.e. a node where no dynamic deployment activities can be performed
- run in ROOT mode (mode parameter), meaning that it starts up in a special way
These two configuration settings have to be specified as reported in the following example:
<?xml version="1.0" encoding="UTF-8"?>
<jndiConfig xmlns="http://wsrf.globus.org/jndi/config">
    <global>
        <environment name="securityenabled" value="false" type="java.lang.Boolean" override="false" />
        <!-- setting for ROOT mode -->
        <environment name="mode" value="ROOT" type="java.lang.String" override="false" />
        <environment name="startScopes" value="devsec" type="java.lang.String" override="false" />
        <environment name="infrastructure" value="gcube" type="java.lang.String" override="false" />
        <environment name="labels" value="GHNLabels.xml" type="java.lang.String" override="false" />
        <!-- setting for STATIC gHN -->
        <environment name="GHNtype" value="STATIC" type="java.lang.String" override="false" />
        <environment name="localProxy" value="/home/globus/..." type="java.lang.String" override="false" />
        <environment name="coordinates" value="43.719627,10.421626" type="java.lang.String" override="false" />
        <environment name="country" value="it" type="java.lang.String" override="false" />
        <environment name="location" value="Pisa" type="java.lang.String" override="false" />
        <environment name="updateInterval" value="60" type="java.lang.Long" override="false" />
    </global>
</jndiConfig>
The last configuration step is necessary to deal with encrypted resources: a special key must be installed on the gHN as explained here.
Instance Configuration
The configuration of an IS-Collector instance is done in the JNDI file $GLOBUS_LOCATION/etc/org.gcube.informationsystem.collector/jndi-config.xml, in section gcube/informationsystem/collector. Here is an example of that configuration:
<?xml version="1.0" encoding="UTF-8"?>
<jndiConfig xmlns="http://wsrf.globus.org/jndi/config">
    <service name="gcube/informationsystem/collector">
        <environment name="configDir" value="etc/org.gcube.informationsystem.collector" type="java.lang.String" override="false" />
        <environment name="backupDir" value="existICBackups" type="java.lang.String" override="false" />
        <environment name="maxBackups" value="10" type="java.lang.String" override="false" />
        <environment name="scheduledBackupInHours" value="12" type="java.lang.String" override="false" />
        <!-- a value of "-1" means no sweeping is requested -->
        <environment name="sweeperIntervalInMillis" value="240000" type="java.lang.String" override="false" />
        <environment name="registrationURI" value="http://...." type="java.lang.String" override="false" />
        <environment name="resourceExpirationTimeInMillis" value="240000" type="java.lang.String" override="false" />
        <environment name="deleteRPsOnStartup" value="true" type="java.lang.String" override="false" />
        <environment name="maxOperationsPerConnection" value="5000" type="java.lang.String" override="false" />
        <environment name="startScopes" value="/gcube/devsec" type="java.lang.String" override="false" />
    </service>
    ...
where:
- backupDir is the folder where the scheduled backups are maintained. If this is an absolute path, it is used as it is. Otherwise, if this is a relative path, it is placed under the ~/.gcore/persisted/hostname-port/IS-Collector/ folder;
- maxBackups is the max number of backups maintained;
- scheduledBackupInHours is the interval in hours between two backups, i.e. a backup is performed every X hours;
- sweeperIntervalInMillis is the interval in milliseconds between two Sweeper executions. Each execution performs a clean up of the XMLStorage removing expired resources;
- resourceExpirationTimeInMillis is the grace period after which a resource is considered expired. Each resource has an associated lifetime stamp; once the lifetime plus the value indicated here has elapsed, the resource is marked as expired and removed at the next Sweeper execution;
- deleteRPsOnStartup true if a clean up of all the resources must be performed at startup time, false otherwise;
- maxOperationsPerConnection is the number of operations after which an XML Storage connection is reset. It is strongly suggested to keep the default value;
- startScopes reports the scope(s) to which the instance must be joined. In the suggested deployment model, one IS-Collector instance has to join only one scope.
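As a rough illustration of how the example values above interact, a resource whose lifetime has just expired is removed, in the worst case, after resourceExpirationTimeInMillis plus one full sweeper interval:

```shell
# Worst-case delay between a resource's lifetime expiring and its removal,
# using the values from the example configuration above.
expiration_ms=240000        # resourceExpirationTimeInMillis
sweeper_interval_ms=240000  # sweeperIntervalInMillis
worst_case_ms=$((expiration_ms + sweeper_interval_ms))
echo "worst-case removal delay: $((worst_case_ms / 60000)) minutes"
# → worst-case removal delay: 8 minutes
```
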
Verifying the Installation
Once the IS-Collector has been installed, the gHN can be started.
The nohup.out file has to report (among the others) the following portTypes:
gHN started at: http://host:port/wsrf/services/
with the following services:
GCUBE SERVICES:
...
[4]: http://host:port/wsrf/services/gcube/informationsystem/collector/Sink
[5]: http://host:port/wsrf/services/gcube/informationsystem/collector/SinkEntry
[6]: http://host:port/wsrf/services/gcube/informationsystem/collector/XMLStorageAccess
[7]: http://host:port/wsrf/services/gcube/informationsystem/collector/XQueryAccess
[8]: http://host:port/wsrf/services/gcube/informationsystem/collector/wsdaix/XMLCollectionAccess
Known issues
Once the gHN is started, check whether the IS-Collector gCore endpoint is registered on the infrastructure. If it is not registered, restart the gHN.
Managing the XML Storage
Importing the Database from 1.X installations
In order to migrate from IS-Collector 1.0.0 to 2.0.0, the most important action to perform is to move the XML Storage content from eXist 1.1.1 to eXist 1.2.6. First, the old XML Storage content has to be backed up; then, it has to be restored in the new one.
- The following example shows how to backup the content of a previous installation in a folder named $BACKUP_FOLDER:
mkdir $BACKUP_FOLDER
$EXIST_HOME/bin/backup.sh -u admin -p admin -b /db -d $BACKUP_FOLDER -ouri=xmldb:exist://
- Then, this content has to be restored in the new XML Storage by typing:
$EXIST_HOME/bin/backup.sh -u admin -r $BACKUP_FOLDER/db/__contents__.xml -ouri=xmldb:exist://
Of course, in this second step, $EXIST_HOME must point to the installation folder of a fresh 1.2.6 version.
Restoring a corrupted Database
It could rarely happen that the XML Storage used to manage the collected data becomes corrupted and no longer accessible. In this case, the only possible solution is to restore the last available backup. In the default configuration, backups are performed every 12 hours and stored as Zip files in this folder:
~/.gcore/persisted/host-port/IS-Collector/existICBackups/
To restore the latest backup, the following steps have to be done:
- delete the $EXIST_HOME folder
- install the eXist database again
- reimport the latest backup
For instance:
rm -rf $EXIST_HOME
java -jar eXist-setup-1.2.6-rev9165.jar -p $EXIST_HOME
$EXIST_HOME/bin/backup.sh -u admin -r ~/.gcore/persisted/dlib01.isti.cnr.it-8080/IS-Collector/existICBackups/9-9-2009-1252467685784/full20090909-0442.zip -ouri=xmldb:exist://
IS-Registry
Installation
The IS-Registry can be deployed on a gHN as any other gCube Service by typing:
gcore-deploy-service <path>/org.gcube.informationsystem.registry.gar
The command above assumes that the $GLOBUS_LOCATION/bin folder is in your PATH
gHN Configuration
The gHN on which an IS-Registry instance is running has to be configured in a special way:
- it must be declared a STATIC node (GHNtype parameter), i.e. a node where no dynamic deployment activities can be performed
- it must run in ROOT mode (mode parameter), meaning that it starts up in a special way
These two configuration settings have to be specified in the $GLOBUS_LOCATION/config/GHNConfig.xml file as reported in the following example:
<?xml version="1.0" encoding="UTF-8"?>
<jndiConfig xmlns="http://wsrf.globus.org/jndi/config">
    <global>
        <environment name="securityenabled" value="false" type="java.lang.Boolean" override="false" />
        <!-- setting for ROOT mode -->
        <environment name="mode" value="ROOT" type="java.lang.String" override="false" />
        <environment name="startScopes" value="devsec" type="java.lang.String" override="false" />
        <environment name="infrastructure" value="gcube" type="java.lang.String" override="false" />
        <environment name="labels" value="GHNLabels.xml" type="java.lang.String" override="false" />
        <!-- setting for STATIC gHN -->
        <environment name="GHNtype" value="STATIC" type="java.lang.String" override="false" />
        <environment name="localProxy" value="/home/globus/..." type="java.lang.String" override="false" />
        <environment name="coordinates" value="43.719627,10.421626" type="java.lang.String" override="false" />
        <environment name="country" value="it" type="java.lang.String" override="false" />
        <environment name="location" value="Pisa" type="java.lang.String" override="false" />
        <environment name="updateInterval" value="60" type="java.lang.Long" override="false" />
    </global>
</jndiConfig>
Then, the service also requires the special configuration for highly contacted services.
Finally, the last configuration step is necessary to deal with encrypted resources: a special key must be installed on the gHN as explained here.
Instance Configuration
Configuring an IS-Registry instance is a two-step process:
- since an IS-Registry can act in one and only one scope, its JNDI file ($GLOBUS_LOCATION/etc/org.gcube.informationsystem.registry/jndi-config.xml) must be configured to join only that scope. In the following example, the instance (startScopes environment variable) is configured to join the scope named /testing/vo1:
<?xml version="1.0" encoding="UTF-8"?>
<jndiConfig xmlns="http://wsrf.globus.org/jndi/config">
    <service name="gcube/informationsystem/registry">
        <environment name="configDir" value="etc/org.gcube.informationsystem.registry" type="java.lang.String" override="false" />
        <environment name="securityManagerClass" value="org.gcube.common.core.security.GCUBESimpleServiceSecurityManager" type="java.lang.String" override="false" />
        <environment name="startScopes" value="/testing/vo1" type="java.lang.String" override="false" />
    </service>
    ...
</jndiConfig>
- due to its role, the IS-Registry requires a special Service Map for the scope it is acting in. In that map, the EPR of the ResourceRegistration port-type must be reported. For instance, supposing that an instance running on a node named grid5.4dsoft.hu joins the /testing/vo1 scope, the $GLOBUS_LOCATION/config/ServiceMap_vo1.xml file must be filled as follows (the reported EPRs are just for sample purposes):
<ServiceMap>
    <Service name="..." endpoint="..."/>
    <Service name="ISRegistry" endpoint="http://grid5.4dsoft.hu:8080/wsrf/services/gcube/informationsystem/registry/ResourceRegistration" />
</ServiceMap>
This action is required only on the machine where the IS-Registry is running and only for the scope in which the IS-Registry is configured.
IS Filters
Optionally, a set of filters can be configured in order to prevent the publication of GCUBEResources matching given criteria. Currently, gHN and RI resources can be banned with the filtering mechanism.
Filters can be defined on the value of some of the Profile's child elements (Domain, Site, Name for gHNs; Endpoint for RIs). Two filtering operations are available:
- exclude: if the value of the element is equal to the value specified in the filter, the resource is banned
- exclude_if_contains: if the value of the element contains the string (even as substring) specified in the filter, the resource is banned
IS filters can be stored in two places:
- as a GenericResource with secondary type ISFilters registered on the IS in the same scope as the IS-Registry
- as a GenericResource serialization in a file stored in $GLOBUS_LOCATION/etc/org.gcube.informationsystem.registry/ResourceFilters.xml
They are mutually exclusive: in case both of them are available, only the filters registered on the IS are considered.
The following example defines two filters:
- all the gHNs with the "localhost" string in the GHNDescription/Name's value are banned
- all the RI with the "localhost" string in the AccessPoint/RunningInstanceInterfaces/Endpoint/Name's value are banned
<Resource>
    <ID>db7b21f0-4acb-11de-b519-8417e45ea243</ID>
    <Type>GenericResource</Type>
    <Scopes>
        <Scope>/gcube/devsec</Scope>
    </Scopes>
    <Profile>
        <SecondaryType>ISFilters</SecondaryType>
        <Name>ISFilters</Name>
        <Description>Filtering rules applied on the IS at GCUBEResource registration time</Description>
        <Body>
            <Filters>
                <Filter resourceType="GHN">
                    <Target>GHNDescription/Name</Target>
                    <Value>localhost</Value>
                    <Operation>exclude_if_contains</Operation>
                </Filter>
                <Filter resourceType="RunningInstance">
                    <Target>AccessPoint/RunningInstanceInterfaces/Endpoint</Target>
                    <Value>localhost</Value>
                    <Operation>exclude_if_contains</Operation>
                </Filter>
            </Filters>
        </Body>
    </Profile>
</Resource>
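The semantics of the two operations can be sketched as follows (a plain shell illustration, not part of the service; the sample values mirror the example above):

```shell
# matches OPERATION FILTER_VALUE ELEMENT_VALUE
# Returns success (0) when the resource would be banned by the filter.
matches() {
  case "$1" in
    exclude)             [ "$3" = "$2" ] ;;
    exclude_if_contains) case "$3" in *"$2"*) true ;; *) false ;; esac ;;
  esac
}

matches exclude_if_contains localhost "http://localhost:8080/wsrf/services" \
  && echo "banned"      # substring match: banned
matches exclude localhost "http://localhost:8080/wsrf/services" \
  || echo "not banned"  # exact-match comparison fails: not banned
```
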
Verify the Installation
Once the IS-Registry has been configured, the gHN can be started.
The nohup.out file has to report (among the others) the following portTypes:
gHN started at: http://host:port/wsrf/services/
with the following services:
GCUBE SERVICES:
...
[4]: http://host:port/wsrf/services/gcube/informationsystem/registry/ResourceRegistration
[5]: http://host:port/wsrf/services/gcube/informationsystem/registry/RegistryFactory
Then, a set of messages stating that the local profiles are registered or updated with the local event mechanism has to appear in the $GLOBUS_LOCATION/logs/container.fulllog file.
This is an example of such messages:
2009-06-20 00:31:41,409 DEBUG contexts.ServiceContext [GHNConsumer$<anon>,debug:66] [0.165s] GHNManager: Publishing GHN profile in scope /testing/vo1
2009-06-20 00:31:41,410 TRACE impl.GCUBEProfileManager [GHNConsumer$<anon>,trace:82] GCUBEProfileManager: Configured ISRegistry instance detected
2009-06-20 00:31:41,410 TRACE impl.GCUBEProfileManager [GHNConsumer$<anon>,trace:82] GCUBEProfileManager: Checking local configuration for http://host:port/wsrf/services/gcube/informationsystem/registry/RegistryFactory
2009-06-20 00:31:41,410 TRACE impl.GCUBEProfileManager [GHNConsumer$<anon>,trace:82] GCUBEProfileManager: Local ISRegistry instance detected
2009-06-20 00:31:41,411 TRACE impl.GCUBELocalPublisher [GHNConsumer$<anon>,trace:82] GCUBELocalPublisher: Updating resource ea5bd570-4ad5-11de-a511-84b4178b18ed via local event
IS-Notifier
Installation
The IS-Notifier can be deployed on a gHN as any other gCube Service by typing:
gcore-deploy-service <path>/org.gcube.informationsystem.notifier.gar
gHN Configuration
No specific configuration is needed beyond the usual gHN configuration.
Instance Configuration
The configuration of an IS-Notifier instance does not require any special setting. The only thing to take into account is that there must exist one and only one instance of the service per scope. The scope(s) of the instance can be set in the JNDI file ($GLOBUS_LOCATION/etc/org.gcube.informationsystem.notifier/jndi-config.xml) of the service as follows:
<?xml version="1.0" encoding="UTF-8"?>
<jndiConfig xmlns="http://wsrf.globus.org/jndi/config">
    <service name="gcube/informationsystem/notifier">
        <environment name="configDir" value="@config.dir@" type="java.lang.String" override="false" />
        <environment name="securityManagerClass" value="org.gcube.common.core.security.GCUBEServiceSecurityManagerImpl" type="java.lang.String" override="false" />
        <environment name="startScopes" value="/testing/vo1" type="java.lang.String" override="false" />
    </service>
    ...
</jndiConfig>
In this example, the instance is configured to join the scope named /testing/vo1. If one wants to configure the instance for serving multiple scopes, a comma-separated list must be provided as value of the startScopes variable. If the configuration above is not provided, the instance is joined by default to all the gHN scopes.
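For instance, a hypothetical instance serving both /testing/vo1 and /testing/vo2 (placeholder scope names) would declare:

```xml
<environment name="startScopes" value="/testing/vo1,/testing/vo2" type="java.lang.String" override="false" />
```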
Verify the Installation
Once the IS-Notifier has been installed, the gHN can be started.
The nohup.out file has to report (among the others) the following portType:
gHN started at: http://host:port/wsrf/services/
with the following services:
GCUBE SERVICES:
...
[6]: http://host:port/wsrf/services/gcube/informationsystem/notifier/Notifier
IS-gLiteBridge
Installation
The IS-gLiteBridge can be deployed on a gHN as any other gCube Service by typing:
gcore-deploy-service <path>/org.gcube.informationsystem.glitebridge.gar
gHN Configuration
No specific configuration is needed beyond the usual gHN configuration.
Instance Configuration
The configuration of an IS-gLiteBridge instance requires some settings. The parameters can be set in the JNDI file ($GLOBUS_LOCATION/etc/org.gcube.informationsystem.glitebridge/jndi-config.xml) of the service as follows:
<?xml version="1.0" encoding="UTF-8"?>
<jndiConfig xmlns="http://wsrf.globus.org/jndi/config">
    <service name="gcube/informationsystem/glitebridge">
        <environment name="samURL" value="http://lcg-sam.cern.ch:8080/same-pi/" type="java.lang.String" override="false" />
        <environment name="bdiiHost" value="lcg-bdii.cern.ch" type="java.lang.String" override="false" />
        <environment name="bdiiPort" value="2170" type="java.lang.String" override="false" />
        <environment name="loginDN" value="" type="java.lang.String" override="false" />
        <environment name="password" value="" type="java.lang.String" override="false" />
        ....
        <environment name="VO" value="d4science.research-infrastructures.eu" type="java.lang.String" override="false" />
        <environment name="statusVO" value="OPS" type="java.lang.String" override="false" />
        <environment name="harvestingIntertime" value="10" type="java.lang.String" override="false" />
    </service>
    ...
</jndiConfig>
The configurable parameters are:
- samURL the SAM server URL
- bdiiHost the BDII Host
- bdiiPort the BDII Port
- loginDN the login Distinguished Name
- password the password
- VO the VO in BDII from which information is gathered
- statusVO the VO whose status is retrieved from SAM
- harvestingIntertime the harvesting interval (in minutes)
Verify the Installation
Once the IS-gLiteBridge has been installed, the gHN can be started.
The nohup.out file has to report (among the others) the following portType:
gHN started at: http://host:port/wsrf/services/
with the following services:
GCUBE SERVICES:
...
[6]: http://host:port/wsrf/services/gcube/informationsystem/glitebridge/GLiteBridge