DHN Installation

From Gcube Wiki
Revision as of 19:55, 6 July 2016 by Leonardo.candela (Talk | contribs)



Host preparation

Minimal host requirements

  • Operating System
    • any Unix-based platform supported by Java WS Core
      • tested platforms: SL3.0, Red Hat FC6, Ubuntu
  • Connectivity
    • a machine configured with a static IP address

Migrating from the previous releases

The 1.0 version of the DHN bundle can be installed either as a full installation or as an upgrade of a previous DHN distribution.

Package Installation


The DHN 1.0 can be downloaded from here.

Installation Procedure

After having prepared the node as described in Host preparation, perform the following steps to install the DHN:

  • uncompress the DHN_installer_1.0.tar.gz file
  • stop the Java WS Core container (if any)
  • source your ${GLOBUS_LOCATION}/etc/globus-devel-env.sh file
  • if you are installing the DHN from scratch
    • type make install in the uncompressed ./DHN_installer_1.0 folder
  • if you are upgrading a previous FRC installation:
    • type make upgradeFromFRC in the uncompressed ./DHN_installer_1.0 folder
  • if you are upgrading a previous Beta installation:
    • type make upgradeFromBeta in the uncompressed ./DHN_installer_1.0 folder
  • follow the Post-installation configuration steps
  • (re)start the container
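The steps above can be sketched as a small shell session. The helper function below simply maps the installation mode to the documented make target; the surrounding commands (shown as comments) assume the paths and file names given in the text:

```shell
#!/bin/sh
# Map the installation mode to the make target documented above.
dhn_make_target() {
    case "$1" in
        scratch) echo "install" ;;
        frc)     echo "upgradeFromFRC" ;;
        beta)    echo "upgradeFromBeta" ;;
        *)       echo "unknown mode: $1" >&2; return 1 ;;
    esac
}

# Typical session (run manually; stop the container first):
#   tar xzf DHN_installer_1.0.tar.gz
#   . "${GLOBUS_LOCATION}/etc/globus-devel-env.sh"
#   cd DHN_installer_1.0
#   make "$(dhn_make_target scratch)"   # or: frc, beta
```

After the chosen make target completes, apply the Post-installation configuration and (re)start the container as described above.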

Included software

The Final Release Candidate version of the DHN Installer includes the following Collective Layer components:

  • NAL 1.0 Final
  • DIS-HLSClient 1.0 Final
  • DIS-IP 1.0 Final
  • HNM Service 1.0 Final
  • HNM Service stubs 1.0 Final
  • DIS-IC stubs 1.0 Final
  • DIS-Registry stubs 1.0 Final
  • DIS-Broker stubs 1.0 Final
  • DLManagement stubs 1.0 Final
  • KeeperCommon Library 1.0 Final
  • DIS-Util Library 1.0 Final
  • ProfileManagement Library 1.0 Final
  • DILIGENTProvider 1.0 Final
  • DILIGENT Provider Stubs 1.0 Final
  • Authentication API 1.0 Final
  • Delegation Service 1.0 Final
  • Delegation Stubs 1.0 Final
  • DVOS Common Library 1.0 Final
  • a customised distribution of the Aggregator Framework for Java WS Core 4.0.4

All the components above are also available in ETICS in their *_0_3_0 configurations.

Post-installation configuration

Configuration files

Configuration files that have to be edited after the installation:

Java WS Core


The Log4J output should be redirected to the file system in order to simplify the debugging of what is happening on the DHN and the exchange of such information. The following configuration in the $GLOBUS_LOCATION/container-log4j.properties file has to replace the default one for the log4j.appender.A1:

# A1 is set to be a RollingFileAppender
log4j.appender.A1=org.apache.log4j.RollingFileAppender
# Example log file location; adjust the path as appropriate
log4j.appender.A1.File=${GLOBUS_LOCATION}/container.log
# Roll over when the log file reaches 10MB
log4j.appender.A1.MaxFileSize=10MB
# Keep 100 backup files
log4j.appender.A1.MaxBackupIndex=100

# A1 uses PatternLayout.
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} [%t,%M:%L] %m%n
This configuration enables file rolling with a rollover when the log file reaches 10MB, keeping up to 100 log files.

Moreover, we initially suggest also enabling a DEBUG log level for the HNMService and the Profile Management Library. In order to do that, add the appropriate log4j category lines to the same file.
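The exact category lines are not reported in this page; the following is a sketch only, assuming the package names follow the JNDI folder naming (org_diligentproject_keeperservice_hnm) — verify the actual package names against your installation:

```properties
# Package names below are assumptions and may differ on your installation
log4j.category.org.diligentproject.keeperservice.hnm=DEBUG
log4j.category.org.diligentproject.profilemanagement=DEBUG
```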


Server configuration file

This file includes a set of global properties related to the container. It is located at $GLOBUS_LOCATION/etc/globus_wsrf_core/server-config.wsdd. It must be changed in order to allow the DHN to publish its hostname. The following two lines have to be added in the <globalConfiguration> section:

<parameter name="logicalHost" value="yourHostName.yourDomain"/>
<parameter name="publishHostName" value="true"/>

Of course, the yourHostName.yourDomain string must be replaced with your real hostname.
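For clarity, the two parameters sit inside the existing <globalConfiguration> element of server-config.wsdd, alongside the parameters already present; the hostname below is a placeholder:

```xml
<globalConfiguration>
  <!-- ... existing parameters left unchanged ... -->
  <parameter name="logicalHost" value="dhn1.example.org"/>  <!-- placeholder -->
  <parameter name="publishHostName" value="true"/>
</globalConfiguration>
```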


JNDI file

The HNMService performs JNDI lookups for some static configuration parameters. Its JNDI file is located at $GLOBUS_LOCATION/etc/org_diligentproject_keeperservice_hnm/jndi-config.xml.

The global resource HNMServiceConfiguration groups the configuration parameters. The following ones have to be changed according to the suggestions reported below:

  • infrastructure
    • This parameter specifies the type of infrastructure to join:
    • PPS: the PPS infrastructure (in case the DHN has to join Arte or ImpECt VOs)
    • development: the development infrastructure (in case the DHN has to join the devsec VO)

The default value is PPS.

  • defaultVO
    • This parameter has to be filled with the name of the VO in which the DHN will act. This is the default VO where the DHN and all the hosted RIs will be published. The DHN is pre-configured to work in three different VOs (ARTE, ImpECt and Development) or in the root VO (named diligent). One of the following values has to be specified as the default VO:
      • /diligent → to join only the global DILIGENT VO
      • /diligent/ARTE → to join the ARTE VO (working in the pre-production infrastructure)
      • /diligent/ImpECt → to join the ImpECt VO (working in the pre-production infrastructure)
      • /diligent/devsec → to join the Secure Development VO (working in the development infrastructure)

The default value is /diligent.

  • DHNProfileUpdateIntervalInMillis
    • the DHN profile is updated according to this interval, specified in milliseconds.
  • latitude + longitude
    • these two coordinates are used to correctly place the DHN on the Google Map visualized by the Monitoring Portlet. To discover the coordinates of the DILIGENT partner DHNs, see here
  • country: two letter code of the country (e.g. IT)
  • location: a free text placeholder (e.g. Pisa)
  • localFileSystem
    • the partition on your file system that you want to share with the hosted services
  • DHNType
    • allowed values here are: Dynamic, Static and SelfCleaning. A Static DHN is not used as a target for dynamic deployment, while a SelfCleaning DHN is automatically cleaned every night (used just for demos). The default value is Dynamic.
  • securityEnabled
    • if true, the DHN supports a secure context at both VO and DL level. In this case, it is necessary to:
  1. configure the DHN following the instructions reported here
  2. overwrite the $GLOBUS_LOCATION/etc/org_diligentproject_keeperservice_hnm/deploy-server.wsdd with the $GLOBUS_LOCATION/etc/org_diligentproject_keeperservice_hnm/deploy-server.wsdd_SEC
  • rootDHN
    • states whether the DHN is a root DHN or not (root DHNs are special nodes dedicated to VO management)

The other parameters in the resource have to be left at their default values.
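As an illustration only, the parameters discussed above might appear in jndi-config.xml roughly as follows, assuming the usual Java WS Core <parameter>/<name>/<value> JNDI layout; all values shown are placeholders to be adapted:

```xml
<!-- Illustrative sketch: layout assumed, values are placeholders -->
<parameter>
  <name>infrastructure</name>
  <value>development</value>
</parameter>
<parameter>
  <name>defaultVO</name>
  <value>/diligent/devsec</value>
</parameter>
<parameter>
  <name>DHNProfileUpdateIntervalInMillis</name>
  <value>60000</value>
</parameter>
<parameter>
  <name>country</name>
  <value>IT</value>
</parameter>
<parameter>
  <name>DHNType</name>
  <value>Dynamic</value>
</parameter>
```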

DHN static description file

The DHN profile can be enriched with some static labels that describe the DHN characteristics. These labels are matched at deployment time with the ones reported. Such labels can be added to the $GLOBUS_LOCATION/etc/org_diligentproject_keeperservice_hnm/customDHNlabels.xml file.

VOMap files

A VO Map is a file placed in the $GLOBUS_LOCATION/etc/org_diligentproject_keeperservice_hnm/VOMaps folder reporting all the basic EPRs needed to properly start up a DHN. Such EPRs are the starting point to discover all the other gCube resources available in the VO.
An example of a VO Map file can be found here
The file name has to follow a naming convention:

  • VOMap_<VO-name>.xml if the VO is not the root one in the current infrastructure
  • VOMap_<VO-name>_<infrastructure-name>.xml if the VO is the root one
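The naming convention above can be expressed as a small, hypothetical shell helper (the function name and arguments are illustrative, not part of the DHN tooling):

```shell
#!/bin/sh
# Hypothetical helper: derive the VO Map file name from the convention above.
# Arguments: VO name, infrastructure name, and "yes" if the VO is the root VO.
vomap_filename() {
    vo="$1"; infra="$2"; is_root="$3"
    if [ "$is_root" = "yes" ]; then
        # root VO: the infrastructure name is part of the file name
        echo "VOMap_${vo}_${infra}.xml"
    else
        echo "VOMap_${vo}.xml"
    fi
}
```

For example, the devsec VO in the development infrastructure maps to VOMap_devsec.xml, while a root VO named diligent maps to VOMap_diligent_development.xml.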

Testing and Verification

After the installation, the DHN automatically joins the Beta infrastructure of DILIGENT. This completely new infrastructure officially becomes the target infrastructure for the Beta releases of all the DILIGENT components. All the profiles produced by the DHN software are published on this new infrastructure, glued together by the Beta release of the Collective Layer components.

To verify that a DHN has been successfully installed, check that the DHN profile and the HNMService Running Instance profiles are correctly published in the DIS of such an infrastructure. A new DILIGENT Administrative Portal is available to manage this new infrastructure. In order to access the monitoring capabilities of the Portal, you can use the following credentials:

  • user monitoring
  • pass monitoring

Installation Troubleshooting

In case something goes wrong when restarting the Java WS Core container after a DHN upgrade, it is possible to source the $GLOBUS_LOCATION/etc/org_diligentproject_keeperservice_hnm/utils/cleanDHNstate.sh script. This script cleans up the DHN state and forces the HNM Service to rebuild it from scratch. The script has to be executed while the container is not running.

Manuelesimi 20:12, 4 December 2007 (EET)