Neptune Configuration

Neptune is the test server for NDG3 (MSI).

$ cat /etc/issue


Welcome to openSUSE 10.3 (X86-64) - Kernel \r (\l).

Neptune has an alias.

Known Issues

  • ssh login fails with:

$ ssh

Permissions on the password database may be too restrictive.

... this indicates the store mount has failed. Contact SDDCS.

  • Wildly inaccurate dates cause unpredictable behaviour. Contact SDDCS.

Python Configuration

The system default is Python 2.5 in /usr/bin. Under SuSE, the site package location is customised to /usr/local/lib64/python2.5/site-packages via /usr/lib64/python2.5/distutils/distutils.cfg.
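A generic check (not specific to SuSE, and using site.getsitepackages(), which is only available on Python 2.7+/3.x rather than the 2.5 installed here) to confirm which site-packages directories a given interpreter actually searches:

```python
import site
import sys

# Print the interpreter prefix and the site-packages directories this
# Python will search; on a customised install such as Neptune's, the
# non-default location should appear in this list.
print(sys.prefix)
for path in site.getsitepackages():
    print(path)
```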

Application packages will be installed separately to avoid version conflicts and the maintenance problems of a single package area. Either virtualenv or zc.buildout could achieve this. virtualenv is easy to set up with mod_wsgi - see Apache Configuration. zc.buildout allows package versions to be pinned and overridden, so a stable deployment can be defined as a specific combination of packages and versions. Its collective.recipe.modwsgi recipe enables integration with mod_wsgi. zc.buildout is currently the preferred means of configuration (17/06/2009).


virtualenv bootstrap:

$ virtualenv --no-site-packages myenv

... failed with this message:

TEST FAILED: /usr/local/lib64/python2.5/site-packages/ does NOT support .pth files 

This is a known problem with SuSE:

This discussion suggests commenting out the prefix setting in /usr/lib64/python2.5/distutils/distutils.cfg, but a less intrusive option is to override the setting by creating an alternative config file, setup.cfg or ~/.pydistutils.cfg:
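The file contents are not reproduced above; a minimal sketch of what such an override might contain (assuming the SuSE distutils.cfg sets an install prefix, which is what breaks .pth support) is:

```ini
# Hypothetical override: blank out the install prefix set by the
# system-wide distutils.cfg so .pth-based installs work
[install]
prefix=
```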

$ cat > ./setup.cfg







Then ...

$ virtualenv --no-site-packages myenv

Move the setup.cfg file to within the virtualenv directory so that it doesn't interfere with other components:

$ mv ./setup.cfg ./myenv 

Install setuptools, cd'ing to the myenv directory first to ensure that setup.cfg is picked up:

$ cd ./myenv
$ wget

$ ./bin/python ./


See the Discovery service browser interface section for details.

Apache Configuration

See: Apache Configuration page.

Application Configuration

Discovery Browser Interface

The discovery service has been deployed with a buildout script (as of 04/06/2009). The configuration is in /usr/local/ndg-discovery. The procedure to set up was:

  1. Install zc.buildout and collective.recipe.modwsgi:
$ sudo /usr/local/bin/easy_install zc.buildout

$ sudo /usr/local/bin/easy_install collective.recipe.modwsgi

  2. The Apache config needs altering to add a mount point for the Discovery service:
WSGIScriptAlias /services /srv/www/wsgi_scripts/discovery.wsgi

  3. Create buildout.cfg in /usr/local/ndg-discovery:

[buildout]
parts = DiscoveryServiceProGlueMirror

# Configuration mirroring eggs as currently deployed on proglue
[DiscoveryServiceProGlueMirror]
recipe = collective.recipe.modwsgi
eggs = ows_server==0.0.0dev_r5354
#       cdat_lite==4.1.2_0.2.5
config-file = ${buildout:directory}/service.ini
find-links =

The DiscoveryServiceProGlueMirror part mirrors the configuration on proglue:

  • Explicit Pylons and WebHelpers versions were set to avoid webhelpers 'auto_link' AttributeError.
  • cdat_lite should be set to version 4 but this wouldn't install. Version 5 installs but there is a known error with cdms imports. This will be fixed with an upgrade to the latest version of discovery.
  4. To generate the configuration, run buildout from /usr/local/ndg-discovery:
$ /usr/local/bin/buildout

This will create a WSGI script in ./parts/DiscoveryServiceProGlueMirror/wsgi

  5. The script as generated can be improved to enable logging, by adding lines to import and call fileConfig from paste.script.util.logging_config, e.g.:
from paste.deploy import loadapp
from paste.script.util.logging_config import fileConfig

configFilePath = '/usr/local/myapp/service.ini'
fileConfig(configFilePath)

application = loadapp('config:%s' % configFilePath)

This could be done by customising the collective.recipe.modwsgi recipe. As an interim measure, a Makefile makes the changes using an ugly set of sed commands :) The Makefile also includes targets to install the script into the right area for Apache to pick it up:






install_wsgi: add_logging
        @echo installing WSGI script ...
        @echo Done.

add_logging:
        @echo Altering WSGI script file to include logging functionality ...
        @cat ${WSGI_SCRIPT_IN_FILE} | sed s/"application = loadapp(\"config:"/"from paste.script.util.logging_config import fileConfig\n\nconfigFilePath = \""/g | sed s/"\")"/"\"\nfileConfig(configFilePath)\n\napplication = loadapp(\"config:\"+configFilePath)"/g > ${TMP_FILE}
        @echo Done.

buildout:
        export http_proxy=${http_proxy}; /usr/local/bin/buildout
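As an illustration of what those sed commands do, the equivalent rewrite can be sketched in Python (the function and file paths here are hypothetical, not part of the deployed Makefile):

```python
import re

# Text spliced in around the loadapp() call by the rewrite; mirrors the
# sed replacement strings above.
LOGGING_PREAMBLE = (
    'from paste.script.util.logging_config import fileConfig\n\n'
    'configFilePath = "{path}"\n'
    'fileConfig(configFilePath)\n\n'
    'application = loadapp("config:" + configFilePath)'
)

def add_logging(script_text):
    """Replace the bare loadapp() call in a generated WSGI script with a
    version that also configures logging from the same ini file."""
    match = re.search(r'application = loadapp\("config:([^"]+)"\)', script_text)
    if match is None:
        return script_text  # nothing to rewrite
    return (script_text[:match.start()]
            + LOGGING_PREAMBLE.format(path=match.group(1))
            + script_text[match.end():])

# Example on a minimal generated script:
original = ('from paste.deploy import loadapp\n\n'
            'application = loadapp("config:/usr/local/myapp/service.ini")\n')
print(add_logging(original))
```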

To build and install:

$ make buildout

$ make

mod_wsgi set up in daemon mode will automatically reload the script content without the need to restart Apache.
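A daemon-mode setup in the Apache config might be sketched as follows (the process group name and process/thread counts are hypothetical; the alias matches the mount point above):

```apache
# Run the Discovery service in its own mod_wsgi daemon process group so
# that touching the WSGI script reloads the application without an
# Apache restart.
WSGIDaemonProcess discovery processes=2 threads=15
WSGIScriptAlias /services /srv/www/wsgi_scripts/discovery.wsgi
<Location /services>
    WSGIProcessGroup discovery
</Location>
```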

  6. N.B. ensure debug = false in the ini file config! Check that the ndgDiscovery.config server setting is consistent with the mount point set by WSGIScriptAlias in the Apache config file.

Tomcat configuration

In order to minimise the number of Tomcat installations, a "multiple instances" configuration has been set up.

Apache Tomcat 6.0.24 (CATALINA_HOME: /opt/apache-tomcat-6.0.24)

Java6 (JAVA_HOME: /opt/SDK6/jdk)

Discovery Web Service

To run the Discovery backend server the following has been installed:

  • develop configuration (CATALINA_BASE: /opt/tomcatInstances/develop)

  • Apache Axis2 1.4 ($CATALINA_BASE/webapps/axis2; axis2.war deployed within Tomcat)

  • A configuration file in $CATALINA_BASE/bin/ defines:
    export JAVA_HOME=/opt/SDK6/jdk
    export JRE_HOME=$JAVA_HOME/jre
    export CATALINA_HOME=/opt/apache-tomcat-6.0.24
    export CATALINA_BASE=/opt/tomcatInstances/develop
    export JAVA_OPTS="-Xmx512M -Xms64M -Dfile.encode=UTF-8 -XX:MaxPermSize=256M -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=\"*|localhost\""
    export AXIS2_HOME=$CATALINA_BASE/webapps/axis2/axis2-1.4

  • The context file ($CATALINA_BASE/conf/context.xml) must contain the following db resources:
         <Resource name="jdbc/discoveryDB" auth="Container"
              type="javax.sql.DataSource" driverClassName="org.postgresql.Driver"
              removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"
              username="xxxx" password="yyyy" maxActive="100" maxIdle="30" maxWait="10000"/>
       <Resource name="jdbc/searchLogDB" auth="Container"
              type="javax.sql.DataSource" driverClassName="org.postgresql.Driver"
              removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"
              username="gggg" password="zzzz" maxActive="100" maxIdle="30" maxWait="10000"/>
  • A service script, /etc/init.d/tomcat, has been made to allow the Discovery service backend Tomcat to be stopped/started. It should define the following environment variables:
    export CATALINA_HOME=/opt/apache-tomcat-6.0.24
    export CATALINA_BASE=/opt/tomcatInstances/develop  

  • This service will run on port 8080


To run the Geonetwork application the following has been installed:

  • develop (CATALINA_BASE: /opt/tomcatInstances/gnTomcatBase)
  • A configuration file in $CATALINA_BASE/bin/ defines:
    export JAVA_HOME=/opt/SDK6/jdk
    export JRE_HOME=$JAVA_HOME/jre
    export CATALINA_HOME=/opt/apache-tomcat-6.0.24
    export CATALINA_BASE=/opt/tomcatInstances/geonetwork
    export JAVA_OPTS="-Xmx512M -Xms64M -Dfile.encode=UTF-8 -XX:MaxPermSize=256M -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=\"*|localhost\""
  • Updates to /etc/init.d/geonetwork have been made to allow GeoNetwork's Tomcat instance to be stopped/started. The following environment variables need to be set:

export CATALINA_HOME=/opt/apache-tomcat-6.0.24
export CATALINA_BASE=/opt/tomcatInstances/geonetwork

  • This service will run on port 8280

Postgres database

Postgres 8.3.1 has been installed in the standard location: /usr/local/pgsql. The database data files and configuration files are stored in the 'data' subdirectory. The postgis extension has also been installed. This database is not currently being backed up.

Postgres should start automatically on reboot via the /etc/init.d/postgresql script.

To start/stop/restart/reload postgres:

As linux user 'postgres':

/usr/local/pgsql/bin/pg_ctl start/stop/restart/reload/status -D /usr/local/pgsql/data

OR, As 'root':

/etc/init.d/postgresql start/stop/restart/status

Passwords can be found in the Secrets box.

Discovery NDG URL redirection service

The NDG redirection service redirects links back to their original URL; during discovery ingestion the original URL is replaced with a redirection link carrying the relevant details. Before the link is redirected, details of the link and the associated dataset are recorded in a logging database, and a counter for the relevant dataset is incremented in the original database.

A converted redirected URL looks like this:;;docTitle=NCAVEO%20field%20experiment%20data

and redirects the link to the original specified url at . Dataset title and unique id are also encoded, to allow easy analysis of which datasets produce the most traffic.

The NDG redirection service endpoint is: and requires the addition of 3 parameters to allow parsing and update of the database: url, docID & docTitle. These need to be encoded using UTF-8 so they can be parsed correctly by the service.
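For illustration, building such a redirect link with correctly encoded parameters might look like this (the endpoint, original URL and document id below are hypothetical; the title matches the example above):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical endpoint: the real redirection-service URL is not
# reproduced in this page.
ENDPOINT = "http://example.org/ndg-redirect"

def make_redirect_url(url, doc_id, doc_title):
    """Build a redirection-service link; urlencode percent-encodes the
    url, docID and docTitle query parameters (UTF-8)."""
    query = urlencode({"url": url, "docID": doc_id, "docTitle": doc_title})
    return "%s?%s" % (ENDPOINT, query)

link = make_redirect_url(
    "http://example.org/data/ncaveo/",  # hypothetical original URL
    "example__DIF__NCAVEO",             # hypothetical document id
    "NCAVEO field experiment data",     # title as in the example above
)
print(link)
```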

The service is available as an egg which can be run within a paster/buildout session (you will need to unpack service.ini as well as config files for both the url tracking database and the main discovery database - these are not in the egg - contact steve.donegan@…). Dependencies, as required, are detailed in the configs below.

Buildout script to run paster on 8081:


Run as mod_wsgi script with buildout



There is a virtualenv-based Python under /usr/local/cows_virtualenv. This Python has numpy and matplotlib installed, and is used to run buildout scripts for both the COWS WXS Services and the COWS Client code.

COWS Services

A buildout script is in /usr/local/cowsserver_buildout/


parts = cows_server_app

recipe = zc.recipe.egg
interpreter = py
find-links =

recipe = collective.recipe.modwsgi
find-links =
config-file = /usr/local/cows_virtualenv/cows_server_app/development.ini



There is a bootstrap script in the cowsserver_buildout directory; to run it, use:

$ /usr/local/cows_virtualenv/bin/python

This creates a custom buildout script. Run this:


The generated WSGI script ./parts/cows_server_app/wsgi should be installed in the Apache WSGI scripts directory.

Note that the pylons ini file is in:


The directory /usr/local/cows_virtualenv/cows_server_app/ contains a lightweight Pylons app that sets up the controllers for COWS. This app was created using the COWS project templates.

TODO: The bootstrap script needs modifying so it adds logging capabilities to the final WSGI document (as per Phil's makefile).

05/08/2009 - Phil has added a similar Makefile. The procedure for update and install is:

$ make bootstrap

$ make


There is a buildout config for the 'cowsclient' app here:





parts = cows_client_app

recipe = collective.recipe.modwsgi
find-links =
config-file = ${buildout:directory}/appconfig/cowsclient.ini


Again this should be run using the virtualenv python and


This creates a custom buildout script. Run this:


The generated WSGI script ./parts/cows_client_app/wsgi should be installed in the Apache WSGI scripts directory.

Alternatively, use the Makefile:

$ make bootstrap

$ make


pyDAP 3.0 is installed in /usr/local/dap. This is being updated (17/06/2009) to use a zc.buildout configuration:


[buildout]
parts = pyDAP

# Configuration mirroring eggs as currently deployed on proglue
[pyDAP]
recipe = collective.recipe.modwsgi
eggs = ndg_security
config-file = ${buildout:directory}/etc/service.ini
find-links =

The eggs list includes NetCDF response and handler plugins and NDG Security filter to intercept requests.


There are two components:

  1. Security Services: an application running security services such as OpenID, Attribute Authority and Session Management
  2. Application Filters: handler filters which are configured with existing applications to protect them

The first is installed in its own mod_wsgi application running over HTTPS. For the second, filters are configured to secure the COWS and pyDAP services.

Security Services

This is installed using the same technique as described above for the Discovery Service Browser interface: a buildout script installs the required eggs in /usr/local/ndg-security/eggs and creates a mod_wsgi script. A Makefile installs the script in the location set up for Apache mod_wsgi scripts. The script is mounted via a WSGIScriptAlias directive in the Apache config file.


[buildout]
parts = NDGSecurity

[NDGSecurity]
recipe = collective.recipe.modwsgi
interpreter = py

# SQLAlchemy is used by OpenID / Session Manager for the authentication database
eggs =
config-file = ${buildout:directory}/config/securityservices.ini
find-links =

See the README file in /usr/local/ndg-security for additional configuration information.

Application Filters

Each application secured with NDG Security is configured with security filters to intercept requests to them. These are applied by making settings in the Paste ini file for the application. This ini file extract shows how the main application is configured at the end of a WSGI pipeline fronted with Authentication and Authorisation filters:


pipeline = AuthenticationFilter AuthorizationFilter myApp

AuthenticationFilter, AuthorizationFilter and myApp refer to sub-sections of the file containing the specific settings for each individual component.
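A hedged sketch of the surrounding ini structure, using the standard Paste Deploy section naming (the section contents, elided here, are application-specific):

```ini
[pipeline:main]
pipeline = AuthenticationFilter AuthorizationFilter myApp

[filter:AuthenticationFilter]
# authentication filter settings ...

[filter:AuthorizationFilter]
# authorisation filter settings ...

[app:myApp]
# main application settings ...
```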

The filters make use of services running in the Security Services application stack described above. The authentication filter is configured to invoke the OpenID Relying Party interface running in the Security Services stack. This prompts the user to enter the OpenID for their home site.

The authorisation filter is configured with an XML policy file which sets which request URIs are to be secured. It also makes callouts to the Attribute Authority and Session Manager web services which similarly, run on the Security Services stack.

pyDAP and COWS services are secured using this configuration.

Vocab Term Editor

Configuration is set in /usr/local/vocab-editor. Ownership should be set so that the Apache user has read permission.


[buildout]
parts = Vocab_Term_Editor

# Configuration mirroring eggs as currently deployed on proglue
[Vocab_Term_Editor]
recipe = collective.recipe.modwsgi
extra-paths = ${buildout:directory}/passwords
eggs = ndgCommon==0.1.1.dev_r5445
config-file = ${buildout:directory}/production.ini
find-links =


PySVN is not installable from an egg, so an egg was created locally from the tarball distribution and referenced with the find-links option above.

PySVN Egg creation

PySVN is not available as an egg so it has to be adapted (all commands executed as root)...

Get Dependencies - the svn development package:

$ yast2

  • Navigate to Software -> Software Management
  • <Alt+S> to search, and enter 'subversion'
  • Pick the svn-devel package and select it with <Alt+T> and the '+' key
  • <Alt+U> to update
  • <Alt+N> - no, in response to the option to install or remove more packages
  • <Alt+Q> to quit

Get tarball distribution:

$ wget

$ tar zxvf pysvn-1.7.0.tar.gz

$ cd pysvn-1.7.0

Follow to get details of how to patch PySVN to make it eggable. See the patch for the changes required. Unfortunately this refers to an older version of PySVN, but it's still possible to hack the changes into 1.7.0(!). Additional hacking was necessary to correctly link to libcom_err in the /lib64 directory. Follow the list of steps in the issue 86 entry to create the egg.

Build and Installation

A Makefile in the installation directory can be used to call zc.buildout and install the WSGI script created:

$ make buildout

$ make

The Apache configuration is set up with a WSGIScriptAlias to pick up the script from the target location.