Version 12 (modified by lawrence, 13 years ago) (diff)


Discussion about MOLES issues and priorities


  • Initial Version, BNL, reporting a discussion between BNL and KON. Based on ndgmetadata1.2.5
  • Modified by Siva, Dom and Kevin with explicit inline comments.
  • Modified BNL, 25 July, based on follow-up discussion between BNL and KON on the 24th ...

Issues on the Table

  1. The CoatHanger
  2. Granules
  3. Stub-B Schema
  4. ISO19139
  5. UML

The aim of this document is to discuss these issues and identify the particular tickets we need to raise towards solving them as soon as we can.


Kev to turn all the issues and tasks in this document into trac tickets, so we can make decisions on the issues, and get on to the coding. Aiming to have this done by planned meeting on Tuesday.


Work is underway to modularise MOLES so that components can be used in the MOLES schema itself and in the stub-B schema. (No new ticket needed, this is ticket:287, but it does need more detail. What is involved?).

  • Noting that stub-b is a major interface for browse and travelling metadata
  • Noting also that we accept that very large deployment lists may occur, but we'll worry about that when it happens.

The CoatHanger

Issue: how do we import material into MOLES? It turns out we already have the dgMetadataDescriptionType:

which should occur in each MOLES record, although it doesn't form part of one of the major entities (Activity, Data Production Tool, Observation Station, Deployment, Data Entity). The descriptionSection would seem to be a useful addendum to each of these in their own right (possibly instead of making it part of the overall description, since we see this additional information being additional attribute(s) of the entities).

  • BNL: Agreed? Then: Ticket Needed: Making a descriptionSection part of each of the major entities, allowing a stub-B to include this information for each of the first-order entities in a natural way.
    • KDO - Not agreed. The description is here to help provide a minimum amount of information where the record is being interpreted outside an environment that acknowledges the metadata record types, by putting these standard fragments in a standard and easily accessible place. It also simplifies the schema by having a single place for this section that all metadata records should have.
      • BNL - ok, the difference of opinion comes down to understanding how MOLES is constructed. In fact, one can think of an Activity as a subclass (specialisation) of dgMetadata. With that in mind, we see immediately that we already have what I wanted ... but it's not quite that simple, because we potentially want to add multiple documents. So what I think we want is something like the following (maybe exploiting the online reference type, which follows):

First cut at UML relationship between dgMetadata and dgActivity etc
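The specialisation idea in the discussion above (an Activity as a subclass of dgMetadata, with a description section that can hold multiple documents) might be sketched roughly as follows. This is a minimal sketch; the class and field names are illustrative, not the agreed MOLES schema.

```python
# Illustrative sketch only: treat each major entity as a specialisation of
# dgMetadata, with the description section able to carry several referenced
# documents. None of these names are taken from the actual schema.

from dataclasses import dataclass, field

@dataclass
class OnlineReference:
    """A link to an external document (could later carry xlink attributes)."""
    href: str
    role: str = "description"

@dataclass
class DgMetadata:
    """Common base: every record carries a description section."""
    record_id: str
    description_section: list = field(default_factory=list)  # OnlineReference items

@dataclass
class DgActivity(DgMetadata):
    """Activity as a specialisation of dgMetadata, inheriting the description."""
    activity_type: str = "campaign"

# An activity record accumulating more than one description document
activity = DgActivity(record_id="badc.nerc.ac.uk__ACT__example")
activity.description_section.append(OnlineReference(href="http://example.org/doc1"))
activity.description_section.append(OnlineReference(href="http://example.org/doc2"))
```

The point of the sketch is just that inheritance gives every first-order entity the description section for free, while the list allows multiple attached documents.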

Issue: The Online Reference type

should evolve towards something that exploits xlink, so that we can indicate whether one expects to insert the linked object, point to the linked object, or render the remote object, and insert it ...

Issue: Ticket Needed: Provide a suggested mechanism of exploiting xlink to do this. (A proposal should be a schema fragment which includes a controlled vocabulary for the attributes of the xlink, recognising that we will be on the bleeding edge here and some future changes in our technology may be necessary).

  • KDO - the mechanism I expected to use was to extend the online reference type into a choice between the existing simple reference and an "xlink type structure" (maybe a citation type structure as well?). However, as Bryan points out, another schema fragment and associated enumerations or vocabs is required.
    • BNL - so I think we're good to go on this one!
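One way the agreed xlink extension might look in practice is sketched below, using Python's standard ElementTree to build an online reference element. The element name and the controlled vocabulary for link behaviour are illustrative assumptions (loosely modelled on the standard xlink:show terms); the real enumeration would be defined by the ticket.

```python
# Sketch of an online reference carrying xlink attributes that indicate
# whether to insert (embed) the linked object, point to it, or render the
# remote object in place. "dgOnlineReference" and SHOW_VALUES are
# illustrative, not taken from the MOLES schema.

import xml.etree.ElementTree as ET

XLINK_NS = "http://www.w3.org/1999/xlink"
ET.register_namespace("xlink", XLINK_NS)

# Hypothetical controlled vocabulary for link behaviour
SHOW_VALUES = {"embed", "new", "replace"}

def make_online_reference(href: str, show: str) -> ET.Element:
    """Build a reference element with simple xlink attributes."""
    if show not in SHOW_VALUES:
        raise ValueError(f"unknown xlink:show value: {show}")
    ref = ET.Element("dgOnlineReference")
    ref.set(f"{{{XLINK_NS}}}type", "simple")
    ref.set(f"{{{XLINK_NS}}}href", href)
    ref.set(f"{{{XLINK_NS}}}show", show)
    return ref

ref = make_online_reference("http://example.org/numsim.xml", "embed")
print(ET.tostring(ref, encoding="unicode"))
```

Validating the `show` value against a fixed vocabulary is the schema-fragment-plus-enumeration idea from the ticket, just expressed in code.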

Issue: NumSim in particular. Here we expect to wait for an ISO19139-compliant version (ticket:284), which will have clear subcomponents targeted for the deployment and data production tools.


Granules

Here the issue is to come up with schema content that maximises the amount of information we can promote from CSML and provides content to clearly indicate to the browse user which granules are of interest.

There are two parts of the Data Entity which are of interest: overall information about the data entity, and the information we put in a data granule.

  • KDO question - I've always assumed that there would be a level of summarisation in the parameters presented via the DE, but I'm getting the feeling that this might not be so...
    • BNL I think the problem is how this is done with respect to the granules. More of this below.

Starting with the data granule:

we see that there is a datamodel id and an instance uri.

Issue: BNL is confused: should we expect the datamodel id to be the uri of the csml document (e.g. equivalent in content to …), and the uri to be a service binding to that instance, e.g.  http://badchost/dX?

  • KDO - there's some confusion due to history here I think. Given what has been said in the past, my expectation was that the data granule ID was the key needed by the relevant services, so the instance was redundant for the NDG. However, it was intended to provide a hook for data that may be accessed outside the NDG SOA.
    • BNL - ok, I think we're agreed that the answer is that for NDG it is a csml id (unadorned with a service binding), and we should use something else for non-NDG data. Which brings me to:
      • The only use case I can think of for non-NDG data in MOLES is as a vehicle for migrating from one harvested discovery format to another ... in which case we probably do want to put somewhere an option for a URL which binds to the data (i.e. including a service binding) ... we might use that for the other use case, which is NDG data for which no data granules exist ... huh? needs more discussion.

Note that the granulecoverage is the spatio-temporal bounding box; it doesn't cover the sort of averaging (if any) used; more on that later.

All the interesting stuff is in the dgParameterSummary ...

Looking through this we can see the following:

  • IsOutput variable (boolean). BNL can't really see the point of this. KON did explain, but this needs revisiting. Decide: In or out?

Siva: In. At BODC, we are considering that if IsOutput is True, then that parameter is visible in data discovery, and invisible if it is False.

  • KDO - whoops, looks like something got lost here... The original intent was to differentiate between fixed parameters, e.g. data taken at a constant height, and non-fixed, aka measured, parameters, such as the temperature at a particular time at the constant height. Siva's case I expected to be dealt with by excluding the parameter from the DE parameter summary, and leaving it to be found at data browse time.
    • BNL: So I think the height of the data parameter is something that belongs in CSML ... and so this should be out, and replaced (for BODC) with something that covers the BODC use case (but exactly what use case supports hiding a parameter from discovery?)
  • The next thing is a choice of four items, only one of which should appear for any parameter: either the value, the range of values, an enumeration list of the value types, or a compound group. Yes/No?? If so, ticket needed: it needs to be a choice as to whether this thing exists, and it needs a name. Also another ticket: Roy to give us a few practical examples of how the parameter group is intended to work
    • Siva: Yes, at BODC we are using the following strategy. Go for dgRangeDataParameter and check if HighValue = LowValue, in which case we use dgValueDataParameter. The way we get the HighValue and LowValue is by opening each Series data file (QXF file); the min and max value for the required data channel is obtained. Once the limits for each Series have been obtained, the extremes may be determined to give the limits for the dataset. We cannot envisage using dgEnumerationParameter.
    • Dominic: I am concerned that it's not practical to obtain the High and Low values for a parameter when you are dealing with very (very) large datasets e.g. atmospheric model runs. Not practical in the sense that it would increase the processing time to generate CSML by many orders of magnitude.
      • BNL: I don't think we have to use it ... but I think it would be very cool from a model use perspective.
    • BNL: so the bottom line is that the schema needs to be modified as suggested.
  • The other elements are rather obvious, but ...
    • Note that we would expect to use the dgStdParameterMeasured variable to encode both the phenomenon name and the cell bounds (so we get the averaging information here). Can we promote something useful from the CF cell methods? Ticket Needed
  • I suppose we imagine a granule consisting of multiple phenomena with multiple feature types, but we would expect any one phenomenon in one granule to have one feature type (Andrew/Dominic??). In which case the feature type name and the feature type catalogue from which it is governed should also be encoded per parameter. However, one might argue that the assumption might be violated, and in any case, at this point the user might be pointed to the WFS level. It would certainly be simpler, and possibly more useful, to generate a list of feature types present in the granule (along with their FTC antecedents). Yes/No?? Ticket Needed?
    • Dominic: I think that assumption (one phenomenon -- one feature type (for a given granule)) is correct.
    • BNL: so the ticket is needed, and we should do this.
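The range-versus-value strategy Siva describes above can be sketched as follows: scan each series for the min and max of a channel, combine across series, and emit a value parameter when the limits coincide, otherwise a range parameter. Function and result-key names are illustrative, not schema terms.

```python
# Sketch of the BODC-style parameter summary: per-series min/max, combined
# to dataset limits, collapsing to a single value when high == low.
# "summarise_parameter" and the dict keys are illustrative names.

def summarise_parameter(series_values):
    """series_values: list of per-series value sequences for one channel."""
    low = min(min(s) for s in series_values)
    high = max(max(s) for s in series_values)
    if low == high:
        # Constant channel: use a value parameter, per the BODC strategy
        return {"kind": "dgValueDataParameter", "value": low}
    return {"kind": "dgRangeDataParameter", "low": low, "high": high}

# Two series of measurements (illustrative numbers)
print(summarise_parameter([[7.1, 8.4, 9.0], [6.5, 8.8]]))
# A constant parameter collapses to a single value
print(summarise_parameter([[10.0, 10.0], [10.0]]))
```

Dominic's concern applies here too: for very large datasets the min/max scan is the expensive part, so the collapse rule is cheap but the inputs may not be.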

Now we have this information at the granule level, how much of it should be summarised up at the data entity level by the MOLES creator? (Ticket: we would need tools to do this)

BNL: The argument for aggregation is to make it easier to generate the discovery level information which doesn't see the individual granule information. Easier to do at moles creation time than in the xquery for discovery!

The overall material includes the following data summary:

It is a moot question as to how much of this needs to be replicated from the granule content. Tickets needed on some of the following.

  • BNL would argue that the spatio-temporal coverage should be the *union* of the granule coverages (need a tool to produce this).
  • The parameter coverage is a bit more complicated, because now we think we could have, for example, temperature monthly means and temperature annual means in the granules. I think the only thing that makes sense is to aggregate the granule parameter summaries. In which case why bother? We can parse the granule content. Remove?
  • There ought however to be a consolidated list of feature types present ... as well. Add?
  • The other elements seem appropriate.
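The tool implied by the first bullet above (data entity coverage as the union of granule coverages) might look like this. Bounding boxes here are assumed to be (west, south, east, north) tuples; this naive union ignores dateline wrapping, which a real tool would need to handle, and the temporal extent would union the same way.

```python
# Sketch of the coverage-union tool: the data entity's spatial coverage is
# the smallest box containing all granule boxes. Boxes are assumed to be
# (west, south, east, north); dateline wrapping is deliberately ignored.

def union_coverage(bboxes):
    """Return the bounding box enclosing every granule bounding box."""
    wests, souths, easts, norths = zip(*bboxes)
    return (min(wests), min(souths), max(easts), max(norths))

granules = [(-10.0, 50.0, 2.0, 60.0), (-5.0, 45.0, 8.0, 55.0)]
print(union_coverage(granules))  # (-10.0, 45.0, 8.0, 60.0)
```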

(KDO - ok, at this point I'm going to talk about summarisation: I thought there was a need to actively summarise the data to aid understanding, with the data browse phase dealing with the real detail. Also, this summarisation could take into account the needs of those from other disciplines who may need to access the data.)

(BNL - I've contradicted myself here. Either we should summarise the parameters and the features, or we should summarise neither ... I think both, to simplify that xquery ...)

Now looking at the other two elements in the data entity which are relevant:

  • The dgDataSet type should allow 'Mixed' (as, for example, both model and obs may be included in a dataset). One assumes these are effectively booleans? Ticket
  • I don't really understand dgBasicData and dgDerivedData. In particular, the basic data context is really about listing the feature types, but we think we have that elsewhere, and we have in the dgDataSet information as to whether the data is simulated or an analysis. The only other option is that the data has been processed (derived) in some way, in which case there is utility in providing links to underlying datasets, but these ought to be DataEntities, not data granules ... assuming that the details of the derivation/processing are in the dpt, the links are all that are really needed. The choice of timeseries, integration etc. is redundant, as that information exists in the feature type and phenomenon information. Remove most of this section in the schema?

(KDO - earlier versions of the schema had a comment for dgDataObjectType along the lines of "why isn't this just a term from a vocab to identify the "feature type", with answer that some data entity types might have attributes only of interest to discovery, that would rarely be populated for other types, and just confuse things. Examples are: input data entities/granules for the "derived DEs"; and a notional dgImage which would have details about the camera used and pixel resolution. Hence, restricting the number of types to only those with such attributes (suggestions wanted), and having a list of CSML feature types involved is probably a good way to go)

(BNL I'm still convinced that dgDataObjectType is covered by the dgGranule content (with feature type added), so it should go).


ISO19139

Given we don't have any schema for IOC ISO19139, and the WMO ISO19139 is a tiny extension and no contraction, we should first look at the example documents and decide how much we think we could get away with by

  • exporting just the same content we have in a DIF, but in ISO19139 (i.e. requiring Kev to construct an appropriate xquery, which could simply be the DIF one minimally changed so the output is in the right places for ISO19139), and
  • importing the WMO via a simple extension to BROWSE (bnl problem)


UML

It's quite clear that the MOLES data model needs to be in UML. Ideally we'd want to be able to autogenerate the schema via ShapeChange, but that's a long way off; meanwhile, the docs should make as much as possible clear with UML fragments.