Discussion about MOLES issues and priorities


  • Initial Version, BNL, reporting a discussion between BNL and KON. Based on ndgmetadata1.2.5
  • Modified by Siva, Dom and Kevin with explicit inline comments.
  • Modified BNL, 25 July, based on followup discussion between BNL and KON on the 24th ...
  • Modified RKL, KDO, 28 July with inline comments.
  • Modified BNL, 1 August, restructured, clear ticket based actions. This page should no longer be modified, only modify the tickets ...

Issues on the Table

  1. The CoatHanger:

    Develop dgOnlineReference to include xlink option
    Identify elements that may need to include xlink references

  2. Granules

    [M] MOLES dgDataGranule re-design
    [M] Modifications to dgParameterType
    [m] Where should entry_id go for incoming DIFs
    [M] CSML2 will provide cell_methods, which will require change in dgStdParameterMeasured

  3. Stub-B Schema

    [m] massive numbers of related entities in moles
    [M] Need a schema for Stub-B

  4. ISO19139

    [M] Create NDG Profile Schema of ISO19139
    [M] code to produce NDG ISO19139 records from MOLES
    [M] ISO xquery fails

  5. General Structure

    [M] dgBasicData doesn't match feature type concepts
    Convert elements under dgDataSetType to include list of included feature types
    Modification to allow dgDataSetType to state the different types of data granules within the data set.
    Remove dgDataObjectType

The aim of this document is to discuss these issues and identify specific actions.


Stub-B Schema

Work is underway to modularise MOLES so that components can be used in the MOLES schema itself and in the stub-B schema. (No new ticket needed, this is ticket:287, but it does need more detail. What is involved?)

  • Noting that stub-b is a major interface for browse and travelling metadata
  • Noting also that we accept that very large deployment lists may occur, but we'll worry about that when it happens.

The CoatHanger?

Issue: how do we import material into MOLES? It turns out we already have the dgMetadataDescriptionType, which should occur in each moles record, although it doesn't form part of one of the major entities (Activity, Data Production Tool, Observation Station, Deployment, Data Entity). The descriptionSection would seem to be a useful addendum to each of these in their own right (possibly instead of making it part of the overall description, since we see this additional information being additional attribute(s) of the entities).

  • BNL: Agreed? Then: Ticket Needed:Making a descriptionSection part of each of the major entities, allowing a stub-b to include this information for each of the first order entities in a natural way.
  • KDO - Not agreed. The description is here to help provide a minimum amount of information where the record is being interpreted outside an environment that acknowledges the metadata record types, by putting these standard fragments in a standard and easily accessible place. It also simplifies the schema by having a single place for this section that all metadata records should have.
  • BNL - ok, the difference of opinion comes down to understanding how MOLES is constructed. In fact, one can think of an Activity as a subclass (specialisation) of !dgMetadata. With that in mind, we see immediately we already have what I wanted ... but it's not quite that simple because we potentially want to add multiple documents. So what I think we want is something like the following (maybe exploiting the online reference type which follows):

First cut at UML relationship between dgMetadata and dgActivity etc

Issue: The Online Reference type should evolve towards something that exploits xlink, so that we can indicate whether one expects to insert the linked object, point to the linked object, or render the remote object and insert it ...

Issue: Ticket Needed: Provide a suggested mechanism of exploiting xlink to do this. (A proposal should be a schema fragment which includes a controlled vocabulary for the attributes of the xlink, recognising that we will be on the bleeding edge here and some future changes in our technology may be necessary).

  • KDO - the mechanism I expected to use was to extend the online reference type into a choice between the existing simple reference and an "xlink type structure" (maybe a citation type structure as well?). However, as Bryan points out, another schema fragment and associated enumerations or vocabs is required.
  • BNL - so I think we're good to go on this one!
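
As a sketch of the kind of mechanism under discussion, the following shows how a consumer might read an xlink-style online reference and validate its usage term against a controlled vocabulary. The attribute name `use` and the vocabulary terms are illustrative assumptions, not agreed NDG names, and the real proposal would be a schema fragment rather than code.

```python
import xml.etree.ElementTree as ET

# The xlink namespace is standard; the "use" attribute and its vocabulary
# below are hypothetical placeholders for whatever the schema fragment
# finally defines.
XLINK = "http://www.w3.org/1999/xlink"
USE_VOCAB = {"insert", "reference", "render"}

def read_online_reference(xml_text):
    """Return (href, use) from a dgOnlineReference-like element,
    validating 'use' against the controlled vocabulary."""
    elem = ET.fromstring(xml_text)
    href = elem.get("{%s}href" % XLINK)
    use = elem.get("use", "reference")  # default: just point at the object
    if use not in USE_VOCAB:
        raise ValueError("unknown xlink usage term: %r" % use)
    return href, use

example = (
    '<dgOnlineReference xmlns:xlink="http://www.w3.org/1999/xlink" '
    'xlink:href="http://example.org/doc.xml" use="insert"/>'
)
```

The point of the controlled vocabulary is that a stub-B consumer can decide, without fetching the target, whether it is expected to inline, link to, or render the remote object.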

Issue NumSim in particular. Here we expect to wait for an ISO19139 compliant version (ticket:284), which will have clear subcomponents targeted for the deployment and data production tools.


Granules

Here the issue is to come up with schema content that maximises the amount of information we can promote from CSML and provides content that clearly indicates to the browse user which granules are of interest.

There are two parts of the Data Entity which are of interest: overall information about the data entity, and the information we put in a data granule.

  • KDO question - I've always assumed that there would be a level of summarisation in the parameters presented via the DE, but I'm getting the feeling that this might not be so...
  • BNL I think the problem is how this is done with respect to the granules. More of this below.

Starting with the data granule, we see that there is a datamodel id and an instance uri.

Issue: BNL is confused: should we expect the datamodel id to be the uri of the csml document (i.e. equivalent in content to the csml instance), and the uri to be a service binding to that instance, e.g. http://badchost/dX?

  • KDO - there's some confusion due to history here I think. Given what has been said in the past, my expectation was that the data granule ID was the key needed by the relevant services, so the instance was redundant for the NDG. However, it was intended to provide a hook for data that may be accessed outside the NDG SOA.
  • BNL - ok, I think we're agreed that the answer is that for NDG it is a csml id (unadorned with a service binding), and we should use something else for non-NDG data. Which brings me to:
    • The only use case I can think of for non-NDG data in MOLES is as a vehicle for migrating one harvested discovery format to another ... in which case we probably do want to put somewhere an option for a URL which binds to the data (i.e. including a service binding) ... we might use that for the other use case, which is NDG data for which no data granules exist ... huh? needs more discussion.
    • OK, I now understand that this is how instance should be used. However, I've raised an issue ticket on how we populate incoming stuff with this ... ticket:463

Note that the granulecoverage is the spatio-temporal bounding box; it doesn't cover the sort of averaging (if any) used, more on that later.

All the interesting stuff is in the dgParameterSummary ...

Looking through this we can see the IsOutput? variable (boolean).

  • BNL can't really see the point of this. KON did explain, but this needs revisiting. Decide: In or out?
  • Siva: In. At BODC, we are considering that if IsOutput? is True, then that parameter is visible in data discovery, and invisible if it is False.
  • KDO - whoops, looks like something got lost here... The original intent was to differentiate between fixed parameters, e.g. data taken at a constant height, and non-fixed, aka measured, parameters, such as the temperature at a particular time at the constant height. Siva's case I expected to be dealt with by excluding the parameter from the DE parameter summary, and leaving it to be found at data browse time.
  • BNL: So I think the height of the data parameter is something that belongs in CSML ... and so this should be out, and replaced (for BODC) with something that covers the BODC use case (but exactly what use case supports hiding a parameter from discovery?)
  • KDO - height? Is this visibility? Anyway, this isn't the point of the flag (new name needed?), the concept behind which has been useful in other areas.
  • Roy: Seems to be a misunderstanding here. I was using IsOutput? to hide co-ordinate channels (date/time, depth, CTD pressure) as I thought that was how MOLES was to be used. Kicking them out altogether is just as good for me.
  • KDO - Good, if I understand you aright, then Siva's point now makes sense to me. I'd rather use the word "mark" than "hide", but the point is that you might want this parameter there, but you don't want it mistaken for a measured value. Hence I say it stays. However, if Bryan still feels strongly, it could be made optional. This would then require consistent usage within a DP, and if it isn't there then "co-ordinate variables" should be left out.

The next thing is a choice of four items, only one of which should appear for any parameter: either the value, or the range of values, or an enumeration list of the value types, or a compound group. Yes/No?? If so, ticket needed: it needs to be a choice as to whether this thing exists at all, and it needs a name (now in ticket:460).
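
To make the intended choice semantics concrete, here is a minimal sketch of a validator enforcing that at most one of the four alternatives appears per parameter summary (zero allowed if the whole choice is made optional). The element names follow the discussion, but treating the summary as a plain dict is purely illustrative.

```python
# The four alternative forms a parameter summary may take; names follow
# the schema discussion (value, range, enumeration, compound group).
CHOICES = ("dgValueDataParameter", "dgRangeDataParameter",
           "dgEnumerationParameter", "dgParameterGroup")

def validate_parameter_summary(summary):
    """Return the single alternative present in 'summary', None if the
    optional choice is absent, and raise if more than one appears."""
    present = [name for name in CHOICES if name in summary]
    if len(present) > 1:
        raise ValueError("only one of %s may appear, got %s"
                         % (CHOICES, present))
    return present[0] if present else None
```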

Also another ticket: Roy to give us a few practical examples of how the parameter group is intended to work

  • Roy: The primary reason for this is the way we handle date/time in BODC, which is to carry two parameters (days elapsed since the start of the Gregorian Calendar and time within day), BUT we have now decided that the inclusion of this was down to a misunderstanding (see above) about what was to be done with co-ordinate data channels in MOLES. The other thing that was in the back of my mind was how we handle data quality information (parameter + flag) but I now see this is more of a CSML issue than a MOLES issue. So, I think parameter groups are dropping off our radar
  • KDO - It was also a way to link the GCMD valids to BODC variables in a controlled way, without putting the GCMD terms into structured keywords.
  • BNL - now recognises this is useful for vectors etc ...
  • Siva: Yes, at BODC we are using the following strategy: go for dgRangeDataParameter and check if HighValue? = LowValue?, in which case we use dgValueDataParameter. The way we get the HighValue? and LowValue? is by opening each Series data file (QXF file) and obtaining the min and max value for the required data channel. Once the limits for each Series have been obtained, the extremes may be determined to give the limits for the dataset. We cannot envisage using dgEnumerationParameter.
  • Dominic: I am concerned that it's not practical to obtain the High and Low values for a parameter when you are dealing with very (very) large datasets e.g. atmospheric model runs. Not practical in the sense that it would increase the processing time to generate CSML by many orders of magnitude.
  • BNL: I don't think we have to use it ... but I think it would be very cool from a model use perspective.
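
The BODC strategy Siva describes can be sketched as follows, assuming the per-file (low, high) limits have already been extracted from the QXF files (reading QXF itself is out of scope here, and the returned element names are the ones from the schema discussion):

```python
def summarise_channel(per_file_limits):
    """Aggregate per-file (low, high) limits for one data channel and
    choose between a value and a range parameter: the dataset extremes
    are the min of the lows and the max of the highs, collapsing to a
    single value when they coincide."""
    low = min(lo for lo, hi in per_file_limits)
    high = max(hi for lo, hi in per_file_limits)
    if low == high:
        return ("dgValueDataParameter", low)
    return ("dgRangeDataParameter", (low, high))
```

For the very large datasets Dominic mentions, the same aggregation applies, but the per-file limits would have to come from existing file metadata rather than a full scan.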

The other elements are rather obvious, but ...

  • Note that we would expect to use the dgStdParameterMeasured variable to encode both the phenomenon name and the cell bounds (so we get the averaging information here). Can we promote something useful from the CF cell methods? Ticket Needed
  • Roy: This worried me a little at first, but the more I think about it, the more I think it might help, as it exposes the two items of information needed to map CF to a BODC PUV term side by side, making them a much easier target.
  • KDO - so you'll explain to me why this isn't a parameter group Bryan?
  • BNL - because it's part of the dgStandardParameterMeasured ... but this will depend on ticket:464 and ticket:465.

I suppose we imagine a granule consisting of multiple phenomena with multiple feature types, but we would expect any one phenomenon in one granule to have one feature type (Andrew/Dominic??). In which case the feature type name and the feature type catalogue from which it is governed should also be encoded per parameter. However, one might argue that the assumption might be violated, and in any case, at this point the user might be pointed to the WFS level. It would certainly be simpler, and possibly more useful, to generate a list of feature types present in the granule (along with their FTC antecedents). Yes/No?? Ticket Needed?

  • Dominic: I think that assumption (one phenomenon -- one feature type, for a given granule) is correct.
  • BNL: so the ticket is needed, and we should do this .. now part of ticket:460

Now we have this information at the granule level, how much of it should be summarised up at the data entity level by the moles creator? (ticket:466)

  • BNL: The argument for aggregation is to make it easier to generate the discovery level information which doesn't see the individual granule information. Easier to do at moles creation time than in the xquery for discovery!

The overall material includes the following data summary:

It is a moot question as to how much of this needs to be replicated from the granule content. Tickets needed on some of the following

  • BNL would argue that the spatio-temporal coverage should be the *union* of the granule coverages (ticket:466).
  • KDO - would other data providers like to comment on what they want to do for their data?
  • The parameter coverage is a bit more complicated, because now we think we could have, for example, temperature monthly means and temperature annual means in the granules. I think the only thing that makes sense is to aggregate the granule parameter summaries. In which case why bother? We can parse the granule content. Remove?
  • BNL - no, where the granules are self consistent, then a summary would make sense, and when it doesn't, it doesn't. This will be a problem for the stub-B viewer, and the maintainer, but isn't a conceptual problem for the schema, provided the summaries are optional.
  • There ought however to be a consolidated lists of feature types present ... as well (ticket:451).

The other elements seem appropriate.

(KDO - ok, at this point I'm going to talk about summarisation: I thought there was a need to actively summarise the data to aid understanding, with the data browse phase dealing with the real detail. Also, this summarisation could take into account the needs of those from other disciplines who may need to access the data.)

(BNL - I've contradicted myself here. Either we should summarise the parameters and the features, or we should summarise neither ... I think both, to simplify that xquery ... which is now what I think we have in our tickets).

Now looking at the other two elements in the data entity which are relevant:

  • The dgDataSetType should allow 'Mixed' (as, for example, both model and obs may be included in a dataset). One assumes these are effectively booleans? Ticket
  • I don't really understand dgBasicData and dgDerivedData. In particular, the basic data context is really about listing the feature types, but we think we have that elsewhere, and we have in the dgDataSet information as to whether the data is simulated or an analysis. The only other option is that the data has been processed (derived) in some way, in which case there is utility in providing links to underlying datasets, but these ought to be DataEntities? not data granules ... assuming that the details of the derivation/processing are in the dpt, the links are all that are really needed. The choice of timeseries, integration etc is redundant as that information exists in the feature type and phenomenon information. Remove most of this section in the schema?

(KDO - earlier versions of the schema had a comment for dgDataObjectType along the lines of "why isn't this just a term from a vocab to identify the "feature type", with answer that some data entity types might have attributes only of interest to discovery, that would rarely be populated for other types, and just confuse things. Examples are: input data entities/granules for the "derived DEs"; and a notional dgImage which would have details about the camera used and pixel resolution. Hence, restricting the number of types to only those with such attributes (suggestions wanted), and having a list of CSML feature types involved is probably a good way to go)

(BNL I'm still convinced that dgDataObjectType is covered by the dgGranule content (with feature type added), so it should go).

(KDO - I'm not, but if no one comes forward to provide the relevant attributes, then it should go... for the moment ;-)


ISO19139

Given we don't have any schema for IOC ISO19139, and the WMO ISO19139 is a tiny extension with no contraction, we should first look at the example documents and decide how much we think we could get away with by

  • exporting just the same content we have in a DIF, but in ISO19139 (i.e. requiring Kev to construct an appropriate xquery, which could simply be the DIF one minimally changed so the output is in the right places for ISO19139), and
  • importing the WMO via a simple extension to BROWSE (bnl problem)


It's quite clear that the MOLES data model needs to be in UML. Ideally we'd want to be able to autogenerate the schema via ShapeChange?, but that's a long way away; meanwhile, the docs should make as much as possible clear with UML fragments.

  • (KDO - 19115 was in UML: 19139 was then necessary. 'nuff said!)
  • (BNL - true, but xml-schema alone isn't good enough either ... and the whole point of shapechange is that it encodes the decisions that are made about the actual xml-schema, but let's not go there now :-)