wiki:NDGRoom101Meeting

Version 6 (modified by mpritcha, 11 years ago)

--

NDG "Room 101" Meeting

Meeting to remind ourselves of what we have, why, & what we have learned along the way.
Held in CR03 on Friday 22 August 2008.

Attendees

  • Sam Pepler
  • Dom Lowe
  • Phil Kershaw
  • Steve Donegan
  • Kevin Marsh
  • Bryan Lawrence
  • Stephen Pascoe
  • Matt Pritchard
  • Calum Byrom (by phone)

Intro from Matt

  • Good opportunity to see where we are & look at improving the way we do things.
  • Room 101 ...ok, not quite. We're not going to vote for our least favourite components & see them disappear. In reality, design & development decisions have already been made, often for good reason (but it's worth reminding ourselves what those were & seeing if we're happy with the decision-making process ...not just on this project).
  • NDG took some excursions down some blind alleys. Did we learn things along the way? Are there lessons to be learned from how we got down these blind alleys (without dwelling on what was down there?)

Review of Development Approach

How did we go about designing and developing the system?

  • Use of RM-ODP architecture
  • What approach did we use to the software life-cycle and did it work?
  • Development across multiple institutions

Review of current components

  • COWS
  • CSML
  • MOLES (v2+incomplete infrastructure, v3 on way)
  • Discovery (v2)
  • Security
  • Vocabserver

Suggested points for discussion for each component:

  • Overview (brief!)
    • Reminder of what it does, where it fits in
  • How did we get there?
    • Was it designed or did it grow organically?
    • (Would we do the same again, starting from scratch?)
  • SWOT
    • Strengths
      • Particular successes / things learned
      • Fitness for purpose
        • Does it do what it was specified to?
        • Does the spec meet current needs?
        • Have people been able to deploy / integrate / use it?
    • Weaknesses
      • What obstacles have been encountered?
      • Have these been overcome?
      • By satisfactory means?
    • Opportunities
      • What have we learned while building this?
      • What is the roadmap for this component?
    • Technology (...or Threats?)
      • Have we used appropriate technologies?
        • ElementTree for XML processing
        • eXist for XML databases
        • SOAP toolkits
        • WS-Security
        • Pylons
        • Postgres
      • Has development been guided too much / enough by available technology?
      • Have we reinvented wheels?
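For context on the first item above: ElementTree is Python's standard-library XML API. A minimal sketch of the style of processing it enables (the sample catalogue document and element names here are invented for illustration, not taken from NDG code):

```python
# Illustrative only: parse a small XML document and pull out child values
# using Python's standard-library ElementTree.
import xml.etree.ElementTree as ET

doc = """<catalogue>
  <dataset id="ds1"><title>Sea surface temperature</title></dataset>
  <dataset id="ds2"><title>Air pressure</title></dataset>
</catalogue>"""

root = ET.fromstring(doc)
# findall() returns the direct <dataset> children; find() locates each <title>
titles = [ds.find("title").text for ds in root.findall("dataset")]
print(titles)  # ['Sea surface temperature', 'Air pressure']
```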

Going forward

What new requirements lie ahead?

  • NDG MSI
  • ??

Sum up

Notes from Meeting

I have tried to attribute comments to people where my notes captured this. Please read and amend as necessary. Where my own interpretation of my notes has been added post-hoc, this is indicated in italics.

Review of Development Approach

(BL) NDG wasn't [necessarily intended to be] a software development project at the start. As such, it didn't have very clear requirements at the start. We had to "do" the buzzwords in order to "do" e-science as far as funding was concerned.

How to roll out deployments: we are still asking questions now as to how we integrate security into BADC. Could it be done in smaller chunks, or do we have to wait until a whole chunk is ready before attempting deployment?

(Sam) Initially there was a "do everything" approach [i.e. see which things were successful to aid in narrowing down candidate technologies for solving certain problems?]

(Stephen) Learned about modular software development, but (Bryan) at the time, there were no partners actually signed up to implementing [specific components]. It seemed that everyone was waiting for a complete system.

In order to make things more readily implementable [do we mean deployable?], things need to be delivered in smaller chunks, and this needs to be done all along.

Why didn't this happen?

  • Lost people at certain institutions (esp. key ones with links to Data Centres)
  • Had "services" but no [p??? ...sorry can't read my own notes] to start with

Was there a timetable for implementing these things?

  • Comes down to tightly-defined requirements, or lack thereof.

Example : CSML

  • (Dom) Suffers from being at the end of the data delivery chain. [All other bits in the chain need to be deployed in order for this to be tried out properly / generate useful feedback]
  • (Matt) Should timetable for deployments be tailored to position in data chain?

(Calum) NDG development seemed to have a very adaptive approach, with a basic idea of what it wanted to do. This maps well onto the agile software development approach (cf. predictive, where everything is very well planned out from the start).

  • Good because not too much time is spent down dead ends
  • But relies on very good communication between all involved

There was perhaps some attempt to be predictive in some parts of the project (some bits were agile, some predictive), but not enough attention was paid to the relative pace of the different development streams.

RM-ODP model