Meeting tomorrow, 11/3, 6am PST

Hi,

My homework from the last meeting was to talk to Hiro about interoperability. I have done so, and Hiro asked me to discuss this with Cees, with whom I communicated by email (see my earlier email today). I'll continue this investigation with people who have pursued interoperability within the IETF in the past.

I also discussed with Hiro how we can continue our engagement with OGSA. We are scheduled to be on the OGSA call next Wednesday, 11/9. We will address the homework we received at the F2F after GGF, which we took up at the first meeting after GGF.

The assignment for the meeting tomorrow was for everyone to think of the dependencies they may have on others, so that we can start tracking the progress we are making towards reference implementations. I would like to start tracking this using GridForge trackers.

Please use our usual phone numbers:
+1 866-639-4732 (toll free within the US), code 2362906 (followed by a '#' sign)
+1 574-948-0379 (toll & international)

Thanks,
Dejan.
-----Original Message-----
From: Milojicic, Dejan S
Sent: Wednesday, October 26, 2005 6:49 AM
To: 'cddlm-wg@ggf.org'
Subject: Notes from the meeting
Attendees: Stuart, Steve, Ayla, Dejan.
Notes taker: Dejan.
We discussed how to demonstrate that something constitutes a reference implementation. We suggested the following steps:
1. Individual components tests (every owner will contribute one).
2. Use cases:
a) Single app
b) Multi-tier, one machine
c) Multi-tier, multi-machine
3. Optional: Grid computing apps (Blast, Globus)
4. Optional: Scale experiments
Steve suggested that he will write a draft of the test components by next week, and a more complete proposal in two weeks.
Everyone will write a list of the dependencies they have on other components so that we can create a roadmap from now until the next GGF. One good example of a dependency is the component tests, which should be available as early as possible so that reference implementations can test their compatibility against them. We do not want to track the detailed plans of individual reference implementations, but we do want some sense of the overall progress we are making and whether we will have reference implementations completed by the next GGF.
Dejan will talk to Hiro today to find out about the proofs for interoperability and compatibility (if any).
Thanks,
Dejan.

Hi,

Please find attached a file that shows my current plan for the CDL component test cases. In order to test CDL resolution, I think we need a special API. I will develop unit-test code with such an API if we agree on that.

Thanks,
Jun

Milojicic, Dejan S wrote:

Jun Tatemura wrote:
this is really interesting, and aligned with a lot of my thoughts. I think it ought to be possible to avoid having specific JUnit tests or a JUnit test runner. I'd expect most JUnit test methods to consist of the following:
1. a test method that loads a document
2. then asserts that certain things have a value, e.g.

    public void testValid1() throws Exception {
        Document d = load(VALID_1);
        assertResolvesTo(d, "cdl:system/app:a1/@app:attr", "value");
    }

or asserts that the load failed in some interesting way:

    public void testWrongDocNamespace() throws Exception {
        assertInvalidCDL(CDL_DOC_WRONG_NAMESPACE, WRONG_NAMESPACE_TEXT);
    }

Failures are just as interesting as successes, and right now we lack enough fault standardisation to begin to test those uniformly. It is not enough to assert that the doc failed to load; you want the doc to fail in the way you expected (this is important for regression testing, believe me), yet that is hard to do uniformly unless you wrap every single XML parser fault.

The point is, the Java side is pretty simple: primarily a declaration of a doc to load, and then either an expected exception or some XPath expressions and the values they resolve to.
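As a sketch of what the helpers behind such tests could look like: the names load, assertResolvesTo and assertInvalidCDL come from the examples above, but the signatures and implementations here are assumptions, built only on the standard JAXP parsing and XPath APIs.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;

// Hypothetical test-support helpers; names are taken from the
// examples above, signatures are illustrative assumptions.
public class CdlTestSupport {

    // Parse a CDL document from a string; a real harness would load
    // the test resource named by a constant such as VALID_1.
    static Document load(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        return f.newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
    }

    // Assert that an XPath expression resolves to the expected value.
    static void assertResolvesTo(Document d, String xpath, String expected)
            throws Exception {
        String actual = XPathFactory.newInstance().newXPath()
                .evaluate(xpath, d);
        if (!expected.equals(actual)) {
            throw new AssertionError(xpath + " resolved to '" + actual
                    + "', expected '" + expected + "'");
        }
    }

    // Assert that parsing fails AND that the failure message contains
    // the expected text -- i.e. the doc fails "in the way you expected".
    static void assertInvalidCDL(String xml, String expectedText)
            throws Exception {
        try {
            load(xml);
        } catch (SAXException e) {
            if (e.getMessage().contains(expectedText)) {
                return; // failed in the expected way
            }
            throw new AssertionError("wrong failure: " + e.getMessage());
        }
        throw new AssertionError("document loaded but should have failed");
    }

    public static void main(String[] args) throws Exception {
        Document d = load("<a xmlns='urn:x'><b>value</b></a>");
        assertResolvesTo(d, "/*/*[local-name()='b']", "value");
        assertInvalidCDL("<a><unclosed></a>", "unclosed");
        System.out.println("ok");
    }
}
```

The design point is in assertInvalidCDL: it passes only when the parse fails and the message contains the expected text, which is what makes the negative tests useful for regression testing.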
All of this could be expressed in XML test documents: docs that declare both the test source and either an expected exception or a set of assertions. For a failing document, something like:

  <t:test id="uuid:787878-acb0-0bca">
    <t:metadata>
      <!-- rdf metadata -->
      <dc:author/>
    </t:metadata>
    <t:description>Load a CDL document with an invalid namespace</t:description>
    <t:source>
      <cdl:cdl xmlns:cdl="http://example.org/23"
               xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" />
    </t:source>
    <t:exception>
      <t:text t:required="false">
        <t:description>this is the xerces message</t:description>
        <t:value>Cannot find the declaration of element 'cdl:cdl'</t:value>
      </t:text>
    </t:exception>
  </t:test>

For a succeeding doc, t:exception would be absent and we would have a list of assertions about path values:

  <t:assertions>
    <t:assertResolvesTo>
      <t:xpath>cdl:system/app:a1/@app:attr</t:xpath>
      <t:value>value</t:value>
    </t:assertResolvesTo>
    <t:assertNotResolved>
      <t:xpath>cdl:system/app:a2</t:xpath>
    </t:assertNotResolved>
    <t:assertResolvesTo>
      <t:xpath>cdl:system/app:a1/@app:attr</t:xpath>
      <t:lazyref>cdl:system/app:a3</t:lazyref>
    </t:assertResolvesTo>
  </t:assertions>

The value of this is:
1. it's platform neutral
2. the miracle that is XSLT could, in theory, generate documentation from the test data, the way these people do: http://www.w3.org/2001/sw/DataAccess/tests/

That W3C working group has an interesting policy: http://www.w3.org/2001/sw/DataAccess/tests/README.html
- all issues must come with a test case
- every test case is bound back to an issue, which is in the metadata
- every test case also has a status: all implementations are required to implement "approved" test cases, while the others (NotClassified, Rejected, Obsoleted and Withdrawn) can be run but are not mandatory for an implementation.

I think this approach will work for CDL, though we will need to evolve the list of assertions as we go on. I'd also add a <t:assertFailed> assertion, which asserts that something nested inside it did not pass, to simplify the set.
With that, I could replace the

  <t:assertNotResolved>
    <t:xpath>cdl:system/app:a2</t:xpath>
  </t:assertNotResolved>

with the far longer

  <t:assertFailed>
    <t:assertResolvesTo>
      <t:xpath>cdl:system/app:a2</t:xpath>
    </t:assertResolvesTo>
  </t:assertFailed>

-Steve
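The proposed format lends itself to a small data-driven runner: parse the test document, walk its assertion elements, and evaluate each XPath against the resolved CDL document. A minimal sketch, handling only t:assertResolvesTo and assuming a made-up t: namespace URI (no such URI has been agreed):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Hypothetical driver for the proposed t: test format; the element
// names follow the sketch above, the namespace URI is an assumption.
public class TestDocDriver {

    static final String T_NS = "http://example.org/cddlm/tests"; // assumed

    // Run every t:assertResolvesTo in a test document against a
    // resolved CDL document.
    static void runAssertions(Document testDoc, Document resolved)
            throws Exception {
        XPath xp = XPathFactory.newInstance().newXPath();
        NodeList asserts =
                testDoc.getElementsByTagNameNS(T_NS, "assertResolvesTo");
        for (int i = 0; i < asserts.getLength(); i++) {
            Element a = (Element) asserts.item(i);
            String path = text(a, "xpath");
            String expected = text(a, "value");
            String actual = xp.evaluate(path, resolved);
            if (!expected.equals(actual)) {
                throw new AssertionError(path + " => '" + actual
                        + "', expected '" + expected + "'");
            }
        }
    }

    // Text content of the first t:-namespaced child with this local name.
    static String text(Element parent, String local) {
        return parent.getElementsByTagNameNS(T_NS, local)
                     .item(0).getTextContent();
    }

    static Document parse(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        return f.newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
    }

    public static void main(String[] args) throws Exception {
        Document testDoc = parse(
              "<t:assertions xmlns:t='" + T_NS + "'>"
            + "  <t:assertResolvesTo>"
            + "    <t:xpath>/*/*[local-name()='attr']</t:xpath>"
            + "    <t:value>value</t:value>"
            + "  </t:assertResolvesTo>"
            + "</t:assertions>");
        Document resolved = parse("<system><attr>value</attr></system>");
        runAssertions(testDoc, resolved);
        System.out.println("all assertions passed");
    }
}
```

Extending this to t:assertNotResolved and t:assertFailed would follow the same pattern: dispatch on the assertion element's local name and invert or nest the check.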

Can I also add my current thinking on how the standardisation process can benefit from being test-driven. Compared to a lot of other standards we are fairly test-centric, though we have put off defining most of our tests until after the specifications were put in the pipeline. The DAWG group linked to above wants all its test documents first: all issues must come with a test document. This is something we could consider for all CDL issues; it's a bit harder for the other specifications.

I've already been circulating this doc with people in the W3C; in particular I've been giving the WS-A group a hard time for having no tests, and the TAG a hard time for letting WS-A ship without any tests. As such, I am now engaging with them on what makes good tests for WS-A. Send me your addressing traces and I will forward them to the relevant people. If we get our addresses into the formal definition of WS-A compliance, then we can be sure that at least our apps will interoperate.

-steve

Steve Loughran wrote:
Can I also add my current thinking on how the standardisation process can benefit from being test-driven.
hi,

this is an interesting article and I agree with the premise, as it makes it easier to know when a spec is implemented and to track an implementation's progress. However, I think calling it Behavior Driven Specification (BDS, similarly to BDD [1]) instead of TDS may better capture the intent and help avoid confusion with software writing: it is not really about writing tests, but about defining and asserting behaviors that are defined in a spec.

best,
alek

[1] http://daveastels.com/index.php?p=5
http://jroller.com/page/obie/20051017
-- The best way to predict the future is to invent it - Alan Kay
participants (4)
- Aleksander Slominski
- Jun Tatemura
- Milojicic, Dejan S
- Steve Loughran