
Attendees: Ayla Dantes De Souza, Stuart Schaefer, Jun Tatemura, Dejan Milojicic
Note takers: Dejan Milojicic & Steve Loughran

Interoperability update:
Dejan spoke about interactions with Hiro and Cees; this will be in progress for a while, as we learn from the IETF how they do it.

Testing:
Jun discussed his unit tests, which are independent of the Deployment API and Component Model. Stuart uses NUnit.
Steve: some interesting features of W3C testing: you know where tests are coming from, and every test has a unique ID.
Stuart took the RDF team's approach: take documents and emit them as valid output. So Stuart reads in CDL and outputs the final compilation, then compares the input and the output. RDF: input and output.
-Stuart and Ayla do this.

Proposed: test files for our content with input and output. We could machine-generate the output and manually verify that things match.
-Test cases that we have: inputs and outputs.
Take the parsed documents, the intermediate representation, and print them out to see what they look like.
Simple resolution still yields a DOM; full resolution leads to a DAG, not a tree.

Describing errors: we haven't defined how things fail, so we cannot test for it. Agreed.
There is nothing to stop additional implementation-specific testing on top.
-If every test has a unique ID, then it's a matter of declaring extra metadata about each test, such as the result document, and declaring that LAZY is not in scope. But we do need a standard way to represent a partial failure.

SmartFrog tests? They exist, but if there is to be another implementation, those tests would need pulling out. Agreement.

Stuart wants more tangible things to send, so everyone is more consistent:
-an example CDL for testing a system
-the commands to issue

Actions:
Steve to run through and revise the test plan with more material by Nov 16th.
Jun to do CDL tests.
Steve plans to do an update on Nov 16th. Stuart to follow by December 1st.
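The proposed input/output test files could work as golden-file comparisons: for each test case, run the input CDL through the resolver and compare the result against a stored, manually verified expected document. A minimal sketch of that idea follows; the file-naming convention, the trivial placeholder resolver, and the harness itself are all invented for illustration, not anything the group has defined.

```python
# Hypothetical golden-file harness for CDL input/output test cases.
# The "resolver" here is a stand-in that only normalises the XML;
# a real implementation would perform CDL resolution.
import pathlib
import tempfile
import xml.dom.minidom as minidom


def resolve(cdl_text: str) -> str:
    """Placeholder resolver: parse and re-serialise the document."""
    return minidom.parseString(cdl_text).toxml()


def run_case(input_path: pathlib.Path, expected_path: pathlib.Path) -> bool:
    """Resolve the input document and compare it, after normalising
    both sides through the same XML serialiser, with the stored
    (machine-generated, manually verified) expected output."""
    actual = minidom.parseString(resolve(input_path.read_text())).toxml()
    expected = minidom.parseString(expected_path.read_text()).toxml()
    return actual == expected


# Build one self-contained test case on the fly; in practice the
# .in/.out pairs would live in a shared test-suite directory.
with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    doc = '<cdl:cdl xmlns:cdl="urn:x-example"><host port="80"/></cdl:cdl>'
    (root / "test-001.in.cdl").write_text(doc)
    (root / "test-001.out.cdl").write_text(doc)
    print(run_case(root / "test-001.in.cdl", root / "test-001.out.cdl"))
```

Because every test file pair carries a unique ID in its name, extra per-test metadata (expected failure mode, LAZY out of scope, etc.) could later be attached by ID, as suggested above.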