Good progress to report on TDML and on creating tools that let us do interoperability testing more easily and push toward our demonstration of two interoperable implementations.


I am now able to run TDML test suites against IBM DFDL from the command line / sbt.


The ibmCrossTestRig is not part of Daffodil (because it links against IBM DFDL), but it is open source under the Apache License v2.0, and it is currently in review at:


https://github.com/OpenDFDL/ibmDFDLCrossTester/pull/1


This may not be its permanent home, but for now that's where it is. There's a README.txt describing how to set it up. You must obtain IBM DFDL yourself and put its jars in lib, etc.


This requires a Daffodil 2.3.0 dev-branch snapshot build, because the guts of the TDML runner live in one of Daffodil's jars, which in turn depends on Daffodil's I/O and other libraries for synthesizing and comparing test data, etc.
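For anyone wiring this up by hand, here is a minimal sketch of what the dependency side of such a build.sbt could look like, assuming you build the Daffodil dev branch locally and publish it with sbt publishLocal. The daffodil artifact name below is a guess for illustration only; the rig's actual build.sbt and README are the authority.

    // Illustrative sketch only. After building the Daffodil 2.3.0 dev branch and
    // publishing it locally (sbt publishLocal), a consuming build.sbt could pull
    // the TDML runner in roughly like this. The daffodil artifact name is
    // hypothetical; check the rig's build.sbt for the real coordinates.

    // Needed only if resolving a published snapshot rather than a publishLocal build.
    resolvers += "Apache Snapshots" at
      "https://repository.apache.org/content/repositories/snapshots/"

    libraryDependencies ++= Seq(
      "org.apache.daffodil" %% "daffodil-tdml" % "2.3.0-SNAPSHOT" % Test, // hypothetical name
      "junit" % "junit" % "4.12" % Test,
      "com.novocode" % "junit-interface" % "0.11" % Test
    )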


In fact, as of this writing, the Daffodil changes that support this test rig are themselves still in PR review at


https://github.com/apache/incubator-daffodil/pull/140


But we hope to have them merged to the main 2.3.0 development branch in about a week, and in an official release within a month or so.


Individual TDML parser/unparser test cases, or whole TDML test suites, can be marked with which implementations they should be run against.
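To make that concrete, here is a rough sketch of the usual Daffodil pattern for driving a TDML file from a JUnit test via the TDML Runner. The suite and test names are made up for illustration, and the implementation-marking attributes shown in the comment reflect the PR that is still under review, so they could change before release.

    // Sketch of the standard Daffodil pattern for running TDML tests from JUnit.
    // In the TDML file itself, the implementation marking (as of the PR under
    // review -- attribute names could still change) looks roughly like:
    //   <tdml:testSuite ... defaultImplementations="daffodil ibm">
    //     <tdml:parserTestCase name="someTest" implementations="daffodil" ...>
    import org.apache.daffodil.tdml.Runner
    import org.junit.{ AfterClass, Test }

    object TestCrossSuite {
      // Resource directory and TDML file name are illustrative.
      val runner = Runner("/some/resource/dir/", "someSuite.tdml")

      @AfterClass def shutDown(): Unit = { runner.reset }
    }

    class TestCrossSuite {
      import TestCrossSuite._

      // Each JUnit test just names one TDML test case to run.
      @Test def test_someTest(): Unit = { runner.runOneTest("someTest") }
    }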


A few tests are built into the test rig itself. These drive the samples that ship with IBM DFDL, run from TDML as tests.


The tests in Daffodil's daffodil-test-ibm1 module are the first target for cross-testing. You modify just one file (build.sbt) to point the test rig at the suites of tests to run, roughly as sketched below.
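One plausible shape for that build.sbt change is below. This is purely an assumption on my part: the artifact coordinates, and whether a tests-classifier jar is even published for that module, are guesses, so treat it as a sketch and follow the rig's README instead.

    // Hypothetical sketch: pull in daffodil-test-ibm1's test jar so its TDML
    // suites and JUnit drivers land on the rig's test classpath, and/or add a
    // local directory of TDML files. Exact coordinates are a guess.
    libraryDependencies +=
      "org.apache.daffodil" %% "daffodil-test-ibm1" % "2.3.0-SNAPSHOT" % Test classifier "tests"

    // Local TDML suites can also be put on the test classpath directly.
    Test / unmanagedResourceDirectories += baseDirectory.value / "tdml"

After that, sbt test runs whatever suites are on the test classpath and prints a JUnit summary like the one shown below.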


Most of these tests are set to run on both Daffodil and IBM DFDL, but some are Daffodil-only because they use computed values or hidden groups. Those tests (18 of them) are skipped when running against IBM DFDL.


This small test suite includes most of the 70+ tests that were originally authored by IBM and given to the "world" long ago as part of the DFDL working group's promotion of TDML. There are also around 30 tests that were part of the original Univ. of Illinois NCSA Daffodil project when it was first started.


We don't currently know whether these tests are still "as contributed" or whether, over the years, we've tweaked them to get them to work on Daffodil. In any case, they're a useful set to start from.


The current state of portability of this first small suite of 112 tests is captured by this JUnit summary:

[error] Failed: Total 112, Failed 15, Errors 0, Passed 79, Skipped 18

That is, of the 112 tests, 79 passed, 18 are skipped, and 15 failed (i.e., they are not portable, though we think they should be). We've already created JIRA tickets for some of these non-portability issues. Each needs analysis to determine what the proper behavior is and which implementation is correct, or whether there really is ambiguity/laxity and both are possibly correct.


https://issues.apache.org/jira/browse/DAFFODIL-2018

https://issues.apache.org/jira/browse/DAFFODIL-2017


The above include separator/terminator issues, and a number of issues with date/time formats: when "Z" vs. "+00:00" is to be used, when fractional seconds are supposed to appear in the infoset, etc.
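As a side note on the "Z" vs. "+00:00" question: both renderings describe the same instant, and which one a formatter emits depends on the pattern letters used. The small java.time example below (not Daffodil's own ICU-based formatting, just an illustration) shows the distinction the failing tests are tripping over.

    // Illustration only: the same UTC time can be rendered with "Z" or "+00:00"
    // depending on which offset pattern letters the formatter uses. This uses
    // java.time rather than Daffodil's ICU-based machinery, but the ambiguity
    // is the same flavor.
    import java.time.{ OffsetDateTime, ZoneOffset }
    import java.time.format.DateTimeFormatter

    object ZoneRenderingDemo {
      def main(args: Array[String]): Unit = {
        val t = OffsetDateTime.of(2019, 2, 1, 12, 0, 0, 0, ZoneOffset.UTC)

        // "XXX" renders a zero offset as the literal "Z"
        println(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssXXX").format(t))
        // prints 2019-02-01T12:00:00Z

        // "xxx" always renders a numeric offset, so zero becomes "+00:00"
        println(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssxxx").format(t))
        // prints 2019-02-01T12:00:00+00:00
      }
    }

Which rendering belongs in the infoset, and when, is exactly the kind of question these tickets need to pin down.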


We also already have a test that failed on IBM DFDL where IBM DFDL's behavior is in fact correct: Daffodil was making the test work by inserting coercions that it shouldn't have.


https://issues.apache.org/jira/browse/DAFFODIL-2021


So we're already starting to see some initial benefits from the cross-testing.



Mike Beckerle | OGF DFDL Workgroup Co-Chair | Tresys Technology | www.tresys.com
Please note: Contributions to the DFDL Workgroup's email discussions are subject to the OGF Intellectual Property Policy