
On Mar 18, Donal K. Fellows loaded a tape reading:
We had discussed using xsd:integer, and this implies that any use of this type must just define the "base" units that are being counted. The only problem in our base terms is (cpu-)time, where I wonder if we are better off using a floating-point type to allow the base units to be seconds while still allowing fractional-second specification. Choosing some specific fractional second as the base unit seems unappealing to me.
If we provide a floating-point/fractional version, I think there needs to be an optional attribute on the exact element to specify a precision or epsilon value for equality tests, e.g. <exact jsdl:precision="0.001">3.1415927...</exact>
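As a rough schema sketch of that idea (the xsd:double content type and the attribute details are illustrative only, not a worked-out proposal):

  <xsd:element name="exact">
    <xsd:complexType>
      <xsd:simpleContent>
        <!-- content type and attribute form are placeholders for illustration -->
        <xsd:extension base="xsd:double">
          <xsd:attribute name="precision" type="xsd:double" use="optional"/>
        </xsd:extension>
      </xsd:simpleContent>
    </xsd:complexType>
  </xsd:element>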
Actually, the correct thing to do is for the caller to always use bounded intervals with floats unless they really know exactly what they are after, since some floats (e.g. 1.25) are actually exactly representable in IEEE arithmetic. OK, it's punting the problem to the document creator (the JSDL processor just checks what it is told), but that's the right thing to do in my experience with processing floats.
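For instance (element names as in the range-value structure discussed below; the values are purely illustrative), a caller who means "around 0.1" should say

  <jsdl:lowerBound>0.099</jsdl:lowerBound>  <!-- illustrative interval; 0.1 is not exact in binary -->
  <jsdl:upperBound>0.101</jsdl:upperBound>

whereas

  <jsdl:exact>1.25</jsdl:exact>  <!-- 1.25 is exactly representable -->

is fine, because the caller knows the value is exact.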
Well, if we're going to support floats, my point was moot. My only question is whether float/integer is a choice made by the schema author for a term or a runtime choice made by the document creator. I'd rather see two versions of our type, e.g. jsdl:integerRangeValueType and jsdl:floatingRangeValueType, and a term element definition has to pick one. I could see offering more variants while we are at it, to capture non-negative types etc. Because I do not feel confident I understand all future resource ontologies, I am not comfortable saying that resource-selection metrics never use negative values. So I think we need to offer signed integer (and float) in a core set of types.
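That is, each term element definition would commit to one variant up front, something like (the term names here are just placeholders, not proposed terms):

  <!-- hypothetical term declarations, for illustration only -->
  <xsd:element name="ProcessorCount" type="jsdl:integerRangeValueType"/>
  <xsd:element name="CPUTimeSeconds" type="jsdl:floatingRangeValueType"/>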
Ick. That really makes handling floats much nastier! There's no need to do this; xsd:double will be handled right (and tooled nicely) as long as callers don't have unrealistic expectations of float math. (OK, many people do have those unrealistic expectations, but that's not our fault and we can't fix the world.)
I'm happy to have floating point variants.
A minor note is that the elements have to appear in the sequence order, which I argue is a good thing for machine-machine communication as the parse tree will yield three monomorphic arrays of values with clear meanings, rather than one polymorphic array that the consumer has to traverse.
I'm not sure I'd enforce that, but as we don't need to define an algorithm for minimization or testing equivalence of range types, I'd just not bother. Say that doing it is recommended, not required. :^)
Donal.
Well, it is actually a matter of having to write a more complicated schema to support reordering! The basic one Stephen and I discussed is (adjusted for my latest proposal):

  complexType
    sequence
      lowerBound?
      upperBound?
      exact*
      range*

which exactly captures the cardinality requirements associated with the intended evaluation semantics. On the other hand, the mixed-order one (which results in nastier parse trees) is something more like:

  complexType
    sequence
      lowerBound?
      upperBound?
      choice*
        exact
        range

and it is even harder (or impossible?) to allow the two boundary elements to be reordered without relaxing the [0,1] cardinality constraint.
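Spelled out as XSD, the two shapes are roughly as follows (a sketch only: the nested range element's type and the xsd:double content are placeholders, and the minOccurs/maxOccurs values just transcribe the ?/* cardinalities above):

  <!-- ordered form: cardinalities fall out directly -->
  <xsd:complexType name="floatingRangeValueType">
    <xsd:sequence>
      <xsd:element name="lowerBound" type="xsd:double" minOccurs="0"/>
      <xsd:element name="upperBound" type="xsd:double" minOccurs="0"/>
      <xsd:element name="exact" type="xsd:double" minOccurs="0" maxOccurs="unbounded"/>
      <xsd:element name="range" type="jsdl:rangeType" minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>

versus the version that lets exact and range interleave:

  <!-- mixed-order form: exact and range may interleave, but the bounds still cannot -->
  <xsd:complexType name="floatingRangeValueType">
    <xsd:sequence>
      <xsd:element name="lowerBound" type="xsd:double" minOccurs="0"/>
      <xsd:element name="upperBound" type="xsd:double" minOccurs="0"/>
      <xsd:choice minOccurs="0" maxOccurs="unbounded">
        <xsd:element name="exact" type="xsd:double"/>
        <xsd:element name="range" type="jsdl:rangeType"/>
      </xsd:choice>
    </xsd:sequence>
  </xsd:complexType>

where moving the two bounds into the choice as well would give up their [0,1] cardinality.

karl

--
Karl Czajkowski
karlcz@univa.com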