
Last week's mail conversation drifted from XML syntax for NML relations to the use of namespaces in NML messages. An important difference in view was identified. Jason assumed that a single NML message would only contain one namespace. I assumed that a single NML message could contain multiple namespaces. While a few examples crossed the list, they were probably not very relevant or convincing. So I'll explain a bit better how I envision the different namespaces:
- core topology concepts (link, node, port, adaptation, ...)
- Ethernet-specific topology concepts (VLANs, segment size, ...)
- IP-specific topology concepts (IPv4/IPv6 address, routing table, ...)
- geography enhancement (geo location)
and I potentially see a mix & match with other applications:
- path finding, topology aggregation, domain control (NSI, provisioning)
- monitoring, performance (NMC)
I expected and still expect each of the above to go in a different namespace. The above leads to three scenarios where it may be prudent to mix namespaces in a single message (a small sketch follows below):
- core topology with technology-specific (Ethernet, IP, ...) namespaces. This is probably the weakest use case, as it is possible to add the core topology concepts to each technology-specific namespace (e.g. using chameleon namespaces)
- topology namespace with geo namespace
- topology namespace with an application-specific namespace (e.g. topology + NSI for path finding, or topology + NMC for monitoring).
Regards, Freek
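[For illustration only, here is a minimal sketch of the kind of mixing described in the first two scenarios. The namespace URIs and the eth:/geo: element and attribute names are placeholders, not agreed NML syntax.]

<nml:topology xmlns:nml="http://example.org/nml/base#"
              xmlns:eth="http://example.org/nml/ethernet#"
              xmlns:geo="http://example.org/nml/geo#">
  <nml:node id="urn:example:node:A">
    <geo:location lat="52.35" lon="4.95"/>
    <nml:port id="urn:example:node:A:port:1">
      <eth:vlan>312</eth:vlan>
    </nml:port>
  </nml:node>
</nml:topology>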

Hi Freek; Comments inline: On 8/22/11 6:24 PM, thus spake Freek Dijkstra:
Last week's mail conversation drifted from XML syntax for NML relations to the use of namespaces in NML messages.
An important difference in view was identified. Jason assumed that a single NML message would only contain one namespace.
I never said nor implied this in any way - in last week's email exchange or in prior conversations; please do not try to summarize my opinions for me. A web service message contains lots of elements with potentially different namespaces - this is how it is in perfSONAR today and how I would envision a set of topology information to be encoded in NML. See examples from services in action: https://svn.internet2.edu/svn/perfSONAR-PS/trunk/perfSONAR_PS-SNMPMA/etc/req... A rich vocabulary of elements from these namespaces is the only way to foster extension and handle future use cases for this work. I envision that some network (and by extension, some exchange that would be witnessed by an NML-capable service) may have different layers represented, and thus different namespaces in use to describe links, ports, and other entities using the network abstractions at these layers.
I assumed that a single NML message could contain multiple namespaces.
While a few examples crossed the list, they were probably not very relevant or convincing. So I'll explain a bit better how I envision the different namespaces: - core topology concepts (link, node, port, adaptation, ...)
Correct, living in a namespace similar to this: http://ogf.org/schema/network/topology/base/20070828/ Note this is something we use today; I expect the name will change.
- Ethernet-specific topology concepts (VLANs, segment size, ...)
e.g. http://ogf.org/schema/network/topology/l2/20070828/
- IP-specific topology concepts (IPv4/IPv6 address, routing table, ...)
e.g. http://ogf.org/schema/network/topology/l3/20070828/
- geography enhancement (geo location)
I am not sure why this would be viewed as a different topology extension akin to the network layers. Presumably this enhancement would make more sense being built into the base, since most 'physical' things (e.g. a node, port) would need it.
and I potentially see a mix & match with other applications: - path finding, topology aggregation, domain control (NSI, provisioning) - monitoring, performance (NMC)
I expected and still expect each of the above to go in a different namespace.
The above leads to three scenarios where it may be prudent to mix namespaces in a single message:
- core topology with technology-specific (Ethernet, IP, ...) namespaces. This is probably the weakest use case, as it is possible to add the core topology concepts to each technology-specific namespace (e.g. using chameleon namespaces)
- topology namespace with geo namespace
- topology namespace with an application-specific namespace (e.g. topology + NSI for path finding, or topology + NMC for monitoring).
You describe all of the things that I (and others in SLC) were arguing for all along - the ability to extend the base concepts into new use cases through the use of namespace extensions. I will make one quibble on the above - I would argue it's important to make the 'base' set minimal, e.g. 'ethernet', 'ip', etc. are not *in* the base, these are extensions *of* the base. I am still not sure of your GEO use case, so perhaps you should clarify this and the others with some examples. The base NML concepts will most likely not be enough for some use cases (e.g. monitoring or circuit creation). It is important for these concepts to have the mechanisms available to extend the base concepts, yet still have some way to convert (downcast, etc.) to something that can be understood by other implementations. The prior art that is in use for the Control Plane schemas and even some perfSONAR topology representations is a good example of all of this. Thanks; -jason
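[To make the "extensions *of* the base" and downcast ideas concrete, a hypothetical sketch; the URIs and the eth: names are invented, not an agreed NML schema. The extension vocabulary reuses the base element name under its own namespace, so an implementation that only knows the base can downcast eth:port to nml:port and keep working with it.]

<!-- base form -->
<nml:port xmlns:nml="http://example.org/nml/base#" id="urn:example:port:1"/>

<!-- Ethernet extension of the same concept; a base-only consumer
     treats it as a plain nml:port and ignores the extra attributes -->
<eth:port xmlns:eth="http://example.org/nml/ethernet#"
          id="urn:example:port:1" vlan="312"/>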

Jason Zurawski wrote:
Last week's mail conversation drifted from XML syntax for NML relations to the use of namespaces in NML messages.
An important difference in view was identified. Jason assumed that a single NML message would only contain one namespace.
I never said nor implied this in any way
Sorry if you feel I jumped to conclusions. You indeed only wrote:
to my knowledge a parser can only verify against a single schema at any given time.
Perhaps we still need to take a few steps back. Do you think that an NML message may contain multiple namespaces? Do you agree with the following requirements I wrote earlier:
1. Be extensible
2. It should be possible to create a specific validator for each relation type.
3. Parsers should be able to recognise an unknown relation type as a relation subclass (rather than simply an unknown element); a small sketch follows below.
If you have time to phone today, that would be great. Regards, Freek
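[A small hypothetical instance of what requirement 3 is after, using the relation form discussed earlier; the namespace URI, the type value, and the idRef attribute are invented. The intent is that a parser that has never seen this type can still treat the element as a relation rather than reject it.]

<nml:relation xmlns:nml="http://example.org/nml/base#" type="futureRelationType">
  <nml:link idRef="urn:example:link:A"/>
  <nml:link idRef="urn:example:link:B"/>
</nml:relation>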

Hi Freek; Answers inline: On 8/23/11 5:36 AM, thus spake Freek Dijkstra:
Jason Zurawski wrote:
Last week's mail conversation drifted from XML syntax for NML relations to the use of namespaces in NML messages.
An important difference in view was identified. Jason assumed that a single NML message would only contain one namespace.
I never said nor implied this in any way
Sorry if you feel I jumped to conclusions. You indeed only wrote:
to my knowledge a parser can only verify against a single schema at any given time.
Perhaps we still need to take a few steps back.
Do you think that an NML message may contain multiple namespaces?
Do you agree with the following requirements I wrote earlier: 1. Be extensible 2. It should be possible to create a specific validator for each relation type. 3. Parsers should be able to recognise an unknown relation type as a relation subclass (rather than simply an unknown element)
If you have time to phone today, that would be great.
You are conflating several concepts, and using them interchangeably. I believe this is what is bringing in confusion. To be clear, I am going to ask once again that you please (*please*) attempt to read some of the prior art from NMC/perfSONAR. The reason I keep bringing this up is twofold: a) the examples are short, and easy to understand. Instead of going around and around on email we could make up a lot of ground starting from known examples. b) it is working in practice today, and mimics the needs of NML in the extensibility space. Consider this "schema file": https://svn.internet2.edu/svn/perfSONAR-PS/trunk/perfSONAR_PS-SNMPMA/etc/sch... It represents the construction of one type of message (e.g. the "SetupDataRequest" message, specifically for utilization data). Note some interesting things about it:
- It represents a single 'schema', e.g. it is one file that contains the definitions to verify one specific message type only.
- It incorporates several other 'schema' definitions through the method of inclusion (e.g. 'include xxx { ... }').
- It features *several* namespaces, and elements in this same 'schema' file (or the other files) may use these namespaces.
- Example instances that can be verified against this schema can be found here:
https://svn.internet2.edu/svn/perfSONAR-PS/trunk/perfSONAR_PS-SNMPMA/etc/req...
https://svn.internet2.edu/svn/perfSONAR-PS/trunk/perfSONAR_PS-SNMPMA/etc/req...
https://svn.internet2.edu/svn/perfSONAR-PS/trunk/perfSONAR_PS-SNMPMA/etc/req...
https://svn.internet2.edu/svn/perfSONAR-PS/trunk/perfSONAR_PS-SNMPMA/etc/req...
https://svn.internet2.edu/svn/perfSONAR-PS/trunk/perfSONAR_PS-SNMPMA/etc/req...
To address your concerns above: a parser, and when I say parser I am imagining something like libxml, is allowed to verify an instance against one "schema file" at a time. This schema file may feature 'includes', thus expanding the available definition space (sketched below), but there are no options (at least in my experience) that allow the programmer to give the parser some set of files and let the parser know that *"any"* of the possible files may contain the correct definition. In my opinion this would really defeat the purpose of syntactic checking if there were multiple options given. If we are going to play the 'cut and paste' game using prior statements, here is the entire context of what I said regarding this topic, so that you can see that this is what I said before as well:
On 8/16/11 4:54 PM, thus spake Jason Zurawski: [snip]
to my knowledge a parser can only verify against a single schema at any given time.
To my knowledge it is possible for a parser to validate against multiple schema at the same time.
In my experience (libxml, some older Java libraries) a single schema is loaded into the parser. It is possible to reference schema from each other, e.g. in relax:
include "something.rnc" { # include things ... } Trying to validate the same instance against different schemata simultaneously does not seem like a very fruitful exercise for a parser, unless there are multiple parsing passes being applied. If the latter is true, I would argue that more time is being spent in syntax checking than in the real guts of semantic evaluation.
To address your final concerns:
1. Be extensible
Yes, and the methods of NMC/perfSONAR we have been talking about all along enable this.
2. It should be possible to create a specific validator for each relation type.
Schema is schema, you can construct whatever type of validation system you wish to implement. I would question how far you would want to take this exercise because there are tradeoffs that sacrifice other desirable qualities. My statement from the prior conversation still stands - if you wish to do strict syntactic validation, to the point of trying to use the parser as a semantic analyzer as well, you give up a portion of #1; this is the tradeoff that must be considered. For example:

a) <relation type="something"> <link /> <link /> </relation>

vs.

b) <somethingrelation> <link /> <link /> </somethingrelation>

vs.

c) <something:relation type="something"> <link /> <link /> </something:relation>

I would argue that a) is our base, it is generic and minimal. It allows the construction of any number of relationship types that are required for most situations. Someone who needs something different/special, that cannot be done in the base, has 2 choices: b) or c). The b) option is the creation of a new element, something that *does not* derive from the base, and therefore cannot be cast into something different. This is not extension. For the simultaneous strict syntactic/semantic checking done by the parser alone, this allows someone to claim that the 'somethingrelation' is very much different from the 'relation', and perhaps this is what they need. The c) option is an extension namespace of the a) element. There is the opportunity to try and downcast this into the original element and the ability to add 'new' things that were not thought of in the base. Syntactic checking has the ability to add *some* semantics in this case, perhaps not as much as b). This is much more extensible, and I would claim desirable, for NML. It is what is used in NMC/pS today.
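[A concrete, hypothetical look at the downcast that option c) permits; the prefix and namespace URI are placeholders. Because the extension keeps the base element name, dropping the extension namespace leaves a well-formed base relation.]

<!-- c) extension form -->
<something:relation xmlns:something="http://example.org/nml/something#"
                    type="something">
  <link />
  <link />
</something:relation>

<!-- what a base-only consumer is left with after downcasting -->
<relation type="something">
  <link />
  <link />
</relation>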
3. Parsers should be able to recognise an unknown relation type as a relation subclass (rather than simply an unknown element)
Every parser is different in this respect, and I am not going to be able to give you a concrete answer. This is the exact reason why perfSONAR does not do strict syntactic checking at the parser level, and favors the use of semantic checks in the service itself. Relying on a strict schema that mandates syntax does not foster extensibility. There are two outcomes when an 'unknown' element comes in: a) Strict syntactic checking in most cases will reject the entire instance without comment. E.g. you have constructed your schema, and the parser knows of some number of elements, each having a possible namespace (or namespaces, depending on how the schema is constructed). If an unknown element comes in, many parsers will simply reject the entire document. Certain types of event-driven parsers may be able to panic-parse around something like this, but I do not have much experience with them. I would estimate more time will be spent constructing a special parser in this case just to work with the strict schema than is healthy. b) Semantic checking, what we have the most experience with, takes all documents as is, does some combination of syntactic/semantic checking within the service itself, and can be made as permissive as required for certain situations. E.g. an unknown namespace on a common element (e.g. relation) can be rejected, or it can be downcast into the base schema - we normally do the latter. Hope this all helps; -jason

FYI, for those who follow our mail flood, replace the URLs as follows to access the files without a password:
https://svn.internet2.edu/svn/perfSONAR-PS/trunk/perfSONAR_PS-SNMPMA/etc/sch... http://anonsvn.internet2.edu/svn/perfSONAR-PS/trunk/perfSONAR_PS-SNMPMA/etc/...
etc. Freek

Hi Jason, I don't claim to be an NM expert like yourself, but have of course read the NM message specification and the examples in the PerfSONAR-PS code. Jason Zurawski wrote:
For example:
a) <relation type="something"> <link /> <link /> </relation>
vs.
b) <somethingrelation> <link /> <link /> </somethingrelation>
vs
c) <something:relation type="something"> <link /> <link /> </something:relation>
vs d)

<relations>
  <somethingrelation>
    <link />
    <link />
  </somethingrelation>
</relations>

(where every direct child element of <relations> MUST be a subclass of the base class "relation").
I would argue that a) is our base, it is generic and minimal.
I'm well aware that this is in use in NMC, and served 2 out of the 3 requirements, which is pretty good for most purposes.
Schema is schema, you can construct whatever type of validation system you wish to implement. I would question how far you would want to take this exercise because there are tradeoffs that sacrifice other desirable qualities. My statement from prior conversation still stands - if you wish to do strict syntactic validation, to the point of trying to use the parser as a semantic analyzer as well, you give up a portion of #1; this is the tradeoff that must be considered.
All I was trying to suggest is that it occurs to me that d) would serve 3 out of the 3 requirements, and I was asking if someone on the list saw any serious problems with it. The only problems I have heard so far were that it (1) deviated from the solution in use for NMC, and (2) . Both arguments have their merits, but have not convinced me to choose a) over d). However, you just posed a new argument:
The b) option is the creation of a new element, something that *does not* derive from the base, and therefore cannot be cast into something different.
I must disagree. In the case of b) and d) above, the schema definition file should define "somethingrelation" as a subclass of "relation". (and I presume that they are). In that case, "somethingrelation" is derived from the base.
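[One possible sketch of that in RNC; the pattern names are invented, and a shared content pattern is the closest XML-level stand-in for "subclass" here. Note that the choice inside <relations> still has to be edited by hand for every new relation type, which is the concern raised in the replies below.]

RelationContent   = element link { empty }+
BaseRelation      = element relation { attribute type { text }?, RelationContent }
SomethingRelation = element somethingrelation { RelationContent }

# d)-style wrapper: this choice must be extended for every new "subclass"
Relations = element relations { ( BaseRelation | SomethingRelation )+ }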
The c) option is an extension namespace of the a) element.
In the above examples a), b) and c), I have no preference as to which namespace the elements belong to. The issue of whether the type should be a value in an attribute (a), or in the name of the element (b and d) seems orthogonal to the choice of namespace (b versus c).
3. Parsers should be able to recognise an unknown relation type as a relation subclass (rather then simply an unknown element)
Every parser is different in this respect, and I am not going to be able to give you a concrete answer.
[... long and well written argument skipped ...]
Hope this all helps;
Thanks for your long answer, it was helpful. It occurs to me that "strict syntactic checking" may entail two different concepts: a very strict schema (where any message with unknown elements is invalid and thus rejected by parsers), and a detailed schema which does list all known details, but with provisions for unknown elements (where a parser is able to parse the elements it is aware of, but ignores the elements it does not know). I'm much in favour of the second concept; you seem to argue about the first concept (which I personally (also?) do not like) in your email. To bring in the NM analogy, I very much like the RNC files in the perfSONAR-PS code, which are detailed and yet still flexible. I do not like the (unused?) WSDL files in the MDM code, which only define the message type, but not what it looks like. Compare the RNC files from your previous email to http://anonsvn.internet2.edu/svn/nmwg/trunk/nmwg/perl/server/NMWG.wsdl I would even favour something which is more detailed than the RNC files (and I agree that this is just a personal preference which we have already debated enough :) ).
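[For the second concept, schema languages do offer an extension-point idiom. A minimal RNC sketch, with an invented namespace URI and names: known elements are spelled out in detail, while elements from other namespaces are tolerated rather than rejected.]

namespace nml = "http://example.org/nml/base#"   # placeholder URI

# wildcard for elements outside the nml namespace (an "extension point")
AnyOther   = element * - nml:* { ( attribute * { text } | text | AnyElement )* }
AnyElement = element *          { ( attribute * { text } | text | AnyElement )* }

# a known element, spelled out in detail, but tolerating unknown extensions
NMLPort = element nml:port {
    attribute id { text },
    AnyOther*
}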
If an unknown element comes in, many parsers will [...]
Your original sentence stated "reject the whole message". The bottom line is that we should DEFINE in the NML specification how parsers should behave in that case. I think we should add this item on the todo list for this WG and solicit input from any contributor on the list to propose such specification. Jason, would you agree this is a valid todo item for the group? Regards, Freek

Freek Dijkstra wrote:
All I was trying to suggest is that it occurs to me that d) would serve 3 out of the 3 requirements, and I was asking if someone on the list saw any serious problems with it. The only problems I have heard so far were that it (1) deviated from the solution in use for NMC, and (2)
Sorry, the following line was missing in my previous email: and (2) added one additional level of nesting to the XML messages. Freek

Hi Freek; Answers inline: On 8/23/11 11:15 AM, thus spake Freek Dijkstra:
Hi Jason,
I don't claim to be an NM expert like yourself, but have of course read the NM message specification and the examples in the PerfSONAR-PS code.
Jason Zurawski wrote:
For example:
a) <relation type="something"> <link /> <link /> </relation>
vs.
b) <somethingrelation> <link /> <link /> </somethingrelation>
vs
c) <something:relation type="something"> <link /> <link /> </something:relation>
vs
d)
<relations>
  <somethingrelation>
    <link />
    <link />
  </somethingrelation>
</relations>

(where every direct child element of <relations> MUST be a subclass of the base class "relation").
I fail to understand why this is better than using an alternate namespace on an existing element from the base. It is unclear to me how you propose to facilitate this 'subclass' idea using the available tools and constructs of XML. You have introduced two new elements: 'relations' and 'somethingrelation', and neither has any relationship to the base class that I am able to tell. If you are proposing that both of these *be* the base class, then my argument is that you will need to enumerate all possible children of 'relations' beforehand, otherwise there is no longer any expected extensibility. In my opinion, all your modifications have done is add complexity (e.g. now there is a spurious new 'level' by requiring that people use 'relations') and a notion of 'subclassing' that still cannot be enforced via syntactic means. It is not clear to me how you intend to enforce 'somethingrelation' having a relationship to 'relation' with just XML; these are completely different elements and XML isn't fully featured enough to allow what you want to do. This last point destroys the aspect of extension, and would force new designs to modify the NML base each time a new relationship is constructed.
I would argue that a) is our base, it is generic and minimal.
I'm well aware that this is in use in NMC, and served 2 out of the 3 requirements, which is pretty good for most purposes.
Schema is schema, you can construct whatever type of validation system you wish to implement. I would question how far you would want to take this exercise because there are tradeoffs that sacrifice other desirable qualities. My statement from prior conversation still stands - if you wish to do strict syntactic validation, to the point of trying to use the parser as a semantic analyzer as well, you give up a portion of #1; this is the tradeoff that must be considered.
All I was trying to suggest is that it occurs to me that d) would serve 3 out of the 3 requirements,
See my argument above; based on your proposal it is not clear to me how you intend to foster extension with d). You are introducing elements that have no relationship to each other. If I am to go on what you have provided, I am left to think that you are destroying the ability to extend by creating single-use elements that cannot be extended. You get strict validation, but all use cases for the relationships must be known by the base. Perhaps your example makes perfect sense to you, but I am not seeing any clear benefit.
and I was asking if someone on the list saw any serious problems with it. The only problems I have heard so far were that it (1) deviated from the solution in use for NMC, and (2) added one additional level of nesting to the XML messages. Both arguments have their merits, but have not convinced me to choose a) over d).
However, you just pose a new argument:
The b) option is the creation of a new element, something that *does not* derive from the base, and therefore cannot be cast into something different.
I must disagree.
In the case of b) and d) above, the schema definition file should define "somethingrelation" as a subclass of "relation". (and I presume that they are). In that case, "somethingrelation" is derived from the base.
Please provide an example of how you intend to do this in either RNC or XML Schema. Please be complete in your example so we can critique it; if you need help in using MSV/Jing/Trang, see this README: http://anonsvn.internet2.edu/svn/nmwg/trunk/nmwg/schema/README.txt My statement on this matter is born out of experience, and I am happy to admit my faults and errors if you have some proof of operational soundness in doing this approach.
The c) option is an extension namespace of the a) element.
In the above examples a), b) and c), I have no preference as to which namespace the elements belong to.
The issue of whether the type should be a value in an attribute (a), or in the name of the element (b and d) seems orthogonal to the choice of namespace (b versus c).
I am not following your argument here at all. I believe there is still a very serious disconnect between us and perhaps this is where it comes from. Based on your statements above, you want to allow the concept of subclassing for the 'relation' element, and you propose to do this by allowing 'somethingrelation' and 'somethingrelation2', etc. to have this property. I am claiming it does not work this way, and that the feature you wish to achieve (subclassing) must be done using the alternative namespaces I have been describing for more than a month. To summarize:
Base = nml:relation
Something subclass = something:relation
Something 2 subclass = something2:relation
The elements are the same, the namespaces are different. If you want to call them 'subclasses', so be it, but recall that we are not dealing with a programming language here. We are dealing with a markup language that probably was *never* intended to do all of the fully featured things that NMC or NML wish to do. I am well aware of your dislike of the 'type' attribute, you clearly stated this in SLC, and in most of these emails. Whatever your reasons are, I am only going to note that your attempt to defeat it using the current approach of adding new elements is not working to convince me. I see clear extensibility problems, and I will continue to tell you about them each time you bring them up.
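[Written out as instances (the URIs are placeholders), the summary above looks like this; the element name stays 'relation' throughout, only the namespace changes.]

<nml:relation        xmlns:nml="http://example.org/nml/base#" />
<something:relation  xmlns:something="http://example.org/nml/something#" />
<something2:relation xmlns:something2="http://example.org/nml/something2#" />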
3. Parsers should be able to recognise an unknown relation type as a relation subclass (rather than simply an unknown element)
Every parser is different in this respect, and I am not going to be able to give you a concrete answer.
[... long and well written argument skipped ...]
Hope this all helps;
Thanks for your long answer, it was helpful.
It occurs to me that an "strict syntactic checking" may entail two different concepts: a very strict schema (where any message with unknown elements is invalid and thus rejected by parsers) and a detailed scheme which does list all known details (but with provisions for unknown elemet (where parsers are able to parse the elements it is aware of, but ignore the elements it does not know).
I'm much in favour of the second concept; you seem to argue about the first concept (which I personally (also?) do not like) in your email.
You are entitled to do whatever you believe to be the correct answer. This group does not need to follow the NMC method if you feel it is fatally flawed, and I will still encourage you to start fully enumerating your proposed new approach beyond simple snippets of XML.
To bring in the NM analogy, I very much like the RNC files in the perfSONAR-PS code, which are detailed and yet still flexible. I do not like the (unused?) WSDL files in the MDM code, which only define the message type, but not what it looks like.
Compare the RNC files from your previous email to http://anonsvn.internet2.edu/svn/nmwg/trunk/nmwg/perl/server/NMWG.wsdl
I am going to stop you there and note this is more than 7 years old, represents a demo service that is no longer in development, and is not maintained by anyone. It is not the best thing to use for a comparison.
I would even favour something which is more detailed than the RNC files (and I agree that this is just a personal preference which we have already debated enough :) ).
If an unknown element comes in, many parsers will [...]
Your original sentence stated "reject the whole message". The bottom line is that we should DEFINE in the NML specification how parsers should behave in that case.
I think we should add this item on the todo list for this WG and solicit input from any contributor on the list to propose such specification.
Jason, would you agree this is a valid todo item for the group?
I do not agree, because crafting a new parser seems to be unnecessary busy work without any clear advantage for the group. This is an effort to create a standard, and I would think that incorporating existing technology when possible is a positive step. Given that, and speaking for my time requirements only - I cannot contribute any help to do this. If you still feel this is the direction you wish to take, I look forward to reviewing what you are able to produce as an alternative approach to what is on the table right now. Thanks; -jason

Stepping up a level here, what is the benefit of subclassing a relation? They're simple elements, just a type and pointers to one or more other elements. How would the subclass improve things? You could only very slightly change semantics while retaining backwards compatibility with the base namespace (at most 1 element, more than 1, etc.). I see it as similar to subclassing 'name' or 'description' or whatever. Cheers, Aaron

Hi Jason, We indeed seem to miscommunicate.
If an unknown element comes in, many parsers will [...]
Your original sentence stated "reject the whole message". The bottom line is that we should DEFINE in the NML specification how parsers should behave in that case.
I think we should add this item on the todo list for this WG and solicit input from any contributor on the list to propose such specification.
Jason, would you agree this is a valid todo item for the group?
I do not agree, because crafting a new parser seems to be unnecessary busy work without any clear advantage for the group.
I do not understand why you bring up the desire or non-desire to craft a new parser. I was only suggesting that it would be useful to define in the NML specification what a parser should do in case it encounters an element it does not understand. (e.g. write some text along the lines of either "a client SHOULD reject the message" or "a client SHOULD ignore the unknown element"). I did not propose any text yet, I also did not think you should do that (though I would certainly value your input), nor that it should be done now. (As a side note: the reason for bringing up the validation requirement is that I hope to re-use existing parsers and validators, and thus avoid the need to craft a new parser.) On the other topics:
I fail to understand why this is better than using a alternate namespace on existing element from base. It is unclear to me how you propose to facilitate this 'subclass' idea using the avalable tools and constructs of XML.
I don't know how to explain it in email without repeating what already has been said before. The only thing I can think of is actually defining a full schema and writing an implementation that parses it. Perhaps implementing is a good idea given the amount of talking that we did so far :). In any case, I'm now on holiday till September 12. I will leave this coming weekend for two weeks, and will not be able to write that before that time, given that I'm better versed in RDF than in XSD. (and given that I'm not even a true RDF expert that means I'm an XSD-Noob ;) .) Regards, Freek

Hi Freek; Answers inline: On 8/23/11 6:21 PM, thus spake Freek Dijkstra:
Hi Jason,
We indeed seem to miscommunicate.
If an unknown element comes in, many parsers will [...]
Your original sentence stated "reject the whole message". The bottom line is that we should DEFINE in the NML specification how parsers should behave in that case.
I think we should add this item on the todo list for this WG and solicit input from any contributor on the list to propose such specification.
Jason, would you agree this is a valid todo item for the group?
I do not agree, because crafting a new parser seems to be unnecessary busy work without any clear advantage for the group.
I do not understand why you bring up the desire or non-desire to craft a new parser. I was only suggesting that it would be useful to define in the NML specification what a parser should do in case it encounters an element it does not understand. (e.g. write some text along the lines of either "a client SHOULD reject the message" or "a client SHOULD ignore the unknown element").
Changing how a parser behaves is akin to writing a new parser. You are altering something that was designed to handle the XML or XML schematic spec. Writing text in a specification is one thing that can be 'easily' done; but OGF standards are meant to apply to existing and future problems. Your modifications are not impossible to do, "a mere matter of code" perhaps, but I really question why? What does it gain this group? Bringing the conversation back to the reality of implementation, it is strongly desirable to re-use existing tools when possible. This simplifies development and fosters adoption. All of my answers thus far are using the lens of experience with the tools, and experience with a 'living' data specification format. It is a non-trivial task, and using what is out there makes things much easier. To date there is nothing that needs to be specified in NML that cannot be encoded in XML (or RDF) given the existing paradigms and tools. The changes you are proposing do not sound implementable to me because it's simply not the way that XML and XML schema work. If you want to try to do these ideas, I applaud your initiative, but I question what it is really gaining given that the current method has solved the extensibility and interoperability questions.
I did not propose any text yet, I also did not think you should do that (though I would certainly value your input), nor that it should be done now.
(As a side note: the reason for bringing up the validation requirement is that I hope to re-use existing parsers and validators, and thus avoid the need to craft a new parser.)
On the other topics:
I fail to understand why this is better than using a alternate namespace on existing element from base. It is unclear to me how you propose to facilitate this 'subclass' idea using the avalable tools and constructs of XML.
I don't know how to explain it in email without repeating what already has been said before. The only thing I can think of is actually defining a full schema and writing an implementation that parses it.
Perhaps implementing is a good idea given the amount of talking that we did so far :). In any case, I'm now on holiday till September 12. I will leave this coming weekend for two weeks, and will not be able to write that before that time, given that I'm better versed in RDF than in XSD. (and given that I'm not even a true RDF expert that means I'm an XSD-Noob ;) .)
I look forward to reading your approach; -jason

Jason Zurawski wrote:
Changing how a parser behaves is akin to writing a new parser. You are altering something that was designed to handle the XML or XML schematic spec.
I don't understand. Change what? Altering what? I wrote:
The bottom line is that we should DEFINE in the NML specification how parsers should behave in [..] case [an unknown XML is encountered].
I don't recall that an existing NML parser exists (I am aware of Pynt, which is an NDL parser, and perfSONAR-PS and perfSONAR-MDM, which are NM parsers). But let's for a moment assume such a parser exists; all I am saying is that it would be useful to DOCUMENT its behaviour. I'm not asking for whatever behaviour you think is desirable to be changed, altered, or made to jump through hoops. I'm asking for whatever is out there (or is going to be written) to be documented. Really. Please don't mix in other discussions. It is complicating enough getting the two of us on the same page. :) Freek

Freek; Answers inline: On 8/23/11 7:34 PM, thus spake Freek Dijkstra:
Jason Zurawski wrote:
Changing how a parser behaves is akin to writing a new parser. You are altering something that was designed to handle the XML or XML schematic spec.
I don't understand. Change what? Altering what?
Yes, it is very hard to understand when you completely cut all of the context from the mails. Here it is again:
I do not understand why you bring up the desire or non-desire to craft a new parser. I was only suggesting that it would be useful to define in the NML specification what a parser should do in case it encounters an element it does not understand. (e.g. write some text along the lines of either "a client SHOULD reject the message" or "a client SHOULD ignore the unknown element").
Your statement claims a desire to specify, in the NML spec document language, rules that dictate how a parser should behave when it encounters elements it is not aware of. Rules must be translated into code at some point, and unless you mean something completely different from this statement, you are implying that NML will be passing rules to a parser about how to behave. In my experience, changing this behavior is non-trivial, and thus alters how it functions vs other forms of XML and XML schemata. My argument from the prior email is that this seems like a silly task to be concerned with, given that the entire purpose of this group is to specify a network markup language, not critique or modify the current state of web service tools. Our time should be spent on these concepts, not bickering about technology.
I wrote:
The bottom line is that we should DEFINE in the NML specification how parsers should behave in [..] case [an unknown XML is encountered].
I don't recall that an existing NML parser exists (I am aware of Pynt, which is an NDL parser, and perfSONAR-PS and perfSONAR-MDM, which are NM parsers).
The current working version of "NML"*, for instance the circuit monitoring work produced by Aaron and Roman, is based on XML - it is parsed by XML parsers. The prior generation of both perfSONAR and the IDCP, which use concepts that NML still pushes today (e.g. nodes, links, ports), is also based on XML. If NML is not recognizing these work items as being related or a useful product to start from, then perhaps we need to take several large steps back as a group. * = "NML" because the work was done by regular NML members, using NML concepts.
But let's for a moment assume such a parser exists; all I am saying is that it would be useful to DOCUMENT its behaviour.
I'm not asking for whatever behaviour you think is desirable to be changed, altered, or made to jump through hoops. I'm asking for whatever is out there (or is going to be written) to be documented.
Really.
Please don't mix in other discussions. It is complicating enough getting the two of us on the same page. :)
So far you have not convinced me that your alternative approaches to constructing subclasses are any better (in fact I have tried to show they are worse in key areas) than the methods utilized in NMC/perfSONAR. It is unlikely you will be moving me with your ideas, just as it is very unlikely that I will be moving you with mine. Things are entrenched, and that is fine. You seem to be very keen to explore your alternate proposals for how the schema and instances should be constructed, and I do applaud you for that. Research is fun, and for people who have the time, very rewarding. I believe it's best that we agree to disagree, and if you are able to construct your alternate methods the group as a whole can evaluate them. Until that day happens, there are services in existence using the NML concepts, in XML form, rather successfully, including a circuit monitoring tool going into production on backbone/regional networks as well as several end sites. I believe it is important to tout these successes, and build upon this work as much as possible. If you feel that NML needs to take an alternative direction in this schematic design to make the work stronger overall, the group will listen to your proposal when it is prepared. Thanks; -jason

Hi all, I've discussed with Freek yesterday, and I think the main issue here is that there are different positions regarding validation and parsing of XML files. Jason has the position of a programmer using some XML library to parse XML files, create objects and general data out of it. The idea is that you take an XML file handed to the program, process it and make the best of it. There is no explicit validation. This seems reasonably similar to how browsers process HTML files for example. Freek on the other hand was thinking of how XML is handled in the SOAP/WSDL/Webservices world. There you have strict typing, explicit validation, code generation, et cetera. Everything has to adhere to reasonably strict schemas, otherwise most WS stacks refuse to work. While I understand that most of the NML files for monitoring will be processed by PerfSONAR and similar tools, I would prefer that the eventual schema would also be useful for WSDL style operations. I know that the datatypes used there are reasonably strict, does anyone know whether the current proposed XML schema is compatible with that context as well? Jeroen.

Hi Jeroen/All; On 8/25/11 10:42 AM, thus spake Jeroen van der Ham:
Hi all,
I've discussed with Freek yesterday, and I think the main issue here is that there are different positions regarding validation and parsing of XML files.
Jason has the position of a programmer using some XML library to parse XML files, create objects and general data out of it. The idea is that you take an XML file handed to the program, process it and make the best of it. There is no explicit validation. This seems reasonably similar to how browsers process HTML files for example.
Freek on the other hand was thinking of how XML is handled in the SOAP/WSDL/Webservices world. There you have strict typing, explicit validation, code generation, et cetera. Everything has to adhere to reasonably strict schemas, otherwise most WS stacks refuse to work.
While I understand that most of the NML files for monitoring will be processed by PerfSONAR and similar tools, I would prefer that the eventual schema would also be useful for WSDL style operations. I know that the datatypes used there are reasonably strict, does anyone know whether the current proposed XML schema is compatible with that context as well?
To some extent. Certain fields can be typed; for others it won't make sense to do so. The major objection (at least in prior mails) appears to be 'relation' being a generic chunk of XML that is hard to decode simply by syntactic validation. The only way you can dereference this is via a self-enumerated list of possible 'types'. The type string would dictate which, and how many, specific elements could be in there. I can imagine a situation where you can get _limited_ syntactic checking, but the tradeoff is that you would need to pre-define lots of these beforehand. Let me self-assign an action to send a schema that describes this in some way. I am not sure it will calm the entire discussion, but it will be a good exercise. Regarding WSDL, recall that there are different 'styles' of communication in the web services world. Here is a good intro: http://www.ibm.com/developerworks/webservices/library/ws-whichwsdl/ perfSONAR/NMC/NM are 'document literal', e.g. basically a complete XML document that contains meaning that we will decode on our own (in the processing code). This makes it very hard to strongly type things to the same level that the RPC varieties would. Typically the RPC varieties lend themselves to automatic stub generation and the like. Thanks; -jason

All; See inline a followup to an earlier promise I made: On 8/26/11 9:46 AM, thus spake Jason Zurawski:
Hi Jeroen/All;
On 8/25/11 10:42 AM, thus spake Jeroen van der Ham:
Hi all,
I've discussed with Freek yesterday, and I think the main issue here is that there are different positions regarding validation and parsing of XML files.
Jason has the position of a programmer using some XML library to parse XML files, create objects and general data out of it. The idea is that you take an XML file handed to the program, process it and make the best of it. There is no explicit validation. This seems reasonably similar to how browsers process HTML files for example.
Freek on the other hand was thinking of how XML is handled in the SOAP/WSDL/Webservices world. There you have strict typing, explicit validation, code generation, et cetera. Everything has to adhere to reasonably strict schemas, otherwise most WS stacks refuse to work.
While I understand that most of the NML files for monitoring will be processed by PerfSONAR and similar tools, I would prefer that the eventual schema would also be useful for WSDL style operations. I know that the datatypes used there are reasonably strict, does anyone know whether the current proposed XML schema is compatible with that context as well?
To some extent. Certain fields can be typed; for others it won't make sense to do so. The major objection (at least in prior mails) appears to be 'relation' being a generic chunk of XML that is hard to decode simply by syntactic validation. The only way you can dereference this is via a self-enumerated list of possible 'types'.
The type string would dictate which, and how many, specific elements could be in there. I can imagine a situation where you can get _limited_ syntactic checking, but the tradeoff is that you would need to pre-define lots of these beforehand. Let me self-assign an action to send a schema that describes this in some way. I am not sure it will calm the entire discussion, but it will be a good exercise.
See the following minor chunk of (unverified/unchecked) schema:
namespace nml = "http://ogf.org/schema/nml/base/20110830/"
NMLRelation = element nml:relation {
    ( attribute type { "specifictype1" } & NMLLink & NMLLink ) |
    ( attribute type { "specifictype2" } & NMLLink & NMLPort ) |
    ( attribute type { xsd:string } &
      # content
    )
}
NMLLink = element nml:link {
    # content
}

NMLPort = element nml:port {
    # content
}
The basic idea is that I have defined 2 'well known' relationships, but I have this 'anything else' sort of relationship to accept things I *don't* know about. See the following minor chunk of XML:
<!-- what the schema had in mind ... -->
<nml:relation type="specifictype1" xmlns:nml="http://ogf.org/schema/nml/base/20110830/">
  <nml:link />
  <nml:link />
</nml:relation>

<!-- also what the schema had in mind ... -->
<nml:relation type="specifictype2" xmlns:nml="http://ogf.org/schema/nml/base/20110830/">
  <nml:link />
  <nml:port />
</nml:relation>

<!-- not what the schema had in mind, but works ... -->
<nml:relation type="specifictype1" xmlns:nml="http://ogf.org/schema/nml/base/20110830/">
  <nml:link />
  <nml:port />
</nml:relation>

<!-- something the schema shouldn't care about, works -->
<nml:relation type="garbage" xmlns:nml="http://ogf.org/schema/nml/base/20110830/">
  <nml:link />
  <nml:link />
  <nml:link />
  <nml:link />
</nml:relation>
#1 and #2 give the strict checking that appears to be desired. But note that #3 is not what we want (still 'works' due to the backdoor of the 'anything' check), and #4 is exercising our extension mechanism. With one other change we can do this to the schema above:
( attribute type { "unclassified" } & # content )
And force anything we don't know about to use a specific string in the type field. Re-sending the XML through a verification pass would mean that now #3 gets kicked out (as it should), and we would need to modify #4 to do the following:
<!-- modified to pass the schema check ... we can do anything we want inside ... -->
<nml:relation type="unclassified" xmlns:nml="http://ogf.org/schema/nml/base/20110830/">
  <nml:link />
  <nml:link />
  <nml:link />
  <nml:link />
</nml:relation>
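[Putting that change into the earlier definition, the revised pattern would read roughly as follows; it is still the same unverified sketch, with the same placeholder '# content' comments.]

NMLRelation = element nml:relation {
    ( attribute type { "specifictype1" } & NMLLink & NMLLink ) |
    ( attribute type { "specifictype2" } & NMLLink & NMLPort ) |
    ( attribute type { "unclassified" } &
      # content
    )
}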
Personally I think this is a lot of work just to get semantic verification at the schema validation level; as a schema designer I would not want to do this. Semantic validation is still better handled at the service level in my opinion, but I won't re-hash that argument now. Thanks; -jason

On 30 Aug 2011, at 19:00, Jason Zurawski wrote:
See the following minor chunk of (unverified/unchecked) schema:
namespace nml = "http://ogf.org/schema/nml/base/20110830/"
NMLRelation = element nml:relation { ( attribute type { "specifictype1" }, NMLLink, NMLLink ) |
This definition doesn't seem to incorporate directionality, right? Or is this meant as a relation element that specifically takes a list of only two elements as argument?

Thinking about how we now define lists in a very implicit way, it seems to leave a lot of room for error in interpretation. Once you have a relation and its targets, the interpreter has to explicitly check how many arguments there are. If there are more than two, you have to check each of them for a "next" relation and make sure one points to the other. Also, this way of defining lists makes it nearly impossible to extend this to lists within lists in the future.

Jeroen.
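P.S. The kind of chaining I mean, in purely hypothetical syntax (the "next" type and the id attributes are made up):

<!-- an ordered list of three links, encoded as pairwise relations -->
<nml:relation type="next" xmlns:nml="http://ogf.org/schema/nml/base/20110830/">
  <nml:link id="linkA" />
  <nml:link id="linkB" />
</nml:relation>
<nml:relation type="next" xmlns:nml="http://ogf.org/schema/nml/base/20110830/">
  <nml:link id="linkB" />
  <nml:link id="linkC" />
</nml:relation>

An interpreter has to gather all of these relations, check how many targets each one has, and stitch the pairs together before it can recover the order linkA, linkB, linkC.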

Hi Jeroen/All; Just one note before I address your comments - remember that this is only an example and wasn't meant to reflect the current reality of talks on specific items or interactions. It really should not be used "as is". On 8/31/11 5:19 AM, thus spake Jeroen van der Ham:
On 30 Aug 2011, at 19:00, Jason Zurawski wrote:
See the following minor chunk of (unverified/unchecked) schema:
namespace nml = "http://ogf.org/schema/nml/base/20110830/"
NMLRelation = element nml:relation { ( attribute type { "specifictype1" }, NMLLink, NMLLink ) |
This definition doesn't seem to incorporate directionality, right?
No, it's several lines of schema I wrote from memory, didn't check via automated tools, and probably won't save anywhere, just to make a simple point.
Or is this meant as a relation element that specifically takes a list of only two elements as argument?
Yes, it's a minimalist way to get some semantics into the syntactic checking step. I have never had to do this before, so I am sure it could have been done in different ways. YMMV.
Thinking about how we now define lists in a very implicit way, it seems to leave a lot of room for error in interpretation.
My point exactly - semantic checking is best handled by other means, not the syntax parser.
Once you have a relation and its targets, the interpreter has to explicitly check how many arguments there are. If there are more than two, you have to check each of them for a "next" relation and make sure one points to the other.
It can be done this way, but it is much harder and doesn't gain you much in my opinion.
Also, this way of defining lists makes it nearly impossible to extend this to lists within lists in the future.
I can't say; I haven't tried. I leave that as an exercise for others who really want to see this idea pursued. I stand by my earlier statements that semantic checking in the code is much more efficient than something along these lines. Thanks; -jason

Jason Zurawski wrote:
The above leads to three scenario's in mind where it may be prudent to mix namespaces in a single message: - core topology with technology specific (Ethernet, IP, ..) namespaces. This is probably the weakest use case, as it is possible to add the core topology concepts to each technology specific namespace (e.g. using chameleon namespaces) - topology namespace with geo namespace - topology namespace with an application-specific namespaces (eg. topology + NSI for path finding, or topology + NMC for monitoring).
You describe all of the things that I (and others in SLC) were arguing for all along - the ability to extend the base concepts into new use cases through the use of namespace extensions. I will make one quibble on the above - I would argue it's important to make the 'base' set minimal, e.g. 'ethernet', 'ip', etc. are not *in* the base; these are extensions *of* the base.
I think we are in agreement with these points from the start.

What I'm not sure we agree upon is how to implement this. The way this is done in NMC (using chameleon namespaces) is certainly not the only way to accomplish this.

Off the top of my head, I can think of the following design patterns that accomplish the same thing:
* Base + extensions
* Object composition
* Instances can be instance-of multiple classes
* Subclassing
* Pointer to base class in schema description
* Chameleon namespaces
* Pointer to base class in messages itself

For example, in NDL we first played with a method where the technology-specific schemata were INSTANCES of the base schema, thus not a SUBCLASS of the base schema, and a network resource has-a property consisting of one of these technology-specific instances (we could do this because in RDF a Resource can be a Class and an Instance at the same time). Later, we settled on a method where network resources were INSTANCES of two distinct classes: both an instance of a base class (e.g. rdf:type ndl:Interface) and an instance of a technology-specific class (e.g. rdf:type wdm:FiberNetworkElement). (In RDF, a Resource can be an instance of multiple classes, kind of like the Object composition design pattern.)

My gut feeling is that subclassing is a very common method, so I'm inclined to use that for NML. Chameleon namespaces, on the other hand, are uncommon, so I would rather settle on a more common design pattern.

Now, a problem may be that we're lacking a real ontology designer. We're all network experts here. It seems we know how to make ontologies (me and Jeroen in RDF, you and Roman in XML), and have found a few things that worked well for us. However, I know I'm not well versed in all the ins and outs of ontology design concepts (frankly, I had to Google the names of a few of the above design patterns, and fear I still have some of them wrong, so I resorted to using my own wording).

I'm a bit at a loss right now as to what the best way forward is -- we seem to argue a great deal about the syntax, not about the underlying network concepts. Unfortunately, it does matter, since the choices we make in syntax have consequences for e.g. the extensibility. Would it help to more clearly describe our requirements? E.g. what kind of extensibility we want (e.g. make a backward- or forward-compatible version 2 of the base later; add more technologies without touching the base schema; add more relations with or without touching the base schema; mix with other schemata, ...).

Regards, Freek
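P.S. To make at least two of these concrete in XML terms (all namespace URIs and element names below are made up for illustration): with base + extensions, a single message mixes the base namespace with a technology namespace,

<nml:port xmlns:nml="http://ogf.org/schema/nml/base/20110830/"
          xmlns:eth="http://example.org/schema/nml/ethernet/20110830/">
  <eth:vlan>312</eth:vlan>
</nml:port>

whereas with chameleon namespaces the technology schema absorbs the base vocabulary into its own namespace, so the same data would appear entirely in the technology namespace:

<eth:port xmlns:eth="http://example.org/schema/nml/ethernet/20110830/">
  <eth:vlan>312</eth:vlan>
</eth:port>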

Hi Freek; I do not wish to contribute to another out-of-sync email storm today by replying to two threads at the same time. I believe that everything you are discussing below is covered in a prior mail, so let's not re-hash it. We are doing a lot of talking, but not a lot of implementing. If we are to advance this conversation beyond the current state, examples need to be constructed that show why you think the current proposals on the table for NML are insufficient, and how you would alter them to get the exact functionality/look you think is required. Thanks; -jason On 8/23/11 5:47 AM, thus spake Freek Dijkstra:
Jason Zurawski wrote:
The above leads to three scenario's in mind where it may be prudent to mix namespaces in a single message: - core topology with technology specific (Ethernet, IP, ..) namespaces. This is probably the weakest use case, as it is possible to add the core topology concepts to each technology specific namespace (e.g. using chameleon namespaces) - topology namespace with geo namespace - topology namespace with an application-specific namespaces (eg. topology + NSI for path finding, or topology + NMC for monitoring).
You describe all of the things that I (and others in SLC) were arguing for all along - the ability to extend the base concepts into new use cases through the use of namespace extensions. I will make one quibble on the above - I would argue it's important to make the 'base' set minimal, e.g. 'ethernet', 'ip', etc. are not *in* the base; these are extensions *of* the base.
I think we are in agreement with these points from the start.
What I'm not sure we agree upon is how to implement this. The way this is done in NMC (using chameleon namespaces) is certainly not the only way to accomplish this.
Off the top of my head, I can think of the following design patterns that accomplish the same thing:
* Base + extensions
* Object composition
* Instances can be instance-of multiple classes
* Subclassing
* Pointer to base class in schema description
* Chameleon namespaces
* Pointer to base class in messages itself
For example, in NDL we first played with a method where the technology-specific schemata were INSTANCES of the base schema, thus not a SUBCLASS of the base schema, and a network resource has-a property consisting of one of these technology-specific instances (we could do this because in RDF a Resource can be a Class and an Instance at the same time). Later, we settled on a method where network resources were INSTANCES of two distinct classes: both an instance of a base class (e.g. rdf:type ndl:Interface) and an instance of a technology-specific class (e.g. rdf:type wdm:FiberNetworkElement). (In RDF, a Resource can be an instance of multiple classes, kind of like the Object composition design pattern.)
My gut feeling is that subclassing is a very common method, so I'm inclined to use that for NML. Chameleon namespaces, on the other hand, are uncommon, so I would rather settle on a more common design pattern.
Now, a problem may be that we're lacking a real ontology designer. We're all network experts here. It seems we know how to make ontologies (me and Jeroen in RDF, you and Roman in XML), and have found a few things that worked well for us. However, I know I'm not well versed in all the ins and outs of ontology design concepts (frankly, I had to Google the names of a few of the above design patterns, and fear I still have some of them wrong, so I resorted to using my own wording).
I'm a bit at a loss right now as to what the best way forward is -- we seem to argue a great deal about the syntax, not about the underlying network concepts. Unfortunately, it does matter, since the choices we make in syntax have consequences for e.g. the extensibility. Would it help to more clearly describe our requirements? E.g. what kind of extensibility we want (e.g. make a backward- or forward-compatible version 2 of the base later; add more technologies without touching the base schema; add more relations with or without touching the base schema; mix with other schemata, ...).
Regards, Freek
participants (4): Aaron Brown, Freek Dijkstra, Jason Zurawski, Jeroen van der Ham