On 5/29/12 10:26 AM, Radek Krzywania wrote:
Hi,
I think Jeroen was thinking of OSPF mechanisms, not OSPF itself (at least I was). So use the algorithms and behaviour, but adapted to our needs. We can use Web Services for that and introduce security. It's up to us. We don't need to use Quagga or whatever other routing software for that purpose.
Fair enough...   But it still comes down to "what do we want it to do?"   We have on several occasions looked at using the OSPF routing protocol (and GMPLS variants), only to decide that it would not work well in an inter-domain environment for a number of architectural reasons, never mind the shift from XML-based data descriptions to TLVs.

I do agree we should consider aspects of OSPF-style flooding.  But even so, it is a non-trivial implementation.

May I suggest the following topology exchange process (a rough code sketch follows the list):

1) Bootstrap:  Let's assume that each NSA is initialized with a local topology.   This initial topology is likely a manually specified "local" topology (e.g., the RDF/OWL we use now).   Assume this topology is announced via a public "topology URL".  Further, assert that the topology description contains SDPs that reference the topology URLs of the adjacent networks.

2) Given this initial info, an agent/NSA would "pull" information from each URL referenced in the SDPs, iteratively fetching each new topology URL as it is encountered.   The topology information from each remote topology URL is merged with the local topology to create a comprehensive "world view".

This is simple, but poses the N^2 scaling problem: all NSAs contact all other NSAs.   This is addressed as follows:

3) As each NSA merges another topology file into its world view, it posts that merged world view to the topology URL for the local domain.   (Thus the locally advertised topology includes both the local topology and whatever other topology the NSA now knows about the rest of the world.)

4) An NSA only explores (fetches) "hanging" SDPs, i.e., SDPs whose topology URL has not yet been fetched and merged into the world view.

5) Finally, each NSA periodically refreshes its world view.    But because the topology URLs provide world-view topology (not just local topology), an NSA only needs to fetch the topology from its immediate neighbor(s) to get updated world views.   This addresses the N^2 problem, and updates propagate to all topology agents.
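
As a rough illustration of steps 1-4, here is a minimal sketch in Python.  The JSON document shape ({"domains": ..., "sdp_urls": ...}) and the function names are purely my own assumptions for illustration - in reality the fetched documents would be the RDF/OWL topology descriptions, with a proper parser in place of json.load:

    import json
    import urllib.request

    def fetch_topology(url):
        # HTTP GET one topology document (JSON assumed for the sketch)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def build_world_view(local):
        world = dict(local["domains"])     # domain name -> topology fragment
        pending = set(local["sdp_urls"])   # "hanging" SDPs still to explore
        fetched = set()
        while pending:
            url = pending.pop()
            fetched.add(url)
            remote = fetch_topology(url)
            world.update(remote["domains"])                # merge remote view
            pending |= set(remote["sdp_urls"]) - fetched   # new hanging SDPs
        return world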

I think this process is exceedingly simple - it essentially only requires a periodic HTTP GET of topology from one (or a few) URLs.   An NSA can still directly walk the entire global topology itself if it desires, but it is not necessary - it need only fetch topology from its immediate neighbors.   Updates will flood (propagate) out as a result of periodic refresh.   Thus, topology will converge with time.
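
And a sketch of the periodic refresh in step 5, reusing fetch_topology() from the sketch above (the 300-second period is an arbitrary placeholder):

    import time

    def refresh_loop(world, neighbor_urls, period_seconds=300):
        # Re-fetch only the immediate neighbors' published world views;
        # since those documents carry everything the neighbors have
        # learned, updates flood through without an N^2 mesh of fetches.
        while True:
            for url in neighbor_urls:
                world.update(fetch_topology(url)["domains"])
            time.sleep(period_seconds)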

A nice feature of this proposed process is that it allows NSAs to act as topology servers.   An agent/NSA could walk the topology from any starting domain and construct a world view.   And this same server could continuously monitor these domains for updates.  Clients or other NSAs can fetch the world view from the topology server first.    Such a server effectively reduces the diameter of the propagation domain, thus allowing the topology to converge more quickly.   The process allows multiple [redundant] servers to exist.

One concern I had with this process is the update propagation mechanism - it is based on periodic pull rather than event-driven push.   A push is only useful if the time between update events is large - it propagates updates as they occur rather than waiting for some periodic schedule.       If, however, updates are happening frequently, then a periodic pull is just as effective as a push, maybe more so if it allows updates to be batched.  Given the complexity we have seen due to firewalls and NATs, I think a push mechanism is more sophisticated than we need for now - and we have our hands full getting CS v2.0 code working as it is.    So, for now, I believe a periodic pull will be fine, and so simple as to be trivial.

Further, a pull allows an NSA to selectively pull updates from only particular key networks, or from just particular servers.  I.e., a hierarchical server tree could be constructed (à la DNS).

One aspect that still needs some discussion is the "merge" process.   A sophisticated validation process will be challenging (shelve this for now).   I also think an intelligent construct-by-construct update of a topology will be non-trivial (shelve this for now).   I suggest we keep it simple: a merge simply uses the latest known version of a domain's topology.  We just wholesale-replace a domain's topology text with the most current version of the text file in its entirety.   Not a surgical strike, but it should work for the near future.    We simply need a topology construct that timestamps the topology file.   We could also define an expiration date element as well - thus telling other NSAs how often they should refresh.
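
A sketch of that "latest version wins" merge, again in Python.  The "timestamp" and "expires" field names are hypothetical stand-ins for the proposed topology constructs (both taken here as seconds since the epoch); merge_domain() would replace the plain dict.update() calls in the earlier sketches:

    import time

    def merge_domain(world, name, fragment):
        # Wholesale replacement: keep whichever copy of the domain's
        # topology carries the newer timestamp - no surgical strike.
        current = world.get(name)
        if current is None or fragment["timestamp"] > current["timestamp"]:
            world[name] = fragment

    def prune_expired(world, now=None):
        # Drop fragments past their expiration date; the next periodic
        # refresh should pull a fresh copy from the owning domain.
        now = time.time() if now is None else now
        for name in list(world):
            expires = world[name].get("expires")
            if expires is not None and expires < now:
                del world[name]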

Thoughts?
J





Best regards
Radek

________________________________________________________________________
Radoslaw Krzywania                      Network Research and Development
                                           Poznan Supercomputing and  
radek.krzywania@man.poznan.pl                   Networking Center
+48 61 850 25 26                             http://www.man.poznan.pl
________________________________________________________________________


-----Original Message-----
From: nsi-wg-bounces@ogf.org [mailto:nsi-wg-bounces@ogf.org] On Behalf
Of Jerry Sobieski
Sent: Tuesday, May 29, 2012 3:45 PM
To: Jeroen van der Ham
Cc: NSI WG
Subject: Re: [Nsi-wg] Topology section

This bootstrapping process is exactly what I described:
How do you learn what the world looks like topologically when the NSA starts
up?
And how do you do this in a global distributed system of autonomous
administrative domains and millions of untrusted [topology] consumers?

The problem with OSPF is that it indeed floods link state announcements.  But this is a scaling problem in large multidomain environments, and it can take significant time for such protocols to converge... if they ever do.   And even with OSPF, not all topology is expressed... there are summarized LSAs, areas, etc.   All of these hide topology in an attempt to make it scale better.

And OSPF is non-trivial... It does not run over TCP or UDP (it sits directly on IP).   It has no authorization security either.   And it is not exactly a web service :-).    It has multiple timers just to tell when a neighbor is present and how/when to flood link state announcements, yet no way to determine whether an LSA is to be trusted.   And you still need to discover or configure the local topology... OSPF can only do this in very limited cases... even in GMPLS this was done by LMP... yet another protocol.  Further, the topology technologies/layering are defined in the protocol, rather than in a separate topology standard... thus you need to change the protocol in order to enhance the topology descriptions.  And the open source implementations (Quagga, Zebra, ...) are not small packages easily modified...

These are some of the reasons why you never see OSPF in interdomain use, rarely see it even in multilayer use, and why many networks have migrated to IS-IS for intradomain IP routing.   I think these are also good opportunities for us to consider how we want to manage topology distribution in the future, rather than just blindly adopting what was defined in the past for IP networks.

So it seems to me we ought to discuss and understand the fundamental
process that we want to see occur before we decide what the proper
mechanism is to accomplish that process.

Perhaps we should review the NSI requirements for topology distribution... I don't think we ever actually came up with one that was vetted by the group... I made some bullets that were presented at OGF in Lyon, but those were just talking points, and I think they were oriented more to topology representation rather than distribution and discovery processes.

thoughts?
jerry






Sent from my iPad3g

On May 29, 2012, at 7:14 AM, Jeroen van der Ham <vdham@uva.nl> wrote:

Hi,

On 16 May 2012, at 19:46, Jerry Sobieski wrote:
The key is that we are exchanging world views - or updates to world
views, not simply local topologies.
Try this protocol sequence:
[...]

Instead of thinking up all kinds of scenarios and exchange mechanisms,
could we please just restrict this to referring to other implementations?
Most of what I've seen so far is all supported by OSPF. It has limited
peering, abstraction, and simple update mechanisms. I propose we use that
same mechanism; if there is anything wrong with that, please write down
what needs to be changed, and why.
The only thing we don't have at our disposal is multicast, but I think we can
solve that by using a peer-to-peer overlay network. That requires some
bootstrapping, but you need to coordinate with your neighbor(s) already, so
that can become part of that exchange too.
Jeroen.

_______________________________________________
nsi-wg mailing list
nsi-wg@ogf.org
https://www.ogf.org/mailman/listinfo/nsi-wg