
On Mon, Oct 28, 2019 at 05:19:08PM -0300, Punk - Stasi 2.0 wrote:
(I won't focus on errors in this paper, e.g.:

 - the description of the return path packet encryption (dest to origin) appears to be in error - but that's not interesting afaics.

 - "Anonymity in this scheme is asymmetric - the caller is anonymous, but not the receiver" seems an incorrect assertion, since N0 is known to N1 at least, albeit N0's content may well not be known to N1, and N0's destination point may not be known to N1.
)

This design doc is most useful conceptually for pondering possible elements of our network design, since it's an origin document, usefully laying out some of the concepts at issue:

It introduces the onion concept (if not by name), where node N0 requests N1 to link to N2 on behalf of N0, and key establishment between N0 and N2 is (presumably) hidden from N1:

   "4. Establish a key (K2) with N2 through N1."

It introduces link negotiation:

   "3. Request that N1 establish a link id (S2) with N2."

It also introduces the packet switching concept, where in at least one version of such switching, N1 (or N2 etc.) could randomize routing on behalf of N0:

   "The second node shuffles the packets it receives during a time unit and forwards them in random order to others."

Exactly how this is achieved is not yet clear (a rough sketch of the time-unit shuffle follows below). Possibilities:

A. N0 establishes with N1 (by the usual request/contract protocol) multiple links from N1 to nodes N2, N3, N4 etc., and N0 also, or thereafter, requests of N1 randomized outgoing packet shuffling for N0's packets (sent from N0 to N1).

   - this leaves ultimate logical routing control in the hands of N0
   - latency escalation (over a multi-hop route) should be estimable by N0
   - the ultimate (effective) network topology may be simpler to reason about, control and analyze

B. N0 links to N1, and simply hands off all routing decisions for all packets to N1.

   - This might be viable if N1 is a known friend node.
   - In this routing protocol, N0 still needs to negotiate QoS with N1, to establish what total volume (in and/or out) and what bandwidth rates N1 is willing to make available to N0, and for what durations.
   - We must always keep in mind that meat space 'known friends' may well be using hardware/software which is compromised (unknown to the friend).

 - These protocols don't have to operate mutually exclusively to one another - they can be used in parallel, along with other routing protocols, such as strict N0-controlled end-to-end routes.

 - We must not mistake the feeling of control ("ACKed requests") for actual control. When we say N0 ultimately makes, and therefore "controls", all routing decisions/routing types used, what we really mean is that N0 "specifies" all routing types it is willing to use, within each of its respective "link establishment requests".

 - We of course must also always keep in mind that we are talking virtual links, not physical links, and also quite possibly adversarial peer nodes. In the virtual (let alone physical) networking space, a node N1 (at least for suitable QoS link requests, if non-adversarial) may of its own accord make "randomized" routing decisions, aka "routing decisions for N0's packets, outside of any specific requests by N0", and such "N1 primary authority" decisions may be adversarial to N0, supportive of N0, or have some other basis. Of course, iqnets core does the right/assumed-best thing by default - we simply consider all possibilities which may ultimately be faced in any actual network.
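To make the "shuffle the packets received during a time unit" idea concrete, here is a minimal sketch in Python - purely illustrative; the Packet fields, the ShuffleRelay name and the forward callback are my own assumptions, not anything defined by iqnets or the paper. Option A above would amount to N0 requesting this behaviour from N1 as part of link establishment; option B would make it N1's own default policy.

    # Illustrative sketch only: a relay that buffers packets for a time
    # unit T, then forwards them in random order (the paper's shuffle).
    import random
    import threading
    import time
    from dataclasses import dataclass, field

    @dataclass
    class Packet:
        link_id: str       # e.g. S2, as negotiated during link establishment
        payload: bytes     # already onion-encrypted by the originator (N0)

    @dataclass
    class ShuffleRelay:
        forward: callable              # callback: sends a packet to its next hop
        time_unit: float = 1.0         # T, seconds to buffer before flushing
        _buffer: list = field(default_factory=list)
        _lock: threading.Lock = field(default_factory=threading.Lock)

        def receive(self, pkt: Packet) -> None:
            """Called for each incoming packet; just buffer it."""
            with self._lock:
                self._buffer.append(pkt)

        def run(self) -> None:
            """Every T seconds, flush the buffer in random order.
            The buffering is what adds latency (up to ~T per hop)."""
            while True:
                time.sleep(self.time_unit)
                with self._lock:
                    batch, self._buffer = self._buffer, []
                random.shuffle(batch)
                for pkt in batch:
                    self.forward(pkt)

Note the worst-case added latency per hop is roughly T (plus queueing), so if N0 negotiates the time unit at each hop it can estimate total route latency as the sum of those units - which is the "latency escalation should be estimable by N0" point under option A.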
Re randomized fan outs, here is a bit of a conundrum/potential opportunity in the balance between the various options available to us:

 - Does it make sense for N0 to leave certain routing decisions to another node in its route?

 - Is the "fan out + randomize" concept identifiably useful for certain use cases?

 - For say N2 to do a randomized fan out on incoming packets from N0 (say via N1), N2 will have to buffer the incoming packets over time period units of T, so that it has > 1 packet to on-send in a randomized fashion; this naturally introduces latency - which of course is acceptable, even desirable, depending on the use case - we're now conceptually heading into random latency/high latency mix net design territory.

Latency - an important consideration which the above paper effectively raises is:

 - the latency effect on route establishment, and
 - the latency effect on packet traversal through established routes,

for different switching/routing models. This consideration needs more thought, especially in relation to various networking (i.e. end user app) use cases.

--------------------------

Alert: incoming thought, must get it down before it flees my lonely neurone.

Headroom, or rather resource, reservation requests:

 - N0 could make "headroom" reservation requests of another node.

 - Is this the same as simply a chaff-filled link? No. A resource reservation request is an "in advance of being used" request for a node to reserve or keep aside some resource on my behalf until I need to use it, according to params, e.g. "reserve for time period T", resource magnitude X, etc. (a rough sketch of such a request follows below).

E.g.:

 - bandwidth reservation (I want to download a 4GiB movie, I just don't have time right now; please reserve that for me, to be used within the next 5 days)

 - low latency link reservation (I want you to always reserve at least one telephone call's worth of low latency link on my behalf, for when I want to make phone calls - and of course hand out the rest as you choose)

 - cache reservation - although without further thought, I prefer the undertaking/promise model

 - such reservation requests perhaps make most sense between meat space friends, but there's of course no reason to limit them to any particular node type
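A resource reservation request along the lines above might look something like the following - again only a sketch, with field names and resource types that are my own assumptions rather than any defined iqnets message format: the requester names a resource type, a magnitude, and a reservation window, and the peer grants (possibly partially) or declines.

    # Illustrative sketch of a "headroom"/resource reservation request.
    # Field names and resource types are assumptions for discussion only.
    from dataclasses import dataclass
    from enum import Enum, auto

    class ResourceType(Enum):
        BANDWIDTH_VOLUME = auto()   # "reserve N bytes of transfer for me"
        LOW_LATENCY_LINK = auto()   # "keep one call's worth of low-latency capacity aside"
        CACHE = auto()              # "keep X bytes of storage aside"

    @dataclass
    class ReservationRequest:
        requester: str              # node id of the requester, e.g. N0
        resource: ResourceType
        magnitude: float            # bytes, bits/s, call slots... units depend on the type
        window_seconds: float       # T: how long the peer should hold the reservation open

    @dataclass
    class ReservationGrant:
        granted: bool
        magnitude: float            # may be less than requested
        expires_at: float           # unix time after which the peer reclaims the resource

    # Example: "I want to download a 4 GiB movie some time in the next 5 days."
    movie_download = ReservationRequest(
        requester="N0",
        resource=ResourceType.BANDWIDTH_VOLUME,
        magnitude=4 * 1024**3,          # 4 GiB in bytes
        window_seconds=5 * 24 * 3600,   # 5 days
    )

The key distinction from a chaff-filled link is that nothing is transmitted until the reservation is actually drawn on; the peer is only promising to keep capacity aside for the agreed window.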