Fwd: Admela Jukan's presentation

Interesting thread that has fallen off the GHPN reflector for no good reason. -franco

---------- Forwarded message ----------
From: Bill St.Arnaud <bill.st.arnaud@canarie.ca>
Date: Apr 1, 2005 5:47 PM
Subject: RE: Admela Jukan's presentation
To: Gigi Karmous-Edwards <gigi@mcnc.org>, travos@ieee.org
Cc: chyoun@icu.ac.kr, ebslee@ntu.edu.sg, "Masum Z. Hasan" <masum@cisco.com>, Leon Gommans <lgommans@science.uva.nl>, imonga@nortel.com, Admela Jukan <jukan@uiuc.edu>, "Gkarmous@Mcnc. Org" <gkarmous@mcnc.org>, Cees DeLaat <delaat@science.uva.nl>

> Bill, I would like to understand more about what you mean by "bringing
> the network into the application" - are you referring to vertical
> integration from the application down to the physical network resources?

No. A lot of high-end applications on HPC machines cannot be decomposed across multiple machines or grids because they depend on pipeline or shared-memory architectures. Bringing the network into the machine, so that it can emulate shared memory and/or pipelining, would allow many existing HPC applications to become more distributed. TCP offload and RDMA are some of the underlying technologies that would enable this. RDMA in particular requires consistent network performance and behaviour, and so is unsuitable for shared IP or fast-switched networks.
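A rough sketch of what this could look like in code, purely illustrative: the pipe classes and run_stage() below are hypothetical names invented for this sketch, not a real HPC or RDMA API. The point is that a decomposed pipeline stage need not care whether its neighbour sits in the same machine or at the far end of a dedicated lightpath, provided the link behaves as predictably as local memory.

# Illustrative only: a pipeline stage connected by either an in-machine
# queue or a network socket, behind one common interface.
import queue
import socket

class LocalPipe:
    """Link between two stages on the same machine (an in-memory queue)."""
    def __init__(self):
        self._q = queue.Queue()

    def send(self, data: bytes) -> None:
        self._q.put(data)

    def recv(self) -> bytes:
        return self._q.get()

class NetworkPipe:
    """Same interface, but the link is a TCP socket. Over a dedicated
    lightpath its behaviour stays predictable enough that stage code
    written against LocalPipe runs unchanged."""
    def __init__(self, sock: socket.socket):
        self._sock = sock

    def send(self, data: bytes) -> None:
        # 4-byte length prefix so the receiver can find message boundaries
        self._sock.sendall(len(data).to_bytes(4, "big") + data)

    def recv(self) -> bytes:
        size = int.from_bytes(self._read_exact(4), "big")
        return self._read_exact(size)

    def _read_exact(self, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = self._sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("pipe closed mid-message")
            buf += chunk
        return buf

def run_stage(transform, inbound, outbound) -> None:
    """One decomposed module: consume from upstream, produce downstream.
    Runs until the process is stopped; a real module would add shutdown."""
    while True:
        outbound.send(transform(inbound.recv()))

Swapping LocalPipe for NetworkPipe changes nothing in the stage code; what changes is whether the network can sustain memory-like determinism, which is exactly why shared IP networks are unsuitable here.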
> I think this tight integration is what differentiates Grid computing from
> other applications. (This requires a constant feedback loop between the
> network resources and the applications, or the grid middleware in
> between.) Also, 20k Euros for a nailed-up 10 Gbps wavelength from what to
> what? Some of the e-science applications are interested in host-to-host,
> not necessarily aggregator-to-aggregator.

Agreed. But many institutions are building private research networks directly from researcher to researcher, bypassing the aggregator. There were some great presentations on this topic at a recent UKERNA meeting. A number of universities in the UK are building separate private research networks on their campuses that run in parallel to the normal aggregated production network. These private research networks for e-Science, grids, etc. are linked by dedicated private lightpaths that bypass the normal wide-area networks.

Bill

> So dynamic reconfiguration of such pipes will be necessary, right? I am
> referring to applications that choose not to use the routed network for
> various reasons (e.g., lack of determinism, poor behavior of TCP over
> long distances and large data sets, etc.)
>
> Thanks,
> Gigi

On 3/31/05 3:13 PM, "Bill St.Arnaud" <bill.st.arnaud@canarie.ca> wrote:

I enjoyed Admela's presentation on control plane issues. I think it is a good summary of most of the issues. However, I would suggest there are some areas that may be worth exploring further:

(a) In addition to applications needing to interact with the network physical layer for large data flows, there are some situations where it would be advantageous to bring the network into the application. This is quite different from the network being "aware" of the application. There is a lot of work going on in the HPC community to "decompose" large data applications into smaller modules which can then be relocated anywhere on the network. However, in some cases the application modules may still be on the same physical machine, interconnected by a "virtual" network or pipeline. Extending HPC pipeline architectures into network pipes would clearly be advantageous.

(b) I remain skeptical about reservation and scheduling of bandwidth or lightpaths. The cost of wavelengths continues to plummet, and it is now cheaper to nail up the bandwidth and leave it sitting idle than to pay the high OPEX costs of scheduling, reservation, billing, etc. For example, I have been informed by reliable sources that the annual cost of a 10 Gbps wavelength on the new Geant network will be on the order of 20K Euros. You couldn't hire a graduate student for that price to do the scheduling and reservation.

The counter-argument is that there will be applications whose data transfers are infrequent, so that buying nailed-up wavelengths, even at 20k Euros, can't be justified; in that case I say use a general-purpose routed network. Given that the data transfers are so infrequent, I suspect the slightly longer delays of using the routed network can be tolerated. But I suspect most large data flow applications will run between well-known and often-used sources and sinks, so the need for scheduling and reservation will be very limited.

Bill
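To put rough numbers on that argument: the wavelength price below is the 20K Euro figure quoted above, while the scheduling OPEX is an invented placeholder (roughly the loaded cost of the graduate student in question), so the break-even point is illustrative only.

# Back-of-the-envelope check. WAVELENGTH_EUR_PER_YEAR comes from the
# thread; SCHEDULING_OPEX_EUR is an assumption made up for illustration.
WAVELENGTH_EUR_PER_YEAR = 20_000   # nailed-up 10 Gbps wave, per year
SCHEDULING_OPEX_EUR = 50_000       # assumed: staff plus software for
                                   # reservation, scheduling and billing

def sharing_saves(users: int) -> bool:
    """Scheduling pays off only if one shared wave plus its OPEX costs
    less than giving every user a dedicated wave."""
    dedicated = users * WAVELENGTH_EUR_PER_YEAR
    shared = WAVELENGTH_EUR_PER_YEAR + SCHEDULING_OPEX_EUR
    return shared < dedicated

for n in (1, 2, 3, 4, 5):
    print(f"{n} user(s): scheduling pays off? {sharing_saves(n)}")
# With these assumed numbers, the scheduled wave only wins once four or
# more users would otherwise each need a dedicated wave, and even then
# only if their transfers rarely collide.

Lowering the assumed OPEX moves the break-even point, but as long as wavelength prices keep falling the crossover keeps receding, which is the thrust of the argument.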
-----Original Message-----
From: owner-ghpn-wg@ggf.org [mailto:owner-ghpn-wg@ggf.org] On Behalf Of Franco Travostino
Sent: Thursday, March 31, 2005 2:44 PM
To: ghpn-wg@gridforum.org
Cc: chyoun@icu.ac.kr; ebslee@ntu.edu.sg; Masum Z. Hasan; Leon Gommans; imonga@nortel.com; Admela Jukan; Gigi Karmous-Edwards; Cees de Laat
Subject: [ghpn-wg] Fwd: Seoul material is on-line

I've been informed that Admela's presentation could not be opened with PowerPoint. It turns out that the handoff between Admela and me somehow altered the file's content. I have now replaced the file on forge.gridforum.org. For further reference:

/cygdrive/D/GGF13
(19) sum Admela*
59184  2731 Admela Correct File.ppt
11383  2731 Admela Damaged File.ppt

-franco

Date: Wed, 30 Mar 2005 13:08:06 -0500
To: ghpn-wg@gridforum.org
From: Franco Travostino <travos@ieee.org>
Subject: Seoul material is on-line
Cc: chyoun@icu.ac.kr, ebslee@ntu.edu.sg, "Masum Z. Hasan" <masum@cisco.com>, Leon Gommans <lgommans@science.uva.nl>, "inder [BL60:418:EXCH] Monga" <imonga@AMERICASM06.nt.com>, Admela Jukan <jukan@uiuc.edu>, Gigi Karmous-Edwards <gkarmous@mcnc.org>, Cees de Laat <delaat@science.uva.nl>

The whole GHPN production for GGF13 is available at:
https://forge.gridforum.org/docman2/ViewCategory.php?group_id=53&category_id=941

We had a lively meeting (we actually ran 10 minutes past the end of our slot). I hope you will take the time to peruse the minutes and the material. The State of the Drafts that I prepared should be up to date (alert me if not); it also covers a couple of drafts that have been announced even though they didn't make the GGF13 cutoff date. See:
https://forge.gridforum.org/docman2/ViewProperties.php?group_id=53&category_id=941&document_content_id=3603

The GGF13 program featured a couple of interesting BOFs with a strong network connotation. Kindly enough, both referenced GHPN material. One was the Firewall and NAT BOF; the room consensus was that it should be chartered as an RG. The other one was the VPN BOF. On behalf of the GHPN, I invite these groups to use the GHPN community as a sounding board for their work. If they don't get the nod from the GFSG, they can also consider using the GHPN as a temporary home in which to incubate their work further.

-franco

--------------------------------------------
Gigi Karmous-Edwards
Principal Scientist
Advanced Technology Group
MCNC Grid Computing and Network Services
RTP, NC, USA
+1 919-248-4121
gigi@mcnc.org
--------------------------------------------

--
http://www.francotravostino.name