planning for OGF-30 interop demo

Hi all,

as most of you should know by now, we plan a SAGA interop demo for OGF30 in Brussels (http://www.ogf.org/OGF30/). In order to pull that off successfully, we need to start to define

- participants
- scope
- implementations to be used
- interop demo scenario

Off the top of my head, I would think that the following parties might be interested in teaming up resources (endpoints, code, people) to get things in place:

- VU Amsterdam
- IN2P3, France
- CCT, LSU
- KEK/Naregi
- RAL, UK

It would be great if each party could let us know explicitly if they are interested in participating! Did I miss anybody?

We have the following bits and pieces which may, or may not, play a role in the interop demo:

implementations:
JavaSAGA
JSAGA
SAGA-C++
PySAGA (over JavaSAGA)
PySAGA (over JSAGA)
SAGA-Python (over SAGA-C++)
command line tools for most of the implementations

functionality:
Job Submission (all)
Data transfer / access (all)
Advert Service (not JSAGA I think, not in PySAGA)
Replica Management (not JSAGA?)
Service Discovery (not in JSAGA? not in PySAGA)

backends:
local (all)
globus (all)
ssh (all)
aws (all?)
glite (all?)
bes (all?)

infrastructures:
local institutions
teragrid
loni
naregi
What about European Grids?

Again, is there something I miss?

PySAGA appears to be a great integration point, and SAGA-C++ intends to support it very soon, too - but it is not certain that we will manage to do that before OGF30.

A simple interop demo would be to submit the same job (NOT /bin/date) to a set of resources in the various infrastructures, discovered via SD, from various tools. A job submitted via Python, for example, should be monitorable from C++ tools, and output could be reaped via PySAGA-over-JSAGA, etc. (A minimal sketch of this scenario follows below.)

The above is just initial input, to get the discussion and planning started. Please feed back and complete the item lists above. Once we have those lists complete, we should be able to come up with a more or less realistic scenario.

I'll mirror this list on our wiki at GridForge, so that we can edit things in place. Feel free to discuss on the list though; I'll try to keep the thread in sync with the wiki.

Best, Andre.

--
Nothing is ever easy.
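To make the proposed scenario concrete, here is a minimal sketch of the submission side, assuming a PySAGA / SAGA-Python binding that follows the SAGA job API (GFD.90). The endpoint URL, executable, and exact attribute spellings are illustrative placeholders, not a tested interface:

    # Minimal sketch, assuming a Python binding that follows the SAGA
    # job API (GFD.90). Endpoint URL and executable are placeholders.
    import saga

    js = saga.job.Service("gram://gatekeeper.example.org/jobmanager-pbs")

    jd = saga.job.Description()
    jd.executable = "/usr/local/bin/demo_app"   # anything but /bin/date
    jd.arguments  = ["--iterations", "100"]
    jd.output     = "demo_app.out"

    job = js.create_job(jd)
    job.run()

    # The job ID is the interop handle: another SAGA implementation
    # (e.g. the C++ command line tools) could reconnect to it to
    # monitor state and, eventually, reap the output.
    print("job id   :", job.id)
    print("job state:", job.state)

The printed job ID would be the handle that the other implementations and tools pick up for monitoring, which is the interoperability point of the scenario.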

Andre Merzky wrote:
Hi all,
Hi Andre,
as most of you should know by now, we plan a SAGA interop demo for OGF30 in Brussels (http://www.ogf.org/OGF30/). In order to pull that off successfully, we need to start to define
- participants
- scope
- implementations to be used
- interop demo scenario
Off the top of my head, I would think that the following parties might be interested in teaming up resources (endpoints, code, people) to get things in place:
- VU Amsterdam
- IN2P3, France
Yes, we are interested in participating.
- CCT, LSU
- KEK/Naregi
- RAL, UK
It would be great if each party could let us know explicitly if they are interested in participating! Did I miss anybody?
We have the following bits and pieces which may, or may not, play a role in the interop demo
implementations:
JavaSAGA
JSAGA
SAGA-C++
PySAGA (over JavaSAGA)
PySAGA (over JSAGA)
SAGA-Python (over SAGA-C++)
command line tools for most of the implementations
functionality:
Job Submission (all)
Data transfer / access (all)
Advert Service (not JSAGA I think, not in PySAGA)
Not JSAGA indeed.
Replica Management (not JSAGA?)
JSAGA supports the Replica Management package for gLite-LFC, iRODS and SRB.
Service Discovery (not in JSAGA? not in PySAGA)
Not yet in JSAGA.
backends:
local (all)
globus (all)
ssh (all)
OK.
aws (all?)
Not in JSAGA.
glite (all?)
OK for us.
bes (all?)
Not in JSAGA.
infrastructures:
local institutions
teragrid
loni
naregi
What about European Grids?
Again, is there something I miss?
PySAGA appears to be a great integration point, and SAGA-C++ intends to support it very soon, too - but it is not certain that we will manage to do that before OGF30.
A simple interop demo would be to submit the same job (NOT /bin/date) to a set of resources in the various infrastructures, discovered via SD, from various tools. A job submitted via Python, for example, should be monitorable from C++ tools, and output could be reaped via PySAGA-over-JSAGA, etc.
This example will not work with some adaptors, because JSAGA adaptors sometimes add information to the job description that is not used by the middleware itself, but by the adaptor, in order to overcome limitations in the middleware's features, in particular for reaping output. However, if we restrict ourselves to the features that are natively supported by the middleware (e.g. saving output to gsiftp for gLite), it should work.
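For illustration, a job description restricted to natively supported output staging might look like this. This is a hedged sketch assuming a Python binding that follows GFD.90; the URLs are placeholders and the exact file-transfer directive syntax may differ per implementation:

    # Hedged sketch: restrict the job description to features the
    # middleware supports natively, e.g. staging output to a gsiftp
    # URL for gLite, so the adaptor needs no private hints in order
    # to reap output. URLs and names are placeholders.
    import saga

    jd = saga.job.Description()
    jd.executable = "/usr/local/bin/demo_app"
    jd.output     = "demo_app.out"

    # GFD.90-style file-transfer directive: "local < remote" stages
    # the remote file out after the job completes (assumed syntax).
    jd.file_transfer = [
        "gsiftp://se.example.org/outbox/demo_app.out < demo_app.out"
    ]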
The above is just initial input, to get the discussion and planning started. Please feed back and complete the item lists above. Once we have those lists complete, we should be able to come up with a more or less realistic scenario.
I'll mirror this list on our wiki at GridForge, so that we can edit things in place. Feel free to discuss on the list though; I'll try to keep the thread in sync with the wiki.
Best, Andre.
Best regards, Sylvain

Dear Andre, all,
backends:
local (all)
globus (all)
ssh (all)
aws (all?)
glite (all?)
bes (all?)
In terms of using ogsa-bes, we can provide you with a UNICORE endpoint, and I'm quite sure that I can also get you ARC, OMII-UK & GENESIS endpoints. Please contact me and Steve Crouch (cc) once you agree to use this backend interface with SAGA. Take care, Morris

Dear Andre,
Off the top of my head, I would think that the following parties might be interested in teaming up resources (endpoints, code, people) to get things in place:
- VU Amsterdam
- IN2P3, France
- CCT, LSU
- KEK/Naregi
- RAL, UK
OK for KEK to use NAREGI. However,
A simple interop demo would be to submit the same job (NOT /bin/date) to a set of resources in the various infrastructures, discovered via SD, from various tools. A job submitted via Python, for example, should be monitorable from C++ tools, and output could be reaped via PySAGA-over-JSAGA, etc.
Using SD to discover NAREGI resources will be difficult at the demo. We are certainly developing an SD adaptor for NAREGI now, but the development and testing cannot be completed by that time. In my understanding, SAGA has a prototype SD adaptor only for gLite at this moment. Do you know about the status for other middlewares?
Replica Management (not JSAGA?)
JSAGA supports the Replica Management package for gLite-LFC, iRODS and SRB.
Great. Andre, Sylvain already has an iRODS replica adaptor for their JSAGA. Do you still need that adaptor for SAGA-C++ at the demo? If so, we will try to complete the adaptor by then.
Best regards, Yutaka

Yutaka Kawai 河井 裕
High Energy Accelerator Research Organization (KEK)
Computing Research Center
Tel: +81-(0)29-864-5200 (Ext: 4503)
Fax: +81-(0)29-864-4402
E-Mail: yutaka.kawai@kek.jp
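Since replica management keeps coming up, here is what access through the SAGA replica package could look like from Python. This is a hedged sketch assuming a binding that follows the GFD.90 replica API; the catalogue and storage URLs are placeholders, and the exact flag names may differ per binding:

    # Hedged sketch of the SAGA replica package (GFD.90); flag names
    # and URLs are placeholders and may differ per binding.
    import saga

    # Open (or create) a logical file entry in the replica catalogue.
    lf = saga.replica.LogicalFile(
        "lfn://lfc.example.org/grid/demo.vo/data/run42.dat",
        saga.replica.CREATE | saga.replica.READ_WRITE)

    # Register a first physical location, then create a second replica.
    lf.add_location("srm://se1.example.org/data/run42.dat")
    lf.replicate("srm://se2.example.org/data/run42.dat")

    # List all known physical replicas of the logical file.
    for url in lf.list_locations():
        print(url)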

On 11 August 2010 01:19, Yutaka Kawai <yutaka.kawai@kek.jp> wrote:
Using SD to discover NAREGI resources will be difficult at the demo. We are certainly developing an SD adaptor for NAREGI now, but the development and testing cannot be completed by that time. In my understanding, SAGA has a prototype SD adaptor only for gLite at this moment. Do you know about the status for other middlewares?
The only SD adapter that I am aware of is the one for gLite (recently deployed in WLCG), so we can't demonstrate interoperability until we get another one. Steve

Yutaka Kawai wrote:
Replica Management (not JSAGA?)
JSAGA supports the Replica Management package for gLite-LFC, iRODS and SRB.
Great. Andre, Sylvain already has an iRODS replica adaptor for their JSAGA. Do you still need that adaptor for SAGA-C++ at the demo? If so, we will try to complete the adaptor by then.
More generally, rather than selecting for the demo the middlewares that are supported by all SAGA implementations, I think that it would be more interesting for users to demonstrate that they can use several SAGA implementations from a single application to access a wider diversity of middlewares in a uniform way. This is one of the motivations for having developed JPySAGA. Best regards, Sylvain

Hi Andre, On Aug 9, 2010, at 11:44 AM, Andre Merzky wrote:
backends:
local (all)
globus (all)
ssh (all)
aws (all?)
glite (all?)
We (as in LSU SAGA) currently support only gLite CREAM (job adaptor with X.509 contexts). What specific gLite components are you referring to? What do other SAGA implementations support? Cheers, Ole

Ole Weidner wrote:
We (as in LSU SAGA) currently support only gLite CREAM (job adaptor with X.509 contexts). What specific gLite components are you referring to? What do other SAGA implementations support?
I can answer for JSAGA:
Job management: WMS, CREAM (LCG-CE not achieved yet)
Data management: SRM (DPM and dCache supported), LFC
Security: VOMS, MyProxy
...but as I said in my previous mail, for the demo I think it is more interesting to show that SAGA implementations can complement each other rather than showing that they can do the same thing! Cheers, Sylvain

Quoting [Sylvain Reynaud] (Aug 12 2010):
...but as I said in my previous mail, for the demo I think it is more interesting to show that SAGA implementations can complement each other rather than showing that they can do the same thing!
That is likely true. Also, it seems unlikely that we get a set of resources which are accessible by all implementation groups.

The most sensible (and also easiest) way forward would then be, IMHO, that each group prepares their own set of demos, against the set of backends they use anyway, and we run one demo after the other, presentation style.

For some backends we could also consider setting up a demo resource via AWS, by preparing an image which runs globus, glite, etc. But the effort for that is hard to estimate (for me). Is there a need/use for that? Any volunteers who could help with the setup?

Best, Andre.

Hi Andre,

How much time should we spend on each set of demos?

In our set of demos, we would like to show how we can use two implementations of SAGA in the same application (SAGA-C++/JSAGA with Python, and if technically possible JavaSAGA/JSAGA with Java). A sketch of what this could look like follows below.

-> is there any URL where we can download the PySAGA wrapper for SAGA-C++ (even if it is not yet finished)?
-> can we use the SAGA-C++ Service Discovery API extension from Python?

Cheers, Sylvain
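As a rough illustration of mixing two SAGA implementations behind the same API in one application: the module names saga_cpp and jpysaga below are hypothetical stand-ins for PySAGA-over-SAGA-C++ and JPySAGA, both assumed to expose the same PySAGA job API, and the scheme-to-implementation routing is just one possible design:

    # Hedged sketch: one Python application using two SAGA
    # implementations. 'saga_cpp' and 'jpysaga' are hypothetical
    # module names for PySAGA-over-SAGA-C++ and JPySAGA.
    import saga_cpp   # hypothetical: PySAGA bindings over SAGA-C++
    import jpysaga    # hypothetical: PySAGA bindings over JSAGA

    # Route each backend to the implementation that supports it:
    # e.g. Globus GRAM via SAGA-C++, gLite WMS via JSAGA.
    IMPL_BY_SCHEME = {
        "gram": saga_cpp,
        "wms":  jpysaga,
    }

    def submit(service_url, executable, arguments):
        """Submit one job, picking the implementation by URL scheme."""
        scheme = service_url.split("://", 1)[0]
        impl   = IMPL_BY_SCHEME[scheme]
        js = impl.job.Service(service_url)
        jd = impl.job.Description()
        jd.executable = executable
        jd.arguments  = arguments
        job = js.create_job(jd)
        job.run()
        return job.id

    # Same application code, two middlewares, two implementations:
    submit("gram://gk.example.org/jobmanager-pbs", "/bin/hostname", [])
    submit("wms://wms.example.org:7443/glite_wms_wmproxy_server",
           "/bin/hostname", [])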

Quoting [Sylvain Reynaud] (Oct 05 2010):
Hi Andre,
How much time should we spend on each set of demos ?
I think that 15 to 30 minutes would be sensible. We do have enough session time scheduled, though, to go beyond that limit if required (see separate mail).
In our set of demos, we would like to show how we can use two implementations of SAGA in the same application (SAGA-C++/JSAGA with Python, and if technically possible JavaSAGA/JSAGA with Java).
Sounds great! :-D
-> is there any URL where we can download the PySAGA wrapper for SAGA-C++ (even if it is not yet finished)?
AFAIK, http://gforge.cs.vu.nl/gf/project/pysaga/scmsvn/?action=browse&path=%2Fimpl%2F points to the svn repository containing the relevant sources.
-> can we use the SAGA-C++ Service Discovery API extension from Python?
The SD package is available, see https://svn.cct.lsu.edu/repos/saga/bindings/python/trunk/packages/sd I do not know how well tested that package is. Hartmut, any input? Best, Andre.
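For reference, using the SD package from Python might look roughly like this. This is a hedged sketch assuming the Python bindings expose the SAGA Service Discovery extension (GFD.144) as saga.sd; the bootstrap URL and filter expressions are placeholders:

    # Hedged sketch of the SAGA Service Discovery extension (GFD.144),
    # assuming the Python bindings expose it as saga.sd. The bootstrap
    # URL and filter strings are placeholders.
    import saga.sd

    discoverer = saga.sd.Discoverer("ldap://bdii.example.org:2170")

    # GFD.144: list_services(service_filter, data_filter, vo_filter)
    services = discoverer.list_services(
        "type = 'org.glite.ce.CREAM'",   # service filter
        "",                              # data filter
        "vo = 'demo.vo'")                # VO filter

    # Each discovered endpoint is a candidate target for demo jobs.
    for svc in services:
        print(svc.url)   # assuming the Url attribute is exposed as .url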

On Thu, Oct 7, 2010 at 6:08 AM, Andre Merzky <andre@merzky.net> wrote:
How much time should we spend on each set of demos ?
I think that 15 to 30 minutes would be sensible. We do have enough session time scheduled though to go beyond that time limit, if required (see separate mail).
Thank you for the clarification. Regarding this, we at KEK would like to give a demonstration at OGF30. We would much appreciate it if you could assign us to an appropriate session (1-4); we are okay with either session. For the moment we are planning to take 30 minutes for the demo, though that can be arranged to be longer or shorter. Regards, Go
participants (7)
- Andre Merzky
- Go Iwai
- Morris Riedel
- Ole Weidner
- Steve Fisher
- Sylvain Reynaud
- Yutaka Kawai