Hidemoto,

Hidemoto Nakada wrote:
> Laurent,
>
> Thank you for answering my question. I think this is going to be a
> fruitful discussion. Since we will not have enough time at the GGF F2F
> meeting, we should continue this on the ML.

Yes, thank you for taking the time for this.

> But I think this is essentially because of the dynamic nature of
> GridRPC, and just defining the semantics of
> 'grpc_function_handle_default' will not solve the problem.

I agree. This was just to be sure of the possible semantics of
'grpc_function_handle_default', as it was not clear to me in the
document. I had to check that I'm not wrong about this dynamic choice of
the server, and that it is not a "particular" feature.

> Users will have the freedom to dynamically create a povray handle
> *AFTER* the kmc process is invoked. Even if users explicitly specify a
> server for povray, there is no way to know it before calling the kmc
> process.

You are right. Now, we assume that we have several KMC servers. This is
true: these physicians want to test several parameters on their KMC
code. As one execution takes several days, they test several parameter
sets in parallel, and so use several KMC servers. When they want to
submit a new parameter set, they call the GridRPC platform, and the
platform chooses the best available server at the time of the call. So
the client does not know, at call time, which server will be used, and
is therefore not able to indicate where to leave the generated data
before calling povray. For these reasons, we have to express where the
data comes from (for input data) and what to do with the data after the
computation (for output data).

> So, I think we should admit this is a very complicated problem and
> there is no simple answer. In my opinion there are two ways to solve
> the problem.
> - Assuming some 'magical' global data management system behind, define
>   a simple interface.
> - Assuming no background support, define a set of explicit data
>   transfer methods and explicit data management (maybe with soft-state
>   lifetime management).

I think the two solutions are not at the same level, but they both need
functions to access them. I think we should first complete the GridRPC
interface with data handles and data management functions, without
assuming anything about the way they are implemented. By just defining
functions, we do not assume anything about the underlying support. These
functions may be used to access or interface some 'magical' global data
management system as well as a background support; this will depend on
the platform. By defining data management functions in GridRPC, we will
provide a homogeneous and complete API to clients.
> I love the former one, because:
> A) there are several such 'magical' systems actually emerging, like
>    AIST's gfarm;
> B) the data transfer method is already defined (or at least on the way
>    to its definition) in another WG, and is clearly out of scope for
>    our WG.
Yes, we should not rewrite a data transfer service; we must just
interface it with GridRPC. We also have some kind of 'magical' system in
DIET, called DTM. However, the work done in the GridRPC WG to normalize
access to servers and define a common API to our platforms could also be
done for data. If we agree on that (just an API, not a service), then we
should start to discuss what data structures and what functions to put
in this API.

Comments?

Laurent

-- 
Laurent PHILIPPE                    http://lifc.univ-fcomte.fr/~philippe
philippe@lifc.univ-fcomte.fr
Laboratoire d'Informatique (LIFC)   tel: (33) 03 81 66 66 54
route de Gray                       fax: (33) 03 81 66 64 50
25030 Besancon Cedex - FRANCE