
Hi;

One of the advantages of trying to leverage CIM is that there is already lots of support and tooling out there for it, so generating CIM-based solutions may be pragmatically desirable.

Regarding data staging, my argument for making it an extension is that there are lots of systems where explicit data staging is unnecessary, which is why it's not in the HPC base use case. Anyone who needs to deal with data staging -- whether as part of a job, part of a larger workflow, or part of a data staging service -- will need to employ the data staging extension to JSDL and the HPC profile.

Regarding the question of who "owns" which part of the problem space within GGF, that's something I really want to stay away from. As co-chair of the HPC profile working group, with a mandate to get things done by the end of summer, I need to arrive at concrete specifications irrespective of by whom or how they are generated. I'm happy to work with anyone and I'm more than happy to share credit with everyone. :-)

Marvin.

-----Original Message-----
From: owner-ogsa-bes-wg@ggf.org [mailto:owner-ogsa-bes-wg@ggf.org] On Behalf Of Karl Czajkowski
Sent: Friday, June 09, 2006 9:48 PM
To: Marvin Theimer
Cc: Donal K. Fellows; JSDL Working Group; ogsa-bes-wg@ggf.org; Ed Lassettre; Ming Xu (WINDOWS)
Subject: Re: [ogsa-bes-wg] RE: [jsdl-wg] Questions and potential changes to JSDL, as seen from HPC Profile point-of-view

On Jun 09, Marvin Theimer modulated:
...
> Agreed. Also, one possibility is to explicitly specify some of the commonly occurring "semi-bound" scenarios, such as "any x86" architecture. I'm not familiar enough with the CIM world to know if they can provide us with guidance on how to solve the problem in general.
It is a flat enumeration set, I believe, so some logical composition model would be required, e.g. "give me cpu type i386 OR i486 OR pentium OR pentium-m OR opteron ...".

BTW, I sometimes wonder if CIM is the right model here. The above values are a subset of those available to GCC for controlling ABI and processor-specific optimizations, and the GCC developers seem pretty aggressive about introducing new processor support in a backwards-compatible manner. Perhaps a slightly "weaker-typed" constraint ought to be defined, accepting strings that are interpreted as a compiler-specific concept? I haven't thought about this too much, so maybe I am off base, but in some sense compiler settings are closest to what the user/app really cares about for predictability. On the negative side, it means having to support multiple compiler-specific namespaces for these constraints, and having reasonable best practices for translating between them in service implementations.
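To make the composition idea concrete, here is a rough sketch of what an "any x86" disjunction over the flat enumeration might look like (Python, purely illustrative; the value strings are just the ones listed above, and the variable/function names are invented for the example):

    # "Any x86" expressed as an explicit OR over flat enumeration values.
    # Illustrative only: assumes the resource advertises a single
    # processor-family string and the request carries a set of acceptable values.
    ANY_X86 = {"i386", "i486", "pentium", "pentium-m", "opteron"}

    def satisfies(acceptable_families, advertised_family):
        """True if the advertised processor family satisfies the disjunction."""
        return advertised_family in acceptable_families

    print(satisfies(ANY_X86, "opteron"))   # True
    print(satisfies(ANY_X86, "sparcv9"))   # False

Whether the strings come from CIM or from a compiler-flavored namespace, the matching logic is the same; the hard part is agreeing on the value sets and on how to translate between namespaces.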
> If we narrow the definitions of mountpoint and mountsource enough and precisely describe their semantics then we might arrive at something that could be fairly widely used. I'm thinking of things like saying that you can't navigate "out" of a file system via "cd ..", etc. This is definitely something to explore.
> Since the HPC profile base case treats data staging as being out-of-scope, the base interface profile will exclude these; but that can be done independently of anything else. (And, of course, the data staging extension to the HPC profile will need to deal with this subject in any case, even if it's ignored in the base case.)
I'm not sure if scoping away staging means we can ignore storage system structure. How will one compose a staging service activity with a compute activity if there is no underlying model for how the storage is manipulated and named?
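To illustrate the kind of shared naming that seems to be needed, here is a rough sketch (Python, purely illustrative; the logical name "SCRATCH", the paths, and the helper are all invented for the example). Both the staging step and the compute step refer to the same logical file system name, only the service knows the real mount source, and the check at the end corresponds to the "can't navigate out of the file system via cd .." restriction quoted above:

    import os.path

    # Logical file-system name -> local mount point, as resolved by the service.
    mounts = {"SCRATCH": "/mnt/scratch/job42"}

    def resolve(logical_name, relative_path):
        """Map a (logical file system, relative path) pair to a local path,
        refusing paths that escape the mount point."""
        root = mounts[logical_name]
        full = os.path.normpath(os.path.join(root, relative_path))
        if not (full == root or full.startswith(root + os.sep)):
            raise ValueError("path escapes the logical file system: %r" % relative_path)
        return full

    # A staging activity deposits an input file here...
    print(resolve("SCRATCH", "input/data.dat"))   # /mnt/scratch/job42/input/data.dat
    # ...and the compute activity names the same file the same way.
    # Attempts to climb out of the mount are rejected:
    # resolve("SCRATCH", "../other-job/data")     -> ValueError

Without some agreed-upon logical naming of this sort, a staging activity and a compute activity have no common vocabulary for referring to the same storage.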
> Any advice on this subject would be greatly appreciated. As I said above, I have to deal with this subject one way or the other and would prefer to do so with the minimum of feather-ruffling (while still making progress that results in a usable HPC profile by the end of the summer).
Personally, I think it is silly to say that a "space" can be claimed by others! Anyone with good ideas and willingness to contribute should feel free to develop them. If they are great, maybe the others will realize they would benefit by working together. If the others are unsure, maybe there will have to be competing proposals and a decision made in the marketplace... top-down architecture really doesn't work very well with voluntary, collaborative projects.

karl

--
Karl Czajkowski
karlcz@univa.com