
Hi Andre,

Phew, these mails tend to get long. Here we go:

On Sat, 2009-10-17 at 22:25 -0600, Andre Merzky wrote:
1. Exception handling in engines with late binding is a pain.
Agree, it is painful. But what can you do? At best, the engine is able to employ some heuristics to extract the most relevant exception and push that to the top level. Your application should then, by default, only print that message to stderr.
The only real 'solution' would be to disable late binding... Or do you see any other way?
Mainly: restrict the number of backends tried as much as possible (see below). Furthermore, catch generic errors in the engine instead of in each adaptor separately (e.g. illegal flags, negative port numbers, etc.), so the user gets one exception instead of 10 identical ones from all the adaptors tried.
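As a concrete sketch of both suggestions (illustrative names only, not the actual SAGA engine code): the engine validates generic arguments once, before any adaptor runs, and while late-binding over adaptors it keeps only the single most relevant failure instead of surfacing one exception per adaptor.

    #include <functional>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Hypothetical sketch of engine-side error handling under late binding.
    struct adaptor_error { int relevance; std::string message; };

    std::string try_adaptors(int port,
                             const std::vector<std::function<std::string(int)>>& adaptors) {
      if (port < 0)                          // generic check, done once in the engine
        throw std::invalid_argument("negative port number");

      adaptor_error best{-1, "no adaptor available"};
      for (const auto& a : adaptors) {
        try { return a(port); }              // first success wins (late binding)
        catch (const adaptor_error& e) {     // remember only the most relevant failure
          if (e.relevance > best.relevance) best = e;
        }
      }
      throw std::runtime_error(best.message); // one exception, not ten
    }

The user then sees either the generic argument error or the one adaptor failure the engine judged most relevant.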
2. The any:// scheme is evil.
Well, then they should use the backend specific URLs, not 'any'! 'Any' is exactly for those cases where the backend is *not* known - in all other cases, it does not make sense to use it.
Agreed.
A much cleaner design (followed in IbisDeploy, for example) that alleviates problems 1 and 2 is to define a number of Grid sites that each have certain backends and for which you have certain credentials. A SAGA engine can then, per site, only try the adaptors that make sense in the first place.
Hmm, isn't that what is happening? The default session should contain saga contexts for those backends you have security credentials for. As the adaptors live in that session, only those adaptors should get active for which a context exists.
You probably mean that the other adaptors still throw an AuthorizationFailed exception if no context is available? Well, one can disable the adaptors.
So, I guess what I try to say is that the saga::session can be used to specify the backends to use, via the contexts.
This limits the backends tried to the ones explicitly specified by the user, which makes it much easier to understand what is going on. It is also faster, since not all adaptors have to be tried. Currently, there is no generic way in SAGA to limit which adaptors or credentials are used for a site; JavaGAT does have such functionality.
The generic way is to create a session with those contexts (aka credentials) attached which you want to use. Say, you want to limit the set of active adaptors to the globus adaptors, do
    saga::session s;
    saga::context c ("globus");
    s.add_context (c);

    saga::filesystem::file f (s, url);
This should get you only the globus adaptor - all others will bail out, right? (sorry if my answer is a repetition from above)
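In pseudo-code, the selection Andre describes would work roughly like this (simplified stand-in types, not the real saga classes): the engine activates only those adaptors for which a matching context exists in the session.

    #include <algorithm>
    #include <string>
    #include <vector>

    // Illustrative sketch of the intended context-based adaptor selection.
    struct context { std::string type; };
    struct session { std::vector<context> contexts; };
    struct adaptor { std::string name; std::string wanted_context; };

    std::vector<std::string> active_adaptors(const session& s,
                                             const std::vector<adaptor>& all) {
      std::vector<std::string> active;
      for (const auto& a : all) {
        bool match = std::any_of(s.contexts.begin(), s.contexts.end(),
            [&](const context& c) { return c.type == a.wanted_context; });
        if (match) active.push_back(a.name);  // only adaptors with a matching context run
      }
      return active;
    }

Whether real engines actually skip the non-matching adaptors, rather than letting them fail, is exactly the point under discussion.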
Not really. The other adaptors will still throw an exception in their constructor. Say the Globus adaptor fails for some reason: the user then still has to wade through all the other exceptions to find the one that matters. That's confusing and annoying.
3. Sessions with multiple contexts of the same type should be forbidden. Trying them all may have weird and unwanted side-effects (e.g. creating files as a different user, or a security lockout because you tried too many passwords). It confuses the user. This issue is related to point 2.
This is a tough one. The problem here is that a context type is not bound to a backend type. Like, both glite and globus use X509 certs. Both AWS and ssh use openssl keypairs. Both local and ftp use Username/Password, etc. I don't think this is something one can enforce.
We had the proposal to have the context types not bound to the backend *technology* (x509), but to the backend *name* (teragrid). This was declined as it makes it difficult to run your stuff on a different deployment using the same cert.
Hmm, in your adaptor-selecting example you do exactly that: using a context type specific to a single backend ("globus") to select a specific adaptor. If the context should have a type "x509", how do I then select only the Globus adaptor? And how do I differentiate between multiple Globus adaptors for different versions of Globus? There should be a better way of selecting adaptors...
4. URL schemes are ill-defined. Right now, knowing which schemes to use is implementation-dependent voodoo (e.g. what is the scheme for running local jobs? Java SAGA uses 'local://', C++ SAGA used 'fork://'). There is no generic way of knowing these schemes other than 'read the documentation', which people don't do. Essentially, these schemes create an untyped dependency of a SAGA app to a SAGA implementation, causing SAGA apps not to be portable across implementations unless they all have the same adaptors that recognize the same schemes.
Correct. Scheme definition is not part of the spec. I argue it should not be either, as any such definition could only be restrictive, which would break use cases too. The only solution right now is to create a registry: simply a web page which lists recommendations on what scheme to use for what backend. Would that make sense to you?
That would certainly help to bring the various SAGA implementations closer together. However, the more general problem is that SAGA users should be able to limit the adaptors used in a late-binding implementation. The two main reasons are:

- speed (always trying 10 adaptors takes time)
- clarity (limit the number of exceptions)

The current two generic mechanisms are context types and URL schemes. Neither is well suited: each adaptor would have to recognize a unique context type and scheme to allow the selection of individual adaptors. Even then, selecting two adaptors is already hard: you cannot have two schemes in one URL, and using two contexts only works if both adaptors recognize a context in the first place.

A solution could be to add some extra functionality to a Session. A user should be able to specify which adaptors may be used, e.g. something similar to the Preferences object in JavaGAT. Ideally, you could also ask which adaptors are available. Specifying this in the API prevents each implementation from creating its own mechanism via config files, system properties, environment variables, etc.
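As a rough sketch of such an API extension (hypothetical, inspired by JavaGAT's Preferences; not part of the SAGA spec): the session carries an explicit allow-list of adaptor names, and the engine consults it before trying anything.

    #include <set>
    #include <string>
    #include <vector>

    // Hypothetical session extension: an allow-list of adaptor names.
    struct session {
      std::set<std::string> allowed;               // empty set = allow all adaptors
      bool permits(const std::string& adaptor) const {
        return allowed.empty() || allowed.count(adaptor) > 0;
      }
    };

    // The engine would try only the permitted subset of installed adaptors,
    // and could also report this subset back to the application.
    std::vector<std::string> select(const session& s,
                                    const std::vector<std::string>& installed) {
      std::vector<std::string> out;
      for (const auto& a : installed)
        if (s.permits(a)) out.push_back(a);
      return out;
    }

This makes adaptor selection independent of context types and URL schemes, and trivially supports selecting two adaptors at once.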
5. Bulk operations are hard to implement and clumsy to use. Better would be to include bulk operations directly in the API where they make sense. It's much simpler to implement adaptors for that, and much easier for users to use and comprehend.
Oops - bulk ops were designed to be easy to use! Hmmm...
About being hard to implement: true, but if they are easy to use, then that does not matter (to the SAGA API spec).
Why bulk ops were not explicitly added to the spec is obvious: it would (roughly) double the number of calls, and would lead to some pretty complex call signatures:
    list <list <url> > listings = dir.bulk_list (list <url>);
    list <int> results = file.bulk_read (list <buffer>, list <sizes>);
Further, this would lead to even more complex error semantics (what happens if one op out of a bulk of ops fails?).
All of this is avoided by the current syntax:
    foreach url in (list <url>)
    {
      tc.add_task (dir.list <Async> (url));
    }
    tc.wait (All);
Not that difficult to use I believe?
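For reference, the loop-and-wait pattern above can be mimicked with simplified stand-in types (this is not the real saga::task / saga::task_container API, just a runnable illustration of its structure):

    #include <functional>
    #include <string>
    #include <utility>
    #include <vector>

    // Simplified stand-ins, purely illustrative: each task wraps one
    // deferred operation whose result is filled in on wait.
    struct task {
      std::function<std::string()> op;
      std::string result;
    };

    struct task_container {
      std::vector<task> tasks;
      void add_task(task t) { tasks.push_back(std::move(t)); }
      void wait_all() { for (auto& t : tasks) t.result = t.op(); }
    };

    // The foreach/wait pattern from the mail: queue one listing per url,
    // then wait for all of them at once.
    task_container bulk_list(const std::vector<std::string>& urls) {
      task_container tc;
      for (const auto& u : urls)
        tc.add_task(task{[u]() { return "listing of " + u; }, ""});
      tc.wait_all();
      return tc;
    }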
First, how do I figure out which list came from which URL? The get_object() call of each task will only return the 'dir' object, but you need the 'url' parameter to make sense of the result. Doesn't this make the current bulk ops API useless for all methods that take parameters?

Second, does each bulk operation require the creation of another task container? If I want to do dir.get_size(url) and dir.is_directory(url) for all entries in a directory, can I put all these tasks in one container, or should I create two separate containers? The programming model does not restrict me in any way. An engine will have a hard time analyzing such task containers and converting them to efficient adaptor calls...

best regards,
Mathijs