Fwd (kielmann@cs.vu.nl): Re: Fwd (andre@merzky.net): Re: Fwd (andre@merzky.net): Re: [saga-rg] context problem
and now my mail didn't make it to the list :-(
----- Forwarded message from Thilo Kielmann -----

Date: Sun, 16 Jul 2006 19:05:53 +0200
From: Thilo Kielmann
To: Andre Merzky
Subject: Re: Fwd (andre@merzky.net): Re: Fwd (andre@merzky.net): Re: [saga-rg] context problem

Merging 2 mails from Andre:
very good points, and indeed (1) seems cleanest. However, it has its own semantic pitfalls:
saga::file f (url);
saga::task t = f.write <saga::task::Task> ("hello world", ...);
f.seek (100, saga::file::SeekSet);
t.run ();
t.wait ();
If on task creation the file object gets copied over, the subsequent seek (sync) and write (async) work on different object copies. In particular, these copies will have different state - seek on one copy will have no effect on where the write will occur.
I cannot see a problem here: with object copying, you will simply have the same file open twice. And given the operations you do, this might even be the right thing... This example is very academic: can you show an example where sharing state between tasks is actually useful?
I should have added that I'd prefer 3:
3. when creating a task, all parameter objects are passed "by reference"
   + no enforced copying overhead
   - all objects are shared, lots of potential error conditions
The error conditions I could think of are:
- change state of an object while a task is running, hence having the task do something different than intended

Change of state, like destruction of objects or modification of objects. Not to speak of synchronization conditions: suppose you have non-atomic write operations (which is everything that writes more than a single word to memory): do you then also enforce object locking? If not, one task can see inconsistent object state, just because another task is halfway through writing the object... (all classical problems of shared-memory communication apply)
- limited control over resource deallocation
this is the same thing as above
The problem really is that there is no "object lifecycle" defined. There is no way to define which task or thread might be responsible or even allowed to destroy objects or change objects. Is it???
The advantages I see:
- no copy overhead (but, as you say, that is of no concern really)
ok, but minor point.
- simple, clearly defined semantics

No, it is the most dangerous of the three versions.
- tasks keep objects they operate on alive
- objects keep sessions they live in alive
- sessions keep contexts they use alive
What is the meaning of "alive" here??? Now that you have ruled out memory management...
- sync and async operations operate on the same object instance.
Let's forget about "sync" here: it is the task that is running in the current thread, so multiple tasks share object instances.
Either way (1, 2 or 3), we have to have the user of the API thinking while using it - none of them is idiot-proof.
Well, we should strive to limit the mental load on the programmer as much as possible...
I think (2) is most problematic, if I understand your 'hand-over' correctly: that would mean you can't use the object again until the task has finished?
No, it means you will never ever again be allowed to use these objects. (hand over includes the hand over of the responsibility to clean up...)
Thilo -- Thilo Kielmann http://www.cs.vu.nl/~kielmann/
----- End forwarded message -----