
Steve, I snipped a part. Steve Loughran wrote:
> As long as references to content can be supplied as URLs that the programs/JVMs on the hosts can resolve, then we could do (2) and (3) without a CDDLM implementation needing to know how those URLs are supported. That does imply HTTP/FTP/file, but both .NET and Java have a way of letting you plug in new URL handlers. If you had a new URL scheme, something like
>
>   acs://app/124/component/12
>
> then we could handle it, though I would strongly advocate using HTTP as the way of retrieving things. Not only do all apps support it out of the box, it is also easier to debug in a web browser.
Does this mean that HTTP is your first recommendation for how the components pull the files?
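For reference, the plug-in mechanism you mention looks roughly like this in Java. This is only a minimal sketch; the mapping of the acs scheme onto a repository HTTP endpoint (repository.example.org) is my assumption, not anything from either spec:

    import java.io.IOException;
    import java.net.URL;
    import java.net.URLConnection;
    import java.net.URLStreamHandler;

    // Handler for a hypothetical "acs" scheme: resolves acs:// references
    // by rewriting them onto a plain HTTP URL and delegating to the
    // built-in http handler.
    public class AcsHandler extends URLStreamHandler {
        // Assumed repository location, not part of any spec.
        private static final String REPOSITORY_BASE = "http://repository.example.org";

        @Override
        protected URLConnection openConnection(URL u) throws IOException {
            // acs://app/124/component/12
            //   -> http://repository.example.org/app/124/component/12
            URL http = new URL(REPOSITORY_BASE + "/" + u.getAuthority() + u.getPath());
            return http.openConnection();
        }
    }

    // Register once per JVM, before the first acs:// URL is parsed:
    // URL.setURLStreamHandlerFactory(
    //         protocol -> "acs".equals(protocol) ? new AcsHandler() : null);

.NET has a similar hook in WebRequest.RegisterPrefix, so the same trick should work on both platforms.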
> There are two more use cases:
>
> - your asset store is used as the back end by an implementation of the CDDLM services. That is, someone uses <addFile> to add a file, and the request is forwarded to the ACS repository to add a new file to the application. Would that work?
I think it is among the doable possibilities. However, we understood <addFile> to be described as an interim solution that need not be used when an external asset store is available; the asset store can have its own repository interface other than <addFile>. Your point is to keep <addFile> common among the implementations, with the components using HTTP to pull things from there. So you expect external repositories to implement <addFile> as a native interface. Is this a correct understanding?
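To make sure I read you correctly, the forwarding I have in mind is something like the sketch below. All the names here are hypothetical; the real <addFile> is a service operation and the ACS repository interface is still under discussion:

    import java.io.InputStream;

    // Hypothetical ACS repository client; the actual interface is TBD.
    interface AcsRepository {
        void store(String applicationId, String path, InputStream content);
    }

    // A CDDLM service endpoint that implements <addFile> by forwarding
    // the request to an external ACS asset store instead of local storage.
    class ForwardingFileStore {
        private final AcsRepository repository;
        private final String applicationId;

        ForwardingFileStore(AcsRepository repository, String applicationId) {
            this.repository = repository;
            this.applicationId = applicationId;
        }

        /** Handles an <addFile> request by storing the payload in the ACS
         *  repository and returning an HTTP URL the components can pull from. */
        String addFile(String path, InputStream content) {
            repository.store(applicationId, path, content);
            // Assumed URL layout; components retrieve content over plain HTTP.
            return "http://repository.example.org/" + applicationId + "/" + path;
        }
    }

If that matches your intent, the deployment service stays the single front door for uploads, while retrieval stays plain HTTP.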
> - the results of a job are somehow placed into the asset store, for later retrieval by the submitter. This is out of the scope of CDDLM; whatever you deploy needs to handle that submission process.
We have discussed storing the "output" of a job, but that is pending, since the output can vary per execution. I personally doubt that it is sufficiently persistent or stable information to be worth storing in the repository.
> Asset stores have been a trouble spot for me in the past; they have caused inordinate amounts of problems, at least when you are trying to use one built on MSSQL and classic IIS/ASP. Here are some things that I recall being troublesome:
>
> - many file systems cannot have > 65535 files in a single dir, so you had better not use too flat a filesystem
> - if the NAS filestore is set to the GMT tz and the server to PST, it doesn't make any difference whether or not the clocks themselves are synchronized; the auto-cleanup process is going to delete new files under the assumption that they are out of date
> - it's very hard to secure stuff
> - any HTTP data provider must support HTTP/1.1, or at least Content-Length headers, so that the caller can determine whether the supplied content was incomplete
> As with most things, everything worked in development; it is only when you go to production, put the asset store 1200 km away from the rendering service, and keep the files on a different host from the database that things start to go wrong.
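Your last point about Content-Length is easy to act on from the caller's side. A small Java sketch of the kind of check I assume you mean; nothing here is CDDLM-specific:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CompleteDownloadCheck {
        /** Reads the whole response body and fails if fewer bytes arrive
         *  than the Content-Length header promised, i.e. truncation. */
        static byte[] fetchFully(URL url) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            long expected = conn.getContentLengthLong(); // -1 if header absent
            try (InputStream in = conn.getInputStream();
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                byte[] buffer = new byte[8192];
                int n;
                while ((n = in.read(buffer)) != -1) {
                    out.write(buffer, 0, n);
                }
                if (expected >= 0 && out.size() != expected) {
                    throw new IOException("Incomplete download: got "
                            + out.size() + " bytes, expected " + expected);
                }
                return out.toByteArray();
            } finally {
                conn.disconnect();
            }
        }
    }

Of course this only helps when the provider actually sends the header, which is your point about requiring HTTP/1.1.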
Let us think about HTTP a little more. I'm looking forward to seeing you at our joint session.

-Keisuke