In preparation for the CDDLM/ACS joint session at GGF14

Folks in the cddlm-wg,

As you know, we will have a joint session between CDDLM and ACS at GGF14. This is in preparation for that session from the acs-wg. After GGF13, within the acs-wg we studied and discussed the possible interactions between CDDLM and ACS, especially in terms of the "File upload" section and AddFile() in the deployment API document. The sequence diagrams in the attachment describe our interpretation of how CDDLM works, and our proposal for the possible interactions in the case that ACS co-exists in the system. We believe our proposal is consistent with what the current set of CDDLM specifications defines and does not require changes to the original definitions.

We are looking forward to discussing this at the CDDLM/ACS joint session at GGF14. We would very much appreciate responses before the joint session, in case an important oversight in our understanding is found. Please feel free to make comments or ask questions.

FYI, at the GGF14 joint session we'd like to start our discussion with this diagram and, if we don't find critical issues, we may go down to the details of the interface and/or advanced interactions. We'd also like to hear requirements on ACS in terms of event notification and asynchronous invocation of the interfaces, time permitting.

Thanks in advance for your efforts on this!

Best Regards, Keisuke Fukui ACS-WG

Keisuke Fukui wrote:
Folks in the cddlm-wg,
As you know, we will have a joint session between CDDLM and ACS at GGF14. This is in preparation for that session from the acs-wg. After GGF13, within the acs-wg we studied and discussed the possible interactions between CDDLM and ACS, especially in terms of the "File upload" section and AddFile() in the deployment API document. The sequence diagrams in the attachment describe our interpretation of how CDDLM works, and our proposal for the possible interactions in the case that ACS co-exists in the system. We believe our proposal is consistent with what the current set of CDDLM specifications defines and does not require changes to the original definitions.
We are looking forward to discussing this at the CDDLM/ACS joint session at GGF14. We would very much appreciate responses before the joint session, in case an important oversight in our understanding is found. Please feel free to make comments or ask questions.
FYI, at the GGF14 joint session we'd like to start our discussion with this diagram and, if we don't find critical issues, we may go down to the details of the interface and/or advanced interactions. We'd also like to hear requirements on ACS in terms of event notification and asynchronous invocation of the interfaces, time permitting.
Thanks in advance for your efforts on this!
Best Regards, Keisuke Fukui ACS-WG
thank you, I will comment briefly.

- The File upload stuff was very much written to be a transient/interim solution in the absence of a real repository, which is why it is so minimal and barely functional. I didn't want to be dependent upon anything not yet designed, but didn't want to do a repository myself.

- The current revision allows the sender to declare the URL scheme to use. That could be file: for a shared filesystem, and http: or https: for HTTP access. It could also be fancy custom stuff; there is some Java project whose name escapes me that provides a multicast URL resolution system, so the file could be stored across multiple machines without ever having to give them a hostname, which is very good for fault tolerance.

- There is also support for adding metadata to a request, but nothing to do searches, retrieval, or even introspection on what is supported. I've left that for implementations.

- There is the perennial problem of how to get files up over SOAP. SwA is only available in Java distributions, and then not consistently; DIME is in .NET WSE and Axis 1.x, but even more unpopular. As for MTOM, well, who implements that yet? As a workaround I've put in support for having the endpoint actually retrieve the files itself, but this is a bit unsatisfactory, because it requires the files to be broadly visible on the 'net, and introduces race conditions. Otherwise, data goes inline in base-64 encoded form, unless/until MTOM lets you pretend that the file attached to the message is really inline base 64.
here are two example messages and responses from the XSD-validation tests:

<api:addFileRequest>
  <api:name>urn://45</api:name>
  <api:mimetype>application/x-pdf</api:mimetype>
  <api:schema>file</api:schema>
  <api:uri>http://example.org/files/source.pdf</api:uri>
</api:addFileRequest>

<api:addFileResponse>
  <item>file://nas1/temp/source.pdf</item>
  <item>file://nas2/4fgdbb.tmp</item>
</api:addFileResponse>

<api:addFileRequest>
  <api:name>urn://46</api:name>
  <api:mimetype>application/x-ssh-key</api:mimetype>
  <api:schema>http</api:schema>
  <api:data>
    AAAAB3NzaC1yc2EAAAABIwAAAIEAwVmUkPzXdWEyJZ8nCR8GvdrDtO00RI4Z
    Bg3Gyviuz5IrWj2C6b2BdcKn+S/swDV1fiEFY4+ewYHUfmg+UKm2T8Lfksjn
    Hinks0GoVvkwy3bF48U5yVk1akAzR5YbSLJa6Naj8XS9681xVzWpbjxrV3KR
    QNWvEqI0MqRE34MzT4M=
  </api:data>
  <api:metadata>
    <x:expires date="2005-07-18" xmlns:x="http://example.org/expiry" />
  </api:metadata>
</api:addFileRequest>

<api:addFileResponse>
  <item>http://server/job5/files/1</item>
</api:addFileResponse>

On a related note, I see that Fujitsu are listed as one of the interested parties in JSR 277, the Java modules/repository proposal. Are you involved in that?

-steve
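The inline base-64 fallback described above (data carried in an <api:data> element) is easy to sketch in Java. This is illustrative only: encodeForDataElement is a hypothetical helper, not part of the CDDLM API.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

/** Sketch of preparing inline file content for an addFileRequest,
 *  assuming base-64 in an <api:data> element as in the sample above.
 *  encodeForDataElement is a hypothetical helper, not CDDLM API. */
public class InlineData {
    static String encodeForDataElement(byte[] fileBytes) {
        // Plain base-64; a real sender might use Base64.getMimeEncoder()
        // to wrap lines, as the sample ssh-key payload above is wrapped.
        return Base64.getEncoder().encodeToString(fileBytes);
    }

    public static void main(String[] args) {
        byte[] content = "hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(encodeForDataElement(content)); // prints aGVsbG8=
    }
}
```

On the wire, the result would sit inside <api:data>...</api:data> in place of a URI.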

Hi Steve,

Thanks for your comments. Although not complete, here are some comments:

- We understand the File upload stuff is meant to be transient/interim and a real repository is expected.

- We have one use case with a commercial data center for ACS, where clients exist outside of the data center and submit their task to the system; the task is then deployed and executed in the system. In this case, pulling the data from outside the system might incur long and/or nondeterministic latency for a component. Thus, we assumed it is more reasonable to have a data repository inside the system, even though it may not be what the CDDLM specification requires. We assumed that the purpose of having AddFile is to "push" the data from outside to inside of the system in advance. Then components can "pull" the data more easily from there.

- As such, the file server in the case 1 diagram may not be a primary or sole implementation of CDDLM, but we understand it is among the possible implementations of a larger system under the current CDDLM specification. (We may have had to color it neutral rather than light blue.)

- I understood all topics are about the case 1 diagram. Do you have any issues with case 2 or 3?

BTW, thanks for the heads-up on JSR 277. We will keep an eye on it. It sounds like an improvement of the jar itself and pretty much Java-focused.

-Keisuke

Steve Loughran wrote:
Keisuke Fukui wrote:
Folks in the cddlm-wg,
As you know, we will have a joint session between CDDLM and ACS at GGF14. This is in preparation for that session from the acs-wg. After GGF13, within the acs-wg we studied and discussed the possible interactions between CDDLM and ACS, especially in terms of the "File upload" section and AddFile() in the deployment API document. The sequence diagrams in the attachment describe our interpretation of how CDDLM works, and our proposal for the possible interactions in the case that ACS co-exists in the system. We believe our proposal is consistent with what the current set of CDDLM specifications defines and does not require changes to the original definitions.
We are looking forward to discussing this at the CDDLM/ACS joint session at GGF14. We would very much appreciate responses before the joint session, in case an important oversight in our understanding is found. Please feel free to make comments or ask questions.
FYI, at the GGF14 joint session we'd like to start our discussion with this diagram and, if we don't find critical issues, we may go down to the details of the interface and/or advanced interactions. We'd also like to hear requirements on ACS in terms of event notification and asynchronous invocation of the interfaces, time permitting.
Thanks in advance for your efforts on this!
Best Regards, Keisuke Fukui ACS-WG
thank you, I will comment briefly.
- The File upload stuff was very much written to be a transient/interim solution in the absence of a real repository, which is why it is so minimal and barely functional. I didn't want to be dependent upon anything not yet designed, but didn't want to do a repository myself.
- The current revision allows the sender to declare the URL scheme to use. That could be file: for a shared filesystem, and http: or https: for HTTP access. It could also be fancy custom stuff; there is some Java project whose name escapes me that provides a multicast URL resolution system, so the file could be stored across multiple machines without ever having to give them a hostname, which is very good for fault tolerance.
- There is also support for adding metadata to a request, but nothing to do searches, retrieval, or even introspection on what is supported. I've left that for implementations.
- There is the perennial problem of how to get files up over SOAP. SwA is only available in Java distributions, and then not consistently; DIME is in .NET WSE and Axis 1.x, but even more unpopular. As for MTOM, well, who implements that yet?
As a workaround I've put in support for having the endpoint actually retrieve the files itself, but this is a bit unsatisfactory, because it requires the files to be broadly visible on the 'net, and introduces race conditions. Otherwise, data goes inline in base-64 encoded form, unless/until MTOM lets you pretend that the file attached to the message is really inline base 64.
here are two example messages and responses from the XSD-validation tests:
<api:addFileRequest>
  <api:name>urn://45</api:name>
  <api:mimetype>application/x-pdf</api:mimetype>
  <api:schema>file</api:schema>
  <api:uri>http://example.org/files/source.pdf</api:uri>
</api:addFileRequest>

<api:addFileResponse>
  <item>file://nas1/temp/source.pdf</item>
  <item>file://nas2/4fgdbb.tmp</item>
</api:addFileResponse>

<api:addFileRequest>
  <api:name>urn://46</api:name>
  <api:mimetype>application/x-ssh-key</api:mimetype>
  <api:schema>http</api:schema>
  <api:data>
    AAAAB3NzaC1yc2EAAAABIwAAAIEAwVmUkPzXdWEyJZ8nCR8GvdrDtO00RI4Z
    Bg3Gyviuz5IrWj2C6b2BdcKn+S/swDV1fiEFY4+ewYHUfmg+UKm2T8Lfksjn
    Hinks0GoVvkwy3bF48U5yVk1akAzR5YbSLJa6Naj8XS9681xVzWpbjxrV3KR
    QNWvEqI0MqRE34MzT4M=
  </api:data>
  <api:metadata>
    <x:expires date="2005-07-18" xmlns:x="http://example.org/expiry" />
  </api:metadata>
</api:addFileRequest>

<api:addFileResponse>
  <item>http://server/job5/files/1</item>
</api:addFileResponse>
On a related note, I see that Fujitsu are listed as one of the interested parties in JSR 277, the Java modules/repository proposal. Are you involved in that?
-steve

Keisuke Fukui wrote:
Hi Steve,
Thanks for your comments. Although not complete, here are some comments:
- We understand the File upload stuff is meant to be transient/interim and a real repository is expected.
- We have one use case with a commercial data center for ACS, where clients exist outside of the data center and submit their task to the system; the task is then deployed and executed in the system. In this case, pulling the data from outside the system might incur long and/or nondeterministic latency for a component. Thus, we assumed it is more reasonable to have a data repository inside the system, even though it may not be what the CDDLM specification requires. We assumed that the purpose of having AddFile is to "push" the data from outside to inside of the system in advance. Then components can "pull" the data more easily from there.
- As such, the file server in the case 1 diagram may not be a primary or sole implementation of CDDLM, but we understand it is among the possible implementations of a larger system under the current CDDLM specification. (We may have had to color it neutral rather than light blue.)
- I understood all topics are about the case 1 diagram. Do you have any issues with case 2 or 3?
BTW, thanks for the heads-up on JSR 277. We will keep an eye on it. It sounds like an improvement of the jar itself and pretty much Java-focused.
-Keisuke
Yes, having a real repository in the data centre is how things should really work.

As long as references to content can be supplied as URLs that the programs/JVMs on the hosts can resolve, then we could do (2) and (3) without a CDDLM implementation needing to know how those URLs are supported. That does imply HTTP/FTP/file, but both .NET and Java have a way of letting you plug in new URL handlers. If you had a new URL, something like

  acs://app/124/component/12

then we could handle it, though I would strongly advocate using HTTP as the way of retrieving things: not only do all apps support it out of the box, it is easier to debug in a web browser.

There are two more use cases:

- your asset store is used as the back end by an implementation of the CDDLM services. That is, someone uses <addFile> to add a file, and the request is forwarded to the ACS repository to add a new file to the application. Would that work?

- the results of a job are somehow placed into the asset store, for later retrieval by the submitter. This is out of the scope of CDDLM; whatever you deploy needs to handle that submission process.

Asset stores have been a trouble spot for me in the past; they have caused inordinate amounts of problems, at least when you are trying to use one built on MSSQL and classic IIS/ASP. Here are some things that I recall being troublesome:

- many file systems cannot have > 65535 files in a single dir, so you had better not use too flat a filesystem
- if the NAS filestore is set to the GMT tz and the server to PST, it doesn't make any difference whether or not the clocks themselves are synchronized; the auto-cleanup process is going to delete new files under the assumption that they are out of date
- it's very hard to secure stuff
- any HTTP data provider must support HTTP/1.1 or at least Content-Length headers, so that the caller can determine whether the supplied content was incomplete

As with most things, everything worked in development; it is only when you go to production, put the asset store 1200 km away from the rendering service, and keep the files on a different host from the database, that things start to go wrong.

-steve
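The pluggable URL handlers mentioned above can be sketched for a hypothetical acs: scheme. Everything here is an assumption for illustration: the acs://app/124/component/12 layout and the repository base URL are invented, and AcsUrlHandler is not part of any spec; the sketch just shows retrieval being delegated to HTTP, as advocated above.

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

/** Sketch of a pluggable handler for a hypothetical acs: scheme.
 *  The URL layout and repository base are illustrative assumptions. */
public class AcsUrlHandler extends URLStreamHandler {
    static final String REPOSITORY_BASE = "http://repository.example.org/";

    /** Rewrite acs://app/124/component/12 into an HTTP retrieval URL. */
    static String resolve(String acsUrl) {
        if (!acsUrl.startsWith("acs://"))
            throw new IllegalArgumentException("not an acs: URL: " + acsUrl);
        return REPOSITORY_BASE + acsUrl.substring("acs://".length());
    }

    @Override
    protected URLConnection openConnection(URL u) throws IOException {
        // Delegate the actual fetch to plain HTTP.
        return new URL(resolve(u.toString())).openConnection();
    }
}
```

A JVM would register such a handler via the java.protocol.handler.pkgs system property or URL.setURLStreamHandlerFactory; either way, the deploying component just sees a URL it can open.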

Steve,

I snipped a part.

Steve Loughran wrote:
As long as references to content can be supplied as URLs that the programs/JVMs on the hosts can resolve, then we could do (2) and (3) without a CDDLM implementation needing to know how those URLs are supported. That does imply HTTP/FTP/file, but both .NET and Java have a way of letting you plug in new URL handlers. If you had a new URL, something like
acs://app/124/component/12
then we could handle it, though I would strongly advocate using HTTP as the way of retrieving things: not only do all apps support it out of the box, it is easier to debug in a web browser.
Does this mean HTTP is your first recommendation for components to pull the files?
There are two more use cases,
-your asset store is used as the back end by an implementation of the CDDLM services. That is, someone uses <addFile> to add a file, and the request is forwarded to the ACS repository to add a new file to the application. Would that work?
I think it's among the doable possibilities. We however understood <addFile> is described as an interim solution, which need not be used if an external asset store is used. The asset store can have its own repository interface other than <addFile>. Your point is to keep <addFile> common among the implementations, with the components using HTTP to pull things from there. So you expect external repositories to implement <addFile> as a native interface. Is this a correct understanding?
- the results of a job are somehow placed into the asset store, for later retrieval by the submitter. This is out of the scope of CDDLM; whatever you deploy needs to handle that submission process.
We have discussed storing the "output" of the job, but that is pending, since the output can vary per execution. I personally doubt whether this is sufficiently persistent or stable information to be worth storing in the repository.
Asset stores have been a trouble spot for me in the past; they have caused inordinate amounts of problems, at least when you are trying to use one built on MSSQL and classic IIS/ASP. Here are some things that I recall being troublesome:
- many file systems cannot have > 65535 files in a single dir, so you had better not use too flat a filesystem
- if the NAS filestore is set to the GMT tz and the server to PST, it doesn't make any difference whether or not the clocks themselves are synchronized; the auto-cleanup process is going to delete new files under the assumption that they are out of date
- it's very hard to secure stuff
- any HTTP data provider must support HTTP/1.1 or at least Content-Length headers, so that the caller can determine whether the supplied content was incomplete
As with most things, everything worked in development; it is only when you go to production, put the asset store 1200 km away from the rendering service, and keep the files on a different host from the database, that things start to go wrong.
Let us think about HTTP a little more. I'm looking forward to seeing you at our joint session. -Keisuke
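The flat-directory limit in the quoted list of asset-store pitfalls above is commonly dodged by hashing names into a nested directory tree. A sketch, where the two-level 256x256 fan-out is an arbitrary illustrative choice, not anything either working group has specified:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Sketch: hash a file name into a two-level directory tree so that
 *  no single directory grows past filesystem limits (e.g. 65535
 *  entries). The 256x256 fan-out is an arbitrary choice. */
public class ShardedStore {
    static String shardPath(String name) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-1")
                                    .digest(name.getBytes(StandardCharsets.UTF_8));
            // Two hash-derived directory levels, then the file itself.
            return String.format("%02x/%02x/%s", d[0] & 0xff, d[1] & 0xff, name);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 is mandatory in the JDK
        }
    }
}
```

Because the path is a pure function of the name, any node sharing the filesystem can compute where a file lives without a lookup service.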

Keisuke Fukui wrote:
Steve,
I snipped a part.
Steve Loughran wrote:
As long as references to content can be supplied as URLs that the programs/JVMs on the hosts can resolve, then we could do (2) and (3) without a CDDLM implementation needing to know how those URLs are supported. That does imply HTTP/FTP/file, but both .NET and Java have a way of letting you plug in new URL handlers. If you had a new URL, something like
acs://app/124/component/12
then we could handle it, though I would strongly advocate using HTTP as the way of retrieving things: not only do all apps support it out of the box, it is easier to debug in a web browser.
Does this mean HTTP is your first recommendation for components to pull the files?
if you are deploying to a fabric with a shared filesystem, my preference is for file://, because any client that looks for file: URLs will know it won't need to download and cache stuff.

otherwise, HTTP is good because (a) most things know about it, (b) it is easy to debug behaviour by hand just by constructing the URL and tapping it into your browser, and (c) it goes through firewalls for remote download.

smartfrog already uses the maven2 repository, initially at build time, but later on at runtime, where you can declare dependencies on different versions of files. In the build I declare the files I want, and the default versions:

  <target name="m2-files" depends="m2-tasks">
    <m2-libraries pathID="m2.classpath">
      <dependency groupID="org.ggf" artifactID="cddlm" version="${cddlm.version}"/>
      <dependency groupID="commons-lang" artifactID="commons-lang" version="${commons-lang.version}"/>
      <dependency groupID="commons-logging" artifactID="commons-logging-api" version="${commons-logging.version}"/>
      <dependency groupID="log4j" artifactID="log4j" version="${log4j.version}"/>
      <dependency groupID="org.smartfrog" artifactID="sf-xml" version="${Version}"/>
      <dependency groupID="xom" artifactID="xom" version="${xom.version}"/>
      <dependency groupID="xalan" artifactID="xalan" version="${xalan.version}"/>
    </m2-libraries>
  </target>

There is a properties file somewhere that sets the version of everything, but you can override this on a particular machine, which is good for a staged adoption of a new file version.
At deploy-time, we can also parse a descriptor to build up components that are nothing but declarations of dependencies. First I declare a library with the default settings (directory is ${user.home}/.maven2/repository; layout policy is maven2):

  library extends Maven2Library {
  }

then I declare multiple artifacts that come from that repository, with version and checksums:

  commons-logging extends JarArtifact {
    library LAZY PARENT:library;
    project "commons-logging";
    version "1.0.4";
    sha1 "f029a2aefe2b3e1517573c580f948caac31b1056";
    md5 "8a507817b28077e0478add944c64586a";
  }

  axis extends JarArtifact {
    library LAZY PARENT:library;
    project "axis";
    version "1.1";
    sha1 "edd84c96eac48d4167bca4f45e7d36dcf36cf871";
  }

finally I can declare a component that uses them:

  tcpmonitor extends Java {
    classname "org.apache.axis.utils.tcpmon";
    classpath [LAZY axis:absolutePath, LAZY commons-logging:absolutePath];
  }

This particular repository caches stuff locally and generates file:// references. Any program on the local system can share the same files, so there is a lot less downloading than otherwise, and better offline support.

Maven2 also does transitive dependencies, but I have disabled that in smartfrog, as I do not believe that the developers know best. For example, XOM depends on Jaxen, and that depends on dom4j; with transitive dependencies I'd get stuff on my classpath that I do not want, namely dom4j.

There is another thing to think about, which is using WebDAV as a means of uploading stuff. I have mixed feelings about this, but note that it is standard in the content management system world, as so many tools are WebDAV-aware, up to and including the WinXP filesystem (which lets you mount WebDAV trees as drives with a NET USE x: \\repository.example.org\tree)
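The sha1/md5 attributes in the descriptors above imply a check at download time: hash the retrieved artifact and compare it with the declared digest. A minimal sketch of that check; ChecksumCheck is a hypothetical helper, not SmartFrog's actual implementation.

```java
import java.security.MessageDigest;

/** Sketch of verifying a downloaded artifact against a declared
 *  checksum, as the sha1/md5 attributes above imply. Hypothetical
 *  helper, not SmartFrog's actual code. */
public class ChecksumCheck {
    /** Hex digest of the content under the given algorithm, e.g. "SHA-1" or "MD5". */
    static String hexDigest(String algorithm, byte[] content) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(content))
            hex.append(String.format("%02x", b & 0xff));
        return hex.toString();
    }

    /** Case-insensitive compare against the digest declared in the descriptor. */
    static boolean matches(String algorithm, byte[] content, String declared) throws Exception {
        return hexDigest(algorithm, content).equalsIgnoreCase(declared);
    }
}
```

A cache would typically run this once on download and trust the local copy afterwards, so the cost is paid only when a new version arrives.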
There are two more use cases,
-your asset store is used as the back end by an implementation of the CDDLM services. That is, someone uses <addFile> to add a file, and the request is forwarded to the ACS repository to add a new file to the application. Would that work?
I think it's among the doable possibilities. We however understood <addFile> is described as an interim solution, which need not be used if an external asset store is used. The asset store can have its own repository interface other than <addFile>.
Your point is to keep <addFile> common among the implementations, with the components using HTTP to pull things from there. So you expect external repositories to implement <addFile> as a native interface. Is this a correct understanding?
No, I'd expect a CDDLM service to be written to offload all repository work to an ACS asset store, using whatever operations they mutually agreed on.
- the results of a job are somehow placed into the asset store, for later retrieval by the submitter. This is out of the scope of CDDLM; whatever you deploy needs to handle that submission process.
We have discussed storing the "output" of the job, but that is pending, since the output can vary per execution. I personally doubt whether this is sufficiently persistent or stable information to be worth storing in the repository.
Asset stores have been a trouble spot for me in the past; they have caused inordinate amounts of problems, at least when you are trying to use one built on MSSQL and classic IIS/ASP. Here are some things that I recall being troublesome:
- many file systems cannot have > 65535 files in a single dir, so you had better not use too flat a filesystem
- if the NAS filestore is set to the GMT tz and the server to PST, it doesn't make any difference whether or not the clocks themselves are synchronized; the auto-cleanup process is going to delete new files under the assumption that they are out of date
- it's very hard to secure stuff
- any HTTP data provider must support HTTP/1.1 or at least Content-Length headers, so that the caller can determine whether the supplied content was incomplete
As with most things, everything worked in development; it is only when you go to production, put the asset store 1200 km away from the rendering service, and keep the files on a different host from the database, that things start to go wrong.
Let us think about HTTP a little more.
My experiences in the past are all well documented: http://www.iseran.com/Steve/papers/when_web_services_go_bad.html

HTTP/1.1 download is more universal and useful than SOAP-based download. Upload is a more complex beast, and search even more troublesome, but URL-based retrieval is simple and effective. In that maven2 example above, we transform (project-name, version, artifact-name, artifact-extension) to something like

  http://ibiblio.org/maven2/${project-name}/${artifact-name}/${version}/${artifact-name}.${artifact-extension}

for the artifact, and

  http://ibiblio.org/maven2/${project-name}/${artifact-name}/${version}/${artifact-name}.pom

for the metadata, including dependency info. The rules are simple, and you can browse by hand to see what is there, for example under http://www.ibiblio.org/maven2/xom/xom/1.1b2/ to see the stuff for that version.

-steve

Steve,

Thanks for your update on this. Considering your comments below, we discussed a WS interface utilizing the URI identification as you proposed. Please take a look at the attachment to this e-mail. It depicts how the ACS Repository Interface GetContents() can work with multiple protocols. It is very similar to AddFile() in that it returns a URI. In our interface, Register(), which pushes files into the repository, returns an EPR to the entry; then GetContents(), with a protocol and keys to the subparts of the archive entry, selects what files can be retrieved from the repository and how. It allows HTTP to be used among the other protocols supported by the implementation of the repository.

Do you think this works for your use case?

-Keisuke

Steve Loughran wrote:
if you are deploying to a fabric with a shared filesystem, my preference is for file://, because any client that looks for file: urls will know it wont need to download and cache stuff.
otherwise, HTTP is good because (a) most things know about it (b) it is easy to debug behaviour by hand just by constructing the URL and tapping it in to your browser. (c) it goes through firewalls for remote download.

Steve,

# Let me re-send this since I forgot the attachment :-)

Thanks for your update on this. Considering your comments below, we discussed a WS interface utilizing the URI identification as you proposed. Please take a look at the attachment to this e-mail. It depicts how the ACS Repository Interface GetContents() can work with multiple protocols. It is very similar to AddFile() in that it returns a URI. In our interface, Register(), which pushes files into the repository, returns an EPR to the entry; then GetContents(), with a protocol and keys to the subparts of the archive entry, selects what files can be retrieved from the repository and how. It allows HTTP to be used among the other protocols supported by the implementation of the repository.

Do you think this works for your use case?

-Keisuke

Steve Loughran wrote:
if you are deploying to a fabric with a shared filesystem, my preference is for file://, because any client that looks for file: urls will know it wont need to download and cache stuff.
otherwise, HTTP is good because (a) most things know about it (b) it is easy to debug behaviour by hand just by constructing the URL and tapping it in to your browser. (c) it goes through firewalls for remote download.

Keisuke Fukui wrote:
Steve,
# Let me re-send this since I forgot the attachment :-)
Thanks for your update on this.
Considering your comments below, we discussed a WS interface utilizing the URI identification as you proposed. Please take a look at the attachment to this e-mail. It depicts how the ACS Repository Interface GetContents() can work with multiple protocols. It is very similar to AddFile() in that it returns a URI. In our interface, Register(), which pushes files into the repository, returns an EPR to the entry; then GetContents(), with a protocol and keys to the subparts of the archive entry, selects what files can be retrieved from the repository and how. It allows HTTP to be used among the other protocols supported by the implementation of the repository.
Do you think this works for your use case?
-Keisuke
slide 1: Yes, this should work. I like the listing of supported protocols; it eliminates much uncertainty.

slide 2: the use case should show that the HTTP request is being made by a separate application (not the client), which GETs the URL that is extracted from the GetContents response message and then handed to the application. This emphasises that an application at a different location (and with no cookie history, a different IP address, ...) will be doing the retrieval.

slide 3: The contents of the fetch should include a message which contains an identifier/URI for the actual attachment. Otherwise the app is assuming that only one attachment comes in a message, which is not always the case. MTOM is the future of SOAP attachments, even if support today is patchy-to-nonexistent. Its operation should look similar, except that by the time the data comes through the SOAP stack it will appear to be an inline base64 element of very large size (and presumably with different copy/clone semantics, or the implementation will leak files the way Axis 1.x does). I'd recommend a response message that can take either a URI or inline data, so the same request/response can work for the different types.

4. Caching/performance: if the URL-fetching app is written with the assumption that the repository is some distance away, or intermittently unreachable, it will want to download the resource and cache it. So an HTTP HEAD request may be needed, or a GET with If-Modified-Since (or better yet, an ETag).

-steve
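The conditional-fetch idea in point 4 above looks like this with java.net. The helper names are illustrative; the sketch sends the validators saved from the previously cached copy, and a 304 response means the cache is still good.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of cache revalidation as in point 4 above. Helper names
 *  (validators, stillFresh) are illustrative, not from any spec. */
public class ConditionalGet {
    /** Build validator headers from what we cached last time. */
    static Map<String, String> validators(String etag, String lastModified) {
        Map<String, String> h = new LinkedHashMap<>();
        if (etag != null) h.put("If-None-Match", etag);
        if (lastModified != null) h.put("If-Modified-Since", lastModified);
        return h;
    }

    /** True if the server says our cached copy is still current. */
    static boolean stillFresh(URL url, String etag, String lastModified) throws Exception {
        HttpURLConnection c = (HttpURLConnection) url.openConnection();
        validators(etag, lastModified).forEach(c::setRequestProperty);
        return c.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED; // 304
    }
}
```

ETags are the stronger validator of the two, since If-Modified-Since has only one-second resolution and depends on the server's clock.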

Hi Steve,

I think I got your point on slide 2. Thanks for the heads-up. I modified the diagram so that it conveys the intent more precisely. GetResourceProperty can be called by either the client or the component. I assumed that, in preparation for registering the application contents, the client will make the call and prepare for it in the components. Does it look better in terms of your comments?

As for the comments on slide 3, I'm not very sure about your proposal. I understand that we can include multiple files as attachments using this framework. It may include the appropriate number of URIs needed, which refer to the attached contents. We need more study on this, but we do not intend to generate any extension to the existing standards in this area.

Steve Loughran wrote:
4. Caching/performance: if the URL-fetching app is written with the assumption that the repository is some distance away, or intermittently unreachable, it will want to download the resource and cache it. So an HTTP HEAD request may be needed, or a GET with If-Modified-Since (or better yet, an ETag).
I guess the ACS repository would be the one that offers a local cache in the system. That's largely the motivation for having the ACS standards. -Keisuke
participants (2)
-
Keisuke Fukui
-
Steve Loughran