
Sam Johnston wrote:
Gary,
The UML doesn't reflect the reality of the AtomPub meta-model, which is in fact far more flexible. I don't care if you call your logical collections "groups", "clusters", "virtual data centers" or "gaggles".
I assumed it's more flexible, but I wasn't sure of the terminology and division of responsibilities. The DMTF model included virtual metal like network adapters and HBAs. Others define disk drives and disk files. I don't see anyone managing QoS for virtual metal. The problem with all this virtual metal is that it exposes the underlying fabric, or a virtualized view of the underlying fabric: switches, gateways, routers, paths, storage multipathing, disk arrays and LUNs. It can be a can o' worms. We still need to figure out how to handle an IP address in one cloud provider on one network segment migrating to another provider on another network segment.
The categories have terms ("win2k3"), labels ("Windows 2003 Server") and schemes ("http://purl.org/occi/category#os") and the very first category we would be looking to define is of course the resource types (what Google call "kinds") - that is, "compute", "storage", "network" (and according to NIST's draft definition, in future "application" and "service" - though I'm not sure about that).
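For illustration, the term/label/scheme triple above maps directly onto Atom's category element. The first example uses the exact values from this thread; the second shows how a resource type ("kind") could be expressed the same way, though the "#kind" scheme URI is only a hypothetical placeholder, not anything agreed:

```xml
<!-- The example category from this thread, rendered as an Atom category -->
<category scheme="http://purl.org/occi/category#os"
          term="win2k3"
          label="Windows 2003 Server"/>

<!-- A resource type could reuse the same mechanism; the scheme URI
     here is illustrative only -->
<category scheme="http://purl.org/occi/category#kind"
          term="compute"
          label="Compute Resource"/>
```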
In part, I'm trying to itemize the terms/properties.
Hierarchical grouping is something to talk about - do we use something like a '/' separator ala /myvdc/myvrack, have pointers to "parent" objects, or something else.
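The two options above can be sketched side by side. This is only an illustration of the data shapes under discussion, reusing the /myvdc/myvrack example from this thread; none of it is proposed wire format:

```python
# Option 1: a '/'-separated path, parsed into its components.
path = "/myvdc/myvrack"
components = [p for p in path.split("/") if p]  # drop empty leading segment

# Option 2: each group carries a pointer to its "parent" object.
groups = {
    "myvdc":   {"parent": None},      # root-level logical group
    "myvrack": {"parent": "myvdc"},   # nested inside myvdc
}

def ancestry(name):
    """Walk parent pointers back to the root, returning root first."""
    chain = []
    while name is not None:
        chain.append(name)
        name = groups[name]["parent"]
    return list(reversed(chain))

print(components)           # ['myvdc', 'myvrack']
print(ancestry("myvrack"))  # ['myvdc', 'myvrack']
```

Both yield the same hierarchy; the path form is friendlier to URL-based addressing, while parent pointers make regrouping a single-field update.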
Agreed, I personally would like to see different views of the system. The end user should see /Domain/Federation/Cloud/containers/resources. The cloud provider should see /Cloud/Federation/Domain/containers/resources or /Cloud/Domain/Federation/containers/resources. Privacy laws and business practices may restrict one cloud provider from viewing a domain's other cloud providers and their resources. The roles are also different: the Domain would be interested in a view of its multiple cloud providers, and the cloud providers would be interested in the Domains and their utilization.
Ok have a flight to catch to London - see [some of] you there.
Safe trip... -gary
Sam
On Wed, May 20, 2009 at 5:38 AM, Gary Mazz <garymazzaferro@gmail.com <mailto:garymazzaferro@gmail.com>> wrote:
Yes, thanks. I'm looking at it and using it as a guide to the other specs.
I'll explain why I'm confused and why it looks like I'm pursuing more than the scope.
In the OCCI system model, the IaaS fits between the PaaS and the fabric. Looking at the OCCI UML, it shows the compute resource as an aggregated dependency of cluster<ag>>domain<ag>>cloud. The example defines "groups" as racks, pools, data centers, etc., real physical assets. Based on the example, I think of clusters as organizations of physical compute resources. If the intent was to keep domain and cloud as logical elements, it may be better to get rid of cluster as a class and convert it to a property defining a quality of service for the user. Some may disagree, but I don't believe the user cares whether a cluster is configured for round robin, random distribution, active-all or primary/spare failover. I'm assuming they'll care about workload capacity and service availability (and cost).
Currently, cluster<ag>>domain<ag>>cloud looks more like a fabric than logical components. Fabrics need a different set of capabilities, like events.
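The suggestion above, dropping cluster as a class and surfacing only the qualities the user cares about, might look something like this. All attribute names and values here are hypothetical, purely to illustrate the shape of the idea:

```python
# Hypothetical sketch: no Cluster class; workload capacity, service
# availability and cost become properties of the compute resource.
# Attribute names are illustrative, not from any OCCI draft.
from dataclasses import dataclass

@dataclass
class Compute:
    capacity_ghz: float    # workload capacity the user provisions
    availability: float    # service availability target, e.g. 0.999
    cost_per_hour: float   # cost, which users also care about
    # Round robin vs. random distribution vs. primary/spare failover
    # stays a provider-side detail hidden behind these properties.

vm = Compute(capacity_ghz=4.0, availability=0.999, cost_per_hour=0.12)
print(vm.availability)  # 0.999
```

The provider remains free to meet the availability figure with whatever clustering scheme it likes, which is exactly the separation being argued for.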
-gary
Alexis Richardson wrote:
Gary
Have you seen the interface comparison spreadsheet?
http://spreadsheets.google.com/ccc?key=pGccO5mv6yH8Y4wV1ZAJrbQ
This is our core focus for interop. To achieve commonality right here right now. No invention just interop.
a
On Tue, May 19, 2009 at 9:51 PM, Gary Mazz <garymazzaferro@gmail.com <mailto:garymazzaferro@gmail.com>> wrote:
Well, since this is an interoperability interface, I'm assuming there will be gateways to other technologies like fabrics. Events, event delivery and event management are important patterns and are supported by others. I don't believe we'll be able to get away without supporting them for very long. One of the big drawbacks of SNMP and CIMOMs is the lack of event support and of an infrastructure to support event message persistence.
I'm also not sure where we are drawing the line in terms of interoperability. There was a general consensus that OCCI should be focusing on integration points in the cloud, but I didn't see a clear definition of an integration point. (I was out of the loop for a while.) In the OCCI model the platform can be considered a container (loosely, a VM) with infrastructure resources provisioned. The container life cycle and resource provisioning are "management" integration points, although there are no verbs published yet. Will portions of the OCCI interface be permitted to permeate the container boundary? The level of interaction, if any, between OCCI and the container contents is still unclear. Maybe I missed the definition.
-gary
Alexis Richardson wrote:
Indeed and XMPP and HTTP should not be overlooked either.
On Tue, May 19, 2009 at 7:49 PM, Sam Johnston <samj@samj.net <mailto:samj@samj.net>> wrote:
On Tue, May 19, 2009 at 7:13 PM, Alexis Richardson <alexis.richardson@gmail.com <mailto:alexis.richardson@gmail.com>> wrote:
Interesting point.
Speaking as someone who is professionally involved in messaging and events, my STRONG advice would be to completely leave them for now. Implementation of the planned draft will naturally bring up use cases suited to the various eventing technologies and protocols, none of which are fully baked, by the way. This will be good fodder for future work but currently is **** not in scope ****.
Agreed, and I don't know AMQP well enough to say how it could fit here.
The use case we need to take away from it is that OCCI messages aren't necessarily going to be ephemeral - they may well be long lived, queued, serialised, saved to file, etc.
Sam
_______________________________________________
occi-wg mailing list
occi-wg@ogf.org <mailto:occi-wg@ogf.org>
http://www.ogf.org/mailman/listinfo/occi-wg