
Gary,

On Mon, Oct 26, 2009 at 5:27 AM, Gary Mazz <garymazzaferro@gmail.com> wrote:
I believe we had discussed this issue some months ago (CCIF?) and reached agreement that none of us wanted to be in the business of formulating cloud benchmarks. :-)
I think cloud benchmarks are relatively safe provided they are not considered "universal" - there are myriad tests for PC performance today and each represents a different workload. Microsoft have got about as close as you will ever get with their Windows 7 performance indexes, but even those are specific to the task of running an interactive windowing interface.
Whether something like TPC-C is effective, or any of the Apache server workload stuff is applicable, I really couldn't say. I like the approach UNSW took; it looks like a good starting point. I don't have time to contact UNSW this week, but it may be worthwhile to approach them.
I think the best approach is to have trusted third parties (like Anna's team) conducting batteries of tests with the approval (but probably not direct knowledge of which accounts/when) of the cloud providers. It's certainly not realistic to have every tire kicker running their own suite of tests, and indeed to do so without prior notice could reasonably be prohibited by terms of service. Each service would then have a set of figures and users could use those most appropriate for their workload.

Agreed on keeping the clones to identical characteristics; I'm not sure how feasible that is today, but it's a good, practical way to initially define it.
I'd be satisfied with a "should" requirement level for clones being identical - it's almost always going to be better that a request be satisfied with a mix of hardware than not at all.

Sam
Sam Johnston wrote:
Gary,
I think you've touched on an interesting point there which ties into the "need" for a universal compute unit. More specifically, "cores" aren't a standard unit of measurement (at least not without architecture and speed), and in any cloud that's not brand new you're going to end up with a mix of core speeds depending on what presented the best value at build/replacement/expansion/failure time.
If you have a mix of core speeds at a given tier without sufficiently intelligent load balancing (e.g. response time based) then you'll end up with some cores being underutilised and/or finishing jobs faster, and others being unable to keep up. If you're applying the buffalo theory (e.g. round robin) then you're only as fast as your slowest machines.
The simple fix is to ensure that "clones" or "shadows" of a given compute resource are all identical, but it's worth keeping in mind nonetheless.
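To put rough numbers on it, here's a toy Python sketch (purely illustrative - the clone names, speeds and request counts are made up, and this has nothing to do with the OCCI spec itself) of round-robin versus least-loaded dispatch across clones with mixed core speeds:

    import itertools

    # Relative core speeds of three "clones" behind one tier (hypothetical mix).
    clones = {"clone-a": 1.0, "clone-b": 1.0, "clone-c": 0.5}

    def makespan(jobs, pick):
        """Time until the slowest clone finishes its share of the work."""
        busy = {name: 0.0 for name in clones}
        for cost in jobs:
            name = pick(busy)
            busy[name] += cost / clones[name]   # a half-speed core takes twice as long
        return max(busy.values())

    jobs = [1.0] * 300                          # identical unit-cost requests

    rr = itertools.cycle(clones)                # "buffalo theory" dispatch
    print("round robin :", makespan(jobs, lambda busy: next(rr)))
    print("least loaded:", makespan(jobs, lambda busy: min(busy, key=busy.get)))

With 300 identical requests over two full-speed cores and one half-speed core, round robin finishes when the slow clone does (about 200 time units), while least-loaded dispatch evens out at about 120.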
Sam
On Mon, Oct 26, 2009 at 4:22 AM, Gary Mazz <garymazzaferro@gmail.com> wrote:
Defining 'horizontal' and 'vertical' dials is a good idea.
@Andy, I'm a little confused by the definition of horizontal scalability. Aren't the CPUs in a single operating image a vertical workload capacity, much like the amount of RAM? If the number of images is scaled, that would be horizontal, because there is no necessity for the images to run the same workload set.
I would prefer to see the dials tied to a standard "meter of work" - an efficiency metric instead of an "equivalence" of CPU count, GHz and RAM amount. Juggling these dials may not be as effectual as the consumer perceives when a provider decides to throttle back performance and starts dropping workload requests. Without a referenced "effective workload" metric, it may be tough to ascertain whether the dials affect anything other than the charge to the customer.
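Something like the following toy Python calculation is all I mean by a "meter of work" - not a proposal, and the reference workload, figures and names are entirely hypothetical:

    # Hypothetical "meter of work" sketch: measure delivered work rather than
    # advertised capacity, so quiet throttling shows up on identical dials.

    def efficiency(completed_requests, wall_seconds, advertised_cores, advertised_ghz):
        """Work actually delivered per advertised GHz-core per second."""
        delivered_per_second = completed_requests / wall_seconds
        return delivered_per_second / (advertised_cores * advertised_ghz)

    # Two hypothetical offerings advertising identical dials (4 cores @ 2.0 GHz)
    # over a one-hour run of the same reference workload:
    print(efficiency(completed_requests=90_000, wall_seconds=3600,
                     advertised_cores=4, advertised_ghz=2.0))   # ~3.1 requests/s per GHz-core
    print(efficiency(completed_requests=45_000, wall_seconds=3600,
                     advertised_cores=4, advertised_ghz=2.0))   # ~1.6 - throttled back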
gary
Randy Bias wrote:
On Oct 25, 2009, at 5:38 PM, Sam Johnston wrote:
A better approach to scalability is to have a single object which you can both adjust the resources of (vertical scalability) and adjust the number of instances of (horizontal scalability). That is, you start a single instance with 1 core and 1GB, then while it's running you crank it up to 2 cores and 2GB. Eventually you max out at say 8 cores and 16GB, so you need to go horizontal at some point. Rather than create new unlinked instances the idea is that you would simply adjust the
I agree. This is the future. Dials for 'horizontal' and for 'vertical', probably attached to a given tier of an application.
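Something like this toy Python sketch is what I picture for the two dials on a tier - to be clear, it is not OCCI syntax or any provider's real API, just made-up names and ceilings:

    from dataclasses import dataclass, field
    from typing import List

    MAX_CORES, MAX_GB = 8, 16     # hypothetical per-instance ceiling

    @dataclass
    class Instance:
        cores: int = 1
        memory_gb: int = 1

    @dataclass
    class Tier:
        instances: List[Instance] = field(default_factory=lambda: [Instance()])

        def scale_up(self):
            """Vertical dial: grow every clone in place, up to the per-instance ceiling."""
            for i in self.instances:
                i.cores = min(i.cores * 2, MAX_CORES)
                i.memory_gb = min(i.memory_gb * 2, MAX_GB)

        def scale_out(self):
            """Horizontal dial: add another clone identical to the existing ones."""
            template = self.instances[0]
            self.instances.append(Instance(template.cores, template.memory_gb))

    web = Tier()
    while web.instances[0].cores < MAX_CORES:
        web.scale_up()            # vertical: 1 core/1 GB -> 2/2 -> 4/4 -> 8/8
    web.scale_out()               # per-instance ceiling reached, turn the other dial
    print(web)

Note that scale_out() copies the existing clone, so the clones stay identical - which also ties in to Sam's point about mixed core speeds.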
Just as an FYI, I think 'scale-up' VMs are going to be more and more common. We'll see VMs with a *lot* more RAM and cores very soon now. Most of the modern OSes handle hotplug of CPU/RAM pretty well.
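For what it's worth, the guest-side half of that is already pretty mundane on Linux. A small sketch, assuming the standard sysfs hotplug interface (stock kernel paths; most distros handle this automatically via udev rules rather than by hand, and it needs root):

    import glob

    # Bring any hot-added vCPUs online from inside the guest.
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/online"):
        with open(path, "r+") as f:
            if f.read().strip() == "0":   # hot-added vCPU still offline
                f.seek(0)
                f.write("1")              # bring it online
                print("onlined", path)
    # Hot-added memory blocks appear similarly under /sys/devices/system/memory/.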
Best,
--Randy
Randy Bias, Founder & Cloud Strategist, Cloudscaling
+1 (415) 939-8507 [m], randyb@cloudscaling.com
------------------------------------------------------------------------
_______________________________________________
occi-wg mailing list
occi-wg@ogf.org