The CVL on R@CMon Phase 2

Monash is home to the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE), a national facility for the imaging and characterisation community. An important and rather novel feature of the MASSIVE compute cluster is the interactive desktop visualisation environment available to assist users in the characterisation process. The MASSIVE desktop environment provided part of the inspiration for the Characterisation Virtual Laboratory (CVL), a NeCTAR Virtual Laboratory project combining specialist visualisation and rendering tools from a variety of disciplines and making them available on and through the NeCTAR Research Cloud.

The recently released monash-02 zone of the NeCTAR cloud provides enhanced capability to the CVL, bringing a critical mass of GPU-accelerated cloud instances. monash-02 includes ten GPU-capable hypervisors, currently able to provide up to thirty GPU-accelerated instances via direct PCI passthrough. Most of these are NVIDIA GRID K2 GPUs (CUDA compute capability 3.0), though we also have one K1. Special thanks to NVIDIA for providing us with a couple of seed units to get this going and supplement our capacity! After consultation with various users, we created the following set of flavors/instance-types for these GPUs:

Flavor name | #vcores | RAM (MB) | /dev/vda (GB) | /dev/vdb (GB)
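For illustration, a flavor of this kind can be defined programmatically. The sketch below uses python-novaclient and assumes (hypothetically) that a PCI alias named "K2" has been configured on the GPU hypervisors via nova's pci_alias/pci_passthrough_whitelist settings; the flavor name, sizes and credentials are invented for the example, not our actual flavor set:

```python
# Hypothetical sketch: a GPU passthrough flavor via python-novaclient.
# Assumes a PCI alias "K2" is configured in nova.conf on the GPU
# hypervisors; names, sizes and credentials here are illustrative only.
from novaclient import client

nova = client.Client("2", "admin", "secret", "admin-project",
                     "http://keystone.example.org:5000/v2.0")

# name, RAM (MB), vCPUs, root disk /dev/vda (GB), ephemeral /dev/vdb (GB)
flavor = nova.flavors.create(name="mg1.small", ram=8192, vcpus=4,
                             disk=30, ephemeral=40)

# Request one GRID K2 GPU per instance via direct PCI passthrough.
flavor.set_keys({"pci_passthrough:alias": "K2:1"})
```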

R@CMon has so far dedicated two of these GPU nodes to the CVL, and this is our preferred method for use of this equipment, as the CVL provides a managed environment and queuing system for access (regular plain IaaS usage is available where needed). There were some initial hiccups getting the CVL’s base CentOS 6.6 image working with NVIDIA drivers on these nodes, solved by moving to a newer kernel, and some performance tuning tasks still remain. However, the CVL has now been updated to make use of the new GPU flavors on monash-02, as demonstrated in the following video…

GPU-accelerated Chimera application running on the CVL, showing the structure of human follicle-stimulating hormone (FSH) and its receptor.

If you’re interested in using GPGPUs on the cloud please contact the R@CMon team or Monash eResearch Centre.

MCC-on-R@CMon Phase 2 – HPC on the cloud

Almost a year ago, the Monash HPC team embarked on a journey to extend the Monash Campus Cluster (MCC), the university’s internal heterogeneous HPC workhorse, onto R@CMon and the wider NeCTAR Australian Research Cloud. This is an ongoing collaborative effort between the R@CMon architects and tech-crew, and the MCC team, which has long-standing and strong engagements with the Monash research community. Recently, this journey has been further enriched by the close coordination with the MASSIVE team, which will enhance the sharing of technical artefacts and learnings between the two teams.

By September 2014, MCC-on-the-Cloud had grown to over 600 cores spanning three nodes of the Australian Research Cloud. Its size was limited only because the Research Cloud was full, awaiting a wave of new infrastructure to be put in place. Nevertheless, Monash researchers from Engineering, Science and FIT have collectively used over 850,000 CPU-core hours. Preferring the “MCC service”, they have offered their NeCTAR allocations to be managed by the MCC team, rather than building clusters and installing software stacks themselves. From the researchers’ perspective, this has the twofold benefit of providing a user experience consistent with the dedicated MCC, and freeing them from the burden of managing cloud instances, software deployment, queue management, and so on.

Deploying a usable high-performance/high-throughput computing (HPC/HTC) service on the cloud poses many challenges. Users expect the robustness and guaranteed service availability typical of traditional clusters. All this must be achieved despite the fluidity and heterogeneity of the cloud infrastructure and nuances in service offerings across the Research Cloud nodes. For example, one user reported that jobs were cancelled by the scheduler because they exceeded the specified wall-time limits; we subsequently discovered that some MCC “cloud” compute nodes were running on oversubscribed hosts (contrary to NeCTAR architecture guidelines). Nevertheless, we can declare that our efforts have paid off – MCC-on-the-Cloud is now operating and delivering the reliable HPC/HTC service, wrapped in the classic MCC look-and-feel, that Monash researchers have come to depend on. Despite the many challenges, we are convinced that this is a good way to drive the federation forward.

Now, with R@CMon Phase 2 coming online, we have taken a step closer towards realising this aim of “high-performance” computing on the cloud. Equipped with Intel Ivy Bridge Xeon processors, the R@CMon Phase 2 hardware stands out amidst the commodity hardware of most other NeCTAR nodes. These specialist servers are already proving invaluable for floating-point intensive MPI applications. In production runs of a three-dimensional Spectral-Element method code, we observed nearly double the performance on these Xeons compared to the AMD Opteron nodes across most of the rest of the cloud, even with hyper-threading enabled. By pinning the guest vCPUs to a range of hyper-threaded cores on the host, we achieved a further 50% performance improvement; this is effectively over a 2.6x improvement on the “commodity” AMD nodes. We look forward to implementing this vCPU pinning feature once it is natively supported in OpenStack Juno, the RC’s next version.
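For those curious about the pinning experiment, the effect can be reproduced by hand with the libvirt Python bindings. This is a minimal sketch, assuming a guest named "mcc-worker-01" and an illustrative choice of host CPUs; real hyper-thread sibling pairs should be read from the host CPU topology:

```python
# Minimal sketch: pin guest vCPUs to chosen host CPUs via libvirt.
# The domain name and host CPU numbers are assumptions for illustration;
# actual hyper-thread sibling maps come from the host CPU topology.
import libvirt

HOST_CPUS = 32            # total host logical CPUs (cores x threads)
PIN_TO = [8, 9, 10, 11]   # host CPUs chosen for this guest's vCPUs

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("mcc-worker-01")

for vcpu, host_cpu in enumerate(PIN_TO):
    # cpumap is a tuple of booleans, one entry per host CPU
    cpumap = tuple(i == host_cpu for i in range(HOST_CPUS))
    dom.pinVcpu(vcpu, cpumap)

conn.close()
```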

Measured performance improvement with a production 3D Spectral Element code
R@CMon Phase 1: AMD Opteron 6276 @ 2.3 GHz
Phase 2: Intel Xeon E5-4620v2 @ 2.6 GHz

Thus, our journey continues… Once RDMA (Remote Direct Memory Access) is enabled on Phase 2, accelerated networking will make it feasible to run large-scale, multi-host MPI workloads. Achieving this will take us even closer to a truly high-performance computing environment on the cloud. Look out for MCC science stories and infrastructure updates soon!

R@CMon Phase 2 is here!

Back in 2012, our submission to NeCTAR planned R@CMon as being delivered in two phases: first a commodity phase, letting the ideals of en masse computing dominate technical choices. We have been operating phase 1 since May 2013. Our new specialist second phase went live in October! R@CMon phase 2 (the R@CMon RDC cell) scales out high-performing and accelerating hardware as driven by the demands of the precinct. Often ‘big data’ is just not possible without ‘big memory’ to hold the problem space, rather than going to disk (around 100x slower). Often ‘more memory’ is the barrier, not ‘more cores’. Often the requirement is ‘I need to interact with a 3D model’. And so on. R@CMon is now truly a scalable and critical mass of self-service, on-demand computing infrastructure. It is also the play-pit where research leaders can build their own 21st-century microscopes.

One of the four racks of NeCTAR monash-02. From top to bottom: Mellanox 56G switches, management switch, R820 compute nodes, R720 Ceph storage nodes

In addition to phase 1, phase 2 has –

  • 2064 new Intel virtual cores
  • 3 nodes with 1TB of RAM
  • 10 nodes with GPUs for 3D desktops
  • 3 nodes (the large memory ones) with high-performance PCIe SSD
  • All standard compute nodes mix SAS & SSD for low-latency local ephemeral storage
  • All nodes with RDMA (Remote Direct Memory Access – the stuff that makes fast, large-scale, multi-node HPC jobs possible) capable networking

As with phase 1, the entire infrastructure is orchestrated through OpenStack and presented on the Australian Research Cloud. R@CMon is once again pioneering research cloud infrastructure, virtualising all these specialist resources.

Over the next week we’ll blog with emerging examples of GPUs, SSDs and 1TB memory machines…

One of the specialist nodes – a quad-socket R820 with 1TB RAM and high-performance PCIe-attached flash

MyTardis + Salt on R@CMon

The Australian Synchrotron generates terabytes of data daily from a range of scientific instruments. In the past, this crucially important data often ended up on old disk drives, unlabelled and offline, making sharing and referencing of the data very difficult if not impossible. MyTardis was developed as a web application geared towards receiving data from scientific instruments, such as those at the Australian Synchrotron, in an automated fashion. It helps researchers organise, archive for the long term, and share data with collaborators. MyTardis also allows researchers to cite their data, as has been done for publications in high-impact journals such as Science and Nature. Refer to this previous post highlighting the Australian Synchrotron’s new data service using MyTardis, R@CMon and VicNode.

A data publication for Astronomy

Deploying MyTardis for development use or on a multi-node cloud setup is made easier with the use of automated configuration management tools. MyTardis is now using Salt for its configuration management, which is gaining popularity for its accessibility and simplicity. More information about deploying MyTardis on various platforms and how it uses Salt is available in this story. The following diagram provides an overview of MyTardis deployment architecture on the NeCTAR Research Cloud.

MyTardis Deployment Architecture

Since its creation, development and deployment of MyTardis have expanded to other universities, scientific facilities and institutions to fulfil the data management requirements across research areas such as microscopy, microanalysis, particle physics, next-generation sequencing and medical imaging. An up-to-date list of scientific instruments integrated with MyTardis is included in this story on the MyTardis site.

Ceph Enterprise – a disruptive period in the storage marketplace

Recently R@CMon signed an agreement for Inktank Ceph Enterprise (aka ICE). Inktank is an open-source development and professional services company that spun out of DreamHost two years ago. And boy, have they been busy since then – not only building an international open-source community around Ceph, but also solidifying the product through several major releases while exploring enterprise support business models. It’s both the innovation rate and the professional maturity that drew us to Ceph. The next release of ICE (due this month) includes support for Erasure Coding (think distributed RAID) and cache-tiering (think SSD performance at near spinning-disk cost)!

The R@CMon – Ceph journey began in September last year, when we introduced block storage based on Ceph to the NeCTAR Research Cloud. Others have been paying attention too, and several other NeCTAR Nodes (NCI, TPAC and QCIF so far) are now using Ceph to meet their block storage needs. Monash now has just shy of 400TB raw capacity colocated with the monash-01 compute zone/cell, and another 500TB coming with the monash-02 deployment. With other developments in the pipeline, we expect to have well over 1PB of storage managed by Ceph by the end of the year!

One of the things that came with the release of ICE was a closed-source management tool, Calamari, which provides the sort of graphical status, monitoring and configuration dashboard you might expect from “enterprise” software (how that term makes us cringe!). Calamari is starting to look pretty slick and is certainly one piece of the puzzle in transitioning a storage solution adopted and built in the eResearch Healthy Hot-House environment into a robust service delivery practice – getting woken up at 2am if a storage node dies is not in my contract!

Calamari OSD Workbench

Not long after we’d signed up for ICE, Inktank was acquired by Red Hat for a cool $175mil. This announcement came as a surprise – Red Hat already has an iron in the software-defined storage fire with Red Hat Storage Server (GlusterFS wearing a fedora). But once digested, this appears to be a very astute move by Red Hat, as Ceph has been the sweetheart of storage for OpenStack deployers, thanks largely to its ability to converge object/block/filesystem, something that took the NAS-oriented Gluster a while to catch up on. It seems Red Hat now has a firm grip on software-defined storage!

Red Hat’s acquisition of Inktank is good news for us as it will ultimately mean a supported version of Ceph on an enterprise Linux distribution we have a broad skills base in, with local technical support personnel to back it all up, and in a much shorter timeframe than Inktank could have delivered on its own. And true to their upstream-first philosophy, Red Hat has already open-sourced Calamari.

It’s an exciting and disruptive period in the storage market!

Cloud to your taste?!

Executive Summary

Now, after 500 allocated projects (excluding trials) over two years of the RC, we propose an evolution of the NeCTAR Research Cloud (herein “RC”) flavor offerings, to:

  • introduce new RC-wide classes of compute and memory optimised flavors tailored more specifically to particular workloads;
  • have standard/base flavors more aligned with the commercial cloud alternatives;
  • give the RC Nodes greater flexibility in establishing utilisation-improvement strategies.

We believe this approach will provide immediate utilisation gains without memory overcommitment, whilst still providing scope for Nodes to pursue further overcommitment/consolidation as may be appropriate for their workloads and architecture. The remainder of this document details the background and proposes an RC-wide implementation strategy and some example Node deployment scenarios. Skip to Core recommendations below for the proposal.


Background

As the RC continued to gain popularity and new users over the course of 2013, we got our first taste of what happens once cloud Nodes (capital ‘N’ indicating a zone) get full. The operational Nodes have also had a chance to take stock of the new paradigm and look for opportunities to improve ROI and outcomes for their users. We are now at a juncture where it is appropriate to revisit and refine the basic RC menu – the virtual machine hardware footprints (commonly known as flavors or instance-types) which make up the primary resource consumption patterns of the RC.

With this capacity pressure, Nodes have discussed and in some cases implemented resource overcommitment (i.e., allocating more virtual machine memory and/or CPU cores than the physical host actually has). This is possible thanks to various kernel and hypervisor technologies, and valuable because instances generally do not use all their assigned memory or CPU resources at any one time. Whilst a notional level of overcommitment is likely to be safe, and even seems to be quite common in private cloud deployments, the technique is a blunt tool with many caveats, including: non-standardised adoption in the RC; general unpopularity amongst the end-user community; and, perhaps most significantly, an element of inherent risk due to the fact that unmitigated memory pressure can result in OOM’d instances or crashed systems. A complementary strategy, requiring less overcommitment to achieve the same utilisation, is to fit flavors to workloads. This will (hopefully!) result in greater workload consolidation and empower users to select the best virtual hardware to match their application/s.

The general idea of this proposal is to further develop the RC flavors to provide variety of configuration and QoS. We assume CPU overcommitment for general-purpose/standard flavors; this is standard practice in virtualised environments, and experience to date shows ample headroom for doing so in the RC context. Importantly, we propose to use recent middleware features (cgroups CPU share quotas) to limit CPU contention and guarantee relative CPU QoS for different flavors, whilst also specifying a class of compute flavors for which no overcommitment is recommended.
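As a rough sketch of how those CPU share quotas attach to flavors (flavor names, sizes and credentials below are illustrative; the quota:cpu_shares extra spec is part of nova's instance resource quota support referenced later in this proposal):

```python
# Illustrative sketch: cgroups CPU shares as flavor extra specs.
# The 2048-shares-per-full-core calibration follows this proposal;
# flavor names, sizes and credentials are invented for the example.
from novaclient import client

nova = client.Client("2", "admin", "secret", "admin-project",
                     "http://keystone.example.org:5000/v2.0")

# Compute-class flavor: shares calibrated to ~1 vCPU per core/thread.
c2 = nova.flavors.create(name="c2.medium", ram=5632, vcpus=2, disk=30)
c2.set_keys({"quota:cpu_shares": str(2048 * 2)})

# A standard flavor on overcommitted hosts gets a fractional share;
# shares only bite under contention, so idle cycles remain available.
m2 = nova.flavors.create(name="m2.small", ram=4096, vcpus=1, disk=30)
m2.set_keys({"quota:cpu_shares": "1024"})
```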


Motivation

  • High demand and slow capacity ramp-up.
  • Improve efficiency.
  • Current flavors do not offer any variety of CPU-to-RAM ratio and so don’t cater specifically to the many and varied workloads of the RC; this translates to missed opportunities for consolidation and potential for higher utilisation.
  • Current flavors do not align with commercial offerings – other on-shore IaaS providers (AWS, Microsoft Azure, Rackspace) have already done the work of determining good typical offerings. We should leverage this and provide standard flavors more comparable to the rest of the market whilst also noting that particular research computing requirements can be met with other flavor classes.
  • Operational experience with the existing flavors shows significant opportunity for improvement of hardware utilisation (typically loads of free CPU and plenty of memory even when a host is “full” in terms of vCPUs allocated to it).
  • Provide wider scope for HTC workloads to “use up” idle CPU capacity.
  • The automatically allocated ‘pt-*’ projects have been a very popular and extremely useful tool in on-boarding users; however, as a no-barrier free offering they are very generous, and there is no incentive for users to release the resources before the project ends. The ultimate goal of providing a simple method to test-drive the RC can be achieved with a lighter footprint whilst giving the user more flexibility.
  • Since the RC’s inception, new features in successive releases of OpenStack have added capabilities which can help us achieve and manage a diversified offering.

What to do?


We propose revamping the RC flavors/instance-types based on:

  • The typical nova-compute hardware footprint (1 “core” / 4 GB RAM) and the range of hardware already in the wild and on order, from 2P x 12-“core” AMD to 4P x 16-“core” Intel. Related to this, see the box-packing problem in Other considerations below.
  • The standard offerings of commercial providers operating on-shore (competitors / partners / burst-locations).
  • A desire to cater more directly to a variety of use-cases with differing compute/memory requirements, e.g., web servers/services, data services (presentation, transformation, capture), remote desktops (e.g., training or SaaS), data-mining, high-throughput compute, loosely coupled high-performance compute.
  • A desire to improve hardware utilisation allowing a greater volume and variety of workloads to co-exist on the RC. And the assumption that flavor variety in and of itself will translate to improved utilisation without the temptation for aggressive resource overcommitment.
  • The assumption that reliability and consistency are intricately linked, and both should trump overall hardware utilisation.
  • Given limited resource quotas, users will tend to fit their workloads into the best perceived flavor match, with the incentive that it leaves quota available for them to start new instances.
  • Nodes must be able to flexibly define their own specific private flavors.
  • Any attempt to standardise CPU and/or IO performance for the various flavors across Nodes is considered out-of-scope due to different hardware and storage architectures already deployed.
  • Nodes must have flexibility in how they choose to implement these recommendations based on the specific needs of their local user communities and their operational ability/agility.

Core recommendations

  1. Creation of three new RC-wide flavor classes: standard (“m2”, following on from “m1”), compute (“c2”), memory (“r2”, r for RAM). The compute and memory flavors are referred to as optimised instances. Class is defined purely as the naming prefix from the user’s perspective (whether they differ with respect to overcommit or resource share in the back-end is the discretion of the Node).
    • The compute class is used to calibrate CPU share values (as per https://wiki.openstack.org/wiki/InstanceResourceQuota) in order to maintain an approximate 1-to-1 vCPU to core/thread/module ratio for compute workloads. Flavors with CPU shares less than 2048 represent a fractional share, noting that share values only come into effect with CPU contention, so such flavors can still get all clock cycles when they are available.
    • compute and memory flavors are offset +/-33% from the current 1 core / 4GB RAM.
  2. Adjust standard root ephemeral (vda) size to be more accommodating of larger snapshots and OSes like Windoze. 30GB vda as standard with secondary ephemeral (vdb) adjusted down or removed completely to accommodate.
  3. Proposed new flavors:

    Flavors 2.0

    Class | Name | #vCores | RAM size (GB) | primary/vda disk size (GB) | secondary/vdb (min) disk size (GB) | cpu.shares
  4. Standard flavors are available to all, whilst compute and memory flavors will be made available to projects that justify a technical requirement for them. Access to these optimised flavor classes would then be granted in the normal project creation/amendment workflow.
  5. Refinements to the allocation review process that involve technical appraisal of a project’s need for optimised flavors. This also provides scope for the target Node (if any) to evaluate the request relative to availability of optimised resources.
  6. pt’s (project trial accounts) have a 4GB RAM & 4 vCPU quota limit, limiting them to the m2.tiny, m2.xsmall, m2.small, m2.medium and m1.small flavors; they can, however, now run up to four m2.tiny instances.
  7. Nodes may choose to overcommit CPU and/or memory resources underpinning these proposed flavor classes, however we do not recommend overcommitment of optimised flavors until operational experience is gained with the new flavor classes. It should be noted that we expect significant utilisation improvements with the new standard flavors even without overcommitment. Additionally, Nodes will need to carefully consider their public IPv4 address capacity.
  8. Nodes use nova host-aggregates for flavor classes and each individual flavor, allowing class-based and fine-grained per flavor control of instance-types that can be scheduled to any particular nova-compute hypervisor/node.
  9. Nodes overcommitting the standard flavor class use nova per-aggregate core and memory allocation ratio settings (see the relevant blueprint and filter scheduler docs for details). This will allow, e.g., nova-compute nodes in the same Cell to be dedicated to overcommitted standard flavors whilst other nova-compute nodes service optimised flavors. By using host-aggregates in this manner, the distribution of compute nodes can be changed dynamically; this is not possible with per-Cell allocation ratios (unless the node is “drained”, removed from production, and then reconfigured). A sketch of this aggregate-based approach follows this list.
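To make recommendations 8 and 9 concrete, here is a hedged sketch of the aggregate-based approach. Aggregate names, host names, ratio values and the "flavor_class" metadata key are hypothetical; the cpu/ram_allocation_ratio keys are those consumed by the AggregateCoreFilter and AggregateRamFilter scheduler filters:

```python
# Sketch: host-aggregates per flavor class, with per-aggregate
# allocation ratios. Aggregate/host names and ratio values are
# hypothetical; steering flavors to aggregates would additionally
# use flavor extra specs with the AggregateInstanceExtraSpecsFilter.
from novaclient import client

nova = client.Client("2", "admin", "secret", "admin-project",
                     "http://keystone.example.org:5000/v2.0")

# Overcommitted hosts dedicated to the standard (m2) class...
standard = nova.aggregates.create("standard-flavors", None)
nova.aggregates.set_metadata(standard.id, {
    "flavor_class": "m2",             # matched by flavor extra specs
    "cpu_allocation_ratio": "4.0",
    "ram_allocation_ratio": "1.5",
})
nova.aggregates.add_host(standard.id, "compute-01")

# ...and non-overcommitted hosts for the optimised (c2/r2) classes.
optimised = nova.aggregates.create("optimised-flavors", None)
nova.aggregates.set_metadata(optimised.id, {
    "flavor_class": "c2 r2",
    "cpu_allocation_ratio": "1.0",
    "ram_allocation_ratio": "1.0",
})
nova.aggregates.add_host(optimised.id, "compute-02")
```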

Other considerations/notes

  • The box-packing problem – unbalanced allocation of CPU and memory resources on a hypervisor, i.e., all CPUs allocated but memory available, and vice versa (this was one of the primary drivers for a fixed ratio of resources in the “m1” flavors).
    • We assume higher CPU utilisation is the priority in terms of consolidation as memory is cheaper and there is always available HTC workload that can utilise spare CPU.
    • Because we assume CPU overcommitment for the standard flavors and have introduced additional lower memory footprint flavors, this problem is diminished somewhat in the standard flavors.
    • Pathological scheduling with the optimised flavors could result in a hypervisor being entirely filled by “r2” instances, leaving 33% of CPU cores spare; however, the average case (i.e., assuming an even distribution of compute and memory instances) would see no change.
  • Specialised capability flavors (e.g., GPU, very high-memory, high-IOPS) should be standardised where possible. A potential example is GPU flavors: there are only a couple of GPU types likely to be deployed across the RC, and these options could be combined with the memory and compute optimised flavor configurations to produce new ‘mg2’ and ‘cg2’ flavor classes.
  • Public IPv4 address capacity. One of the obvious issues when considering fitting more instances into the existing hardware footprint is whether we have enough IPv4 addresses to match our hardware capacity, and this is one reason resource overcommitment should be very carefully managed. With the existing flavor menu the average instance is 2.5 cores, so that’s 4 public IPv4 addresses needed for every 10 cores; there’s good reason to believe this gap will shrink if these recommendations are implemented. (At Monash we’re making provisions to “borrow” another /21 until we can deploy SDN.)
  • The box-“filling” problem, particularly when no spaces remain for large-memory VMs. A possible solution is to implement a new scheduler filter that hard-limits the number of instances of any particular flavor running on a compute node, so as to guarantee a minimum capacity of certain flavors.
  • Instigate a process whereby the utilisation of compute and memory instances is monitored on a per-project basis; projects determined not to be making effective use of the optimised capabilities may lose access to those flavor classes.

Node Deployment Examples and Trade Offs

These are illustrative scenarios designed to show the possibilities afforded by adoption of these recommendations, many of these scenarios could be mixed and matched within a Node’s deployment in order to achieve the Node’s goals.

  1. All flavors can run on any compute node. (At a certain aggregate usage level, no compute node would be able to launch new larger-memory instances, despite there being plenty of aggregate free memory capacity. This can be somewhat mitigated by a fill-first scheduling approach.)
  2. Standard flavors separate from optimised (compute and memory) flavors, whilst the latter two coexist on the same hardware. Standard flavors overcommitted, optimised flavors not overcommitted.
  3. As for #2, standard flavors separate from optimised (compute and memory) flavors. Standard flavors on AMD based compute nodes, compute and memory optimised flavors on Intel based compute nodes.
  4. Separate compute nodes for, e.g., all flavors with >30GB RAM (independent of flavor class), whilst remaining compute nodes can run any flavor. This provides a means to guarantee a certain capacity of high-memory instances.
  5. Mix of #3 and #4.


Comments

None yet – will update with notable comments.

Best Practices for Overcommit

The following is intended to seed a set of living suggestions/guidelines to help Nodes implement resource overcommitment in the safest possible manner, so as not to compromise the stability of user services (it must be remembered that positive sentiment is hard to win and easy to lose; people are much more likely to share poor experiences widely than they are to rave about what great infrastructure we’re providing them with).

  • If you have local direct-attached storage for your nova-compute ephemeral disks, then use the io_ops scheduler filter to ensure that instance-spawning does not overwhelm your host’s disks. Also set upper disk IOPS and throughput limits on the various flavor classes so that noisy-neighbour effects are limited (see the sketch after this list).
  • Make sure you have enough swap space (plus a generous amount extra for migrations) on your nova-compute nodes to hold overflow memory pages of instances if they all suddenly started trying to use their full RAM footprint (we really don’t want to see the OOM Killer).
  • Make sure your swap device/file is in some way fenced from the rest of your disk IO concerns so that hypervisor-level swapping does not contend with instance IO, e.g., separate swap device or file on the root block devices.
  • Use cgroups to reserve memory for the host.
  • Monitor and test relevant new Kernel features such as zswap.
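The disk limits mentioned in the first bullet can be expressed as flavor extra specs. A small sketch follows; the numbers are illustrative, and the quota:* keys are nova's instance resource quota disk-tuning options:

```python
# Sketch: per-flavor disk throughput/IOPS caps so overcommitted hosts
# are not overwhelmed by noisy neighbours. Values are illustrative;
# the flavor name is hypothetical.
from novaclient import client

nova = client.Client("2", "admin", "secret", "admin-project",
                     "http://keystone.example.org:5000/v2.0")

flavor = nova.flavors.find(name="m2.small")   # hypothetical flavor
flavor.set_keys({
    "quota:disk_read_bytes_sec":  str(100 * 1024 * 1024),  # 100 MB/s
    "quota:disk_write_bytes_sec": str(100 * 1024 * 1024),
    "quota:disk_read_iops_sec":   "400",
    "quota:disk_write_iops_sec":  "400",
})
```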

ATAQ (Answers To Anticipated Questions)

Q: Why not mix overcommitted and non-overcommitted instances on the same compute node, e.g., have host-aggregates with different ram_allocation_ratios assigned to a single compute node?

A: Whilst we can assign multiple aggregates, the Aggregate*Filter scheduler filters will pick and use the smallest *_allocation_ratio from all relevant aggregates the node is a member of. There’s a good reason for this – it just doesn’t make sense to do anything else. Because the *_allocation_ratio settings are a soft accounting metric, i.e., they only affect the scheduling decision of whether a node can support a new instance, they have no effect on the absolute or relative share of resources each instance has on any particular node. E.g., if the scheduler applied different ram_allocation_ratios on a single node based on the flavor of instances launched, the effective ram_allocation_ratio would end up as something of a weighted average, and all instances on the node would be equally affected by RAM contention.

Q: What’s with the names / why don’t we copy AWS names?

A: There appears to be no good naming convention to mimic (probably due to English lacking specific and unambiguous words to describe size in various dimensions). AWS have made a hash of their own convention as their flavor offerings have proliferated; it’s now virtually impossible to even guess what kind of instance you’re looking at, e.g., cr1.8xlarge is apparently a memory-optimised instance – go figure! Rackspace simply use the memory size as the name, which is not a bad idea but breaks if you want different vCPU counts with the same memory size. Azure uses A1 – A7, which is possibly the single most intuitive aspect of Azure since it was released (clearly marketing wasn’t involved with that ;-). We’ve made suggestions which continue the tradition of somewhat ambiguous descriptors for the standard flavors but give more precise names to the compute and memory flavors, which hopefully speak for themselves. A previous draft used Aussie beer sizes for the standard flavors: pony, small, pot, halfpint, pint, jug, partykeg…

Q: Why the strange/not-round memory sizes for the optimised flavors versus the standard?

A: To account for virtualisation overheads on the host (page tables, block cache, etc.). Take a browse through the AWS and Azure offerings and you’ll see they do this too. The reason for the discrepancy is that we do not expect much, if any, overcommitment with the optimised flavors, whereas we believe conservative overcommitment will be commonplace with the standard flavors (so there’s little reason to adjust for the virt overheads of these).

Parking Lot

Ideas / pipe-dreams go here (mostly requiring dev and ops work)…

  • Introduce a spot-instance flavor class and modify the filter scheduler to introduce a retry-style filter that iteratively removes running spot-instance metrics (in batches of some configurable spot-instance scheduling time-slice) from the scheduling meter data until running spot-instances are entirely ignored by the scheduler (thus logically preempted). The final scheduling outcome may then include a linked action to terminate some spot-instance/s on the chosen compute node.
  • Extend the existing Nova resource quota feature to set memory-cgroup parameters. Judicious use of the limit_in_bytes and soft_limit_in_bytes controls (particularly the latter) could be used to mix standard and memory optimised instances on the same compute node; e.g., a standard instance launched in a 1.5 ram_allocation_ratio aggregate would have a corresponding memory soft_limit = virtual_ram_size / 1.5, whereas a memory instance in a 1.0 ram_allocation_ratio aggregate would have memory soft_limit = virtual_ram_size (the arithmetic is sketched after this list).
  • Use containers (e.g., lxc) to fence/partition relevant resources on the compute nodes such that overcommit can be implemented consistently amongst classes of flavor. This might mean splitting a physical compute node into multiple compute containers, each acting as a nova-compute node.
  • Implement a new scheduler filter that will hard-limit the number of instances of any particular flavor running on a compute node – a possible work-around for the box-packing problem.
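And the soft-limit arithmetic from the memory-cgroup idea above, as a toy example (purely illustrative; nova does not implement this today):

```python
# Toy arithmetic for the memory-cgroup pipe-dream: the soft limit
# scales inversely with the aggregate's ram_allocation_ratio.
def memory_soft_limit_mb(virtual_ram_mb, ram_allocation_ratio):
    """soft_limit_in_bytes analogue: virtual_ram_size / ratio."""
    return int(virtual_ram_mb / ram_allocation_ratio)

# A 4096 MB standard instance in a 1.5x overcommitted aggregate:
print(memory_soft_limit_mb(4096, 1.5))   # ~2730 MB soft guarantee
# A 12288 MB memory-optimised instance, no overcommit:
print(memory_soft_limit_mb(12288, 1.0))  # 12288 MB
```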

R@CMon IDC Spotlight – AMD & DELL

Late in 2013, Monash was approached by AMD and Dell to provide information for an IDC report on the Monash Node of the NeCTAR Research Cloud. We were more than happy to contribute and highlight the success of our Node and NeCTAR as a whole, plus share lessons learned along the way.

Download the IDC Spotlight report (PDF)

Whilst it contains the requisite commercial “fluff” (the R@CMon tech team speaking here), we were keen to share our experience with enterprise customers who might take note of such reports and thus further assist OpenStack’s penetration in the enterprise sector.

Note that the report contains some outdated information about the full capability of the Monash Node, notably that we’ll have over 4500 cores of capacity, which we’ll correct next week when we post about the progress of phase 2!