
Monash University, NVIDIA and ARDC partner to explore the offloading of security in collaborative research applications

Collaboration in the research sector (universities) places demands on infrastructure that make it a microcosm of the future Internet.

Why is this? Researchers are increasingly connected, increasingly participating in grand challenge problems, and increasingly reliant on technology. Problem solving for big global challenges, as distinct from fundamental research, can involve large-scale human-related data, which is sensitive and sometimes commercial-in-confidence. Researchers are rewarded for being first to a discovery. One way to accelerate discovery is to be “first to market” with disruptive technology – that is, to develop the foundational research discovery tool (think software or an instrument that provides the unique lens to see the solution; a “21st century microscope”, so to speak). If we think of research communities as instrument designers and builders, they must then build the scientific applications that span the Internet (across local infrastructure, public cloud and edge devices).

What might an example 21st century microscope for a mission-based problem look like? One that proves the effectiveness of an experimental machine-learning algorithm running on an NVIDIA Jetson-connected edge device controlling a building’s battery. The algorithm is informed by bleeding-edge economics theory, participates in a microgrid of power generators (e.g. solar), storage and consumers (buildings) at the scale of a small city, and is itself connected to the local power grid. Through the Smart Energy City project within the Net Zero Initiative, we are building just that.

A tension is observed between mission-based endeavours involving researchers from any number of organisations, and the responsibility for data governance, which ultimately resides with each researcher’s organisation. Contemporary best practice in technological and process controls adds more work for researchers and technology alike, potentially slowing research down. And yet cyber threats are an exponentially growing reality that cannot be ignored. How do we make it safe and easy for researchers to explore and develop instruments in this ecosystem? How do we create an environment that scales to any number of research missions?

What technological and process approach enables a globe’s worth of individual research contributions to mission-based problems, while also scaling with the evolving cyber landscape?

In February, NVIDIA, Monash University’s eResearch Centre, Monash University’s Cyber Risk & Resilience team and the Australian Research Data Commons (ARDC) commenced a partnership to explore the role data processing units (DPUs) play in this microcosm. Monash now hosts ten NVIDIA BlueField-2 DPUs in its Research Cloud, essentially a private cloud, which itself forms part of the ARDC Nectar Research Cloud, Australia’s federated research cloud, funded through the National Collaborative Research Infrastructure Strategy (NCRIS). The partnership will explore the paradigm of offloading (what is ultimately) micro-segmentation onto DPUs, removing the burden of increased security from CPUs, GPUs and top-of-rack / top-of-organisation security appliances. Concurrently, Monash is exploring a range of contemporary appliances, micro-segmentation software and automations of research data governance.
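For a flavour of what micro-segmentation per research application can mean in practice, here is a minimal sketch. The partnership is still exploratory, so this is an illustration rather than our implementation: it renders a default-deny nftables policy from an application’s flow allow-list, and a ruleset of this kind could be enforced on a BlueField-2’s Arm cores rather than on the host. The application name, subnets and ports are all hypothetical.

```python
# Illustrative only: render a per-application micro-segmentation policy
# of the kind that could be pushed down to a DPU, so enforcement happens
# on the NIC rather than the host CPU. Names/addresses are hypothetical.

def render_policy(app, allowed_flows):
    """Render an nftables ruleset: drop everything crossing the segment
    boundary except explicitly allowed (src, dst, port) TCP flows."""
    lines = [
        f"table inet {app}_segment {{",
        "    chain forward {",
        "        type filter hook forward priority 0; policy drop;",
        "        ct state established,related accept",
    ]
    for src, dst, port in allowed_flows:
        lines.append(f"        ip saddr {src} ip daddr {dst} tcp dport {port} accept")
    lines += ["    }", "}"]
    return "\n".join(lines)

# A hypothetical research application: edge devices may reach the ingest
# service, the ingest service may reach the database, and nothing else.
print(render_policy("smart_energy", [
    ("192.0.2.0/24", "198.51.100.10", 8443),   # Jetson fleet -> ingest API
    ("198.51.100.10", "198.51.100.20", 5432),  # ingest -> database
]))
```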

Steve Quenette, Deputy Director of the Monash eResearch Centre and lead of this project, states:

“Micro-segmenting per-research application would ultimately enable specific datasets to be controlled tightly (more appropriately firewalled) and actively & deeply monitored, as the data traverses a researcher’s computer, edge devices, safe havens, storage, clouds and HPC. We’re exploring the idea that the boundaries of data governance are micro-segmented, not the organisation or infrastructures. By offloading technology and processes to achieve security, the shadow-cost of security (as felt by the researcher, e.g. application hardening and lost processing time) is minimised, whilst increasing the transparency and controls of each organisation’s SOC. It is a win-win to all parties involved.”

Dan Maslin, Monash University Chief Information Security Officer:

“As we continue to push the boundaries of research technology, it’s important that we explore new and innovative ways that utilise bleeding edge technology to protect both our research data and underpinning infrastructure. This partnership and the exploratory use of DPUs is exciting for both Monash University and the industry more broadly.”

Carmel Walsh, Director eResearch Infrastructure & Service, ARDC:

“To support research at a national and international level requires investment in leading edge technology. The ARDC is excited to partner with the Monash eResearch Centre and NVIDIA to explore how to apply DPUs to research computing and how to scale this technology nationally to provide our Australian researchers with the competitive advantage.”

This is an example of the emerging evolution in security technology towards “security everywhere”, or distributed security. By making the security function orthogonal to the application (including the operating system), the data centre (Monash in this case) can effect its own chosen depth of introspection and enforcement, at the same rate that clouds and applications grow.

“The transformation of the data center into the new unit of computing demands zero-trust security models that monitor all data center transactions in real time,” said Ami Badani, Vice President of Marketing at NVIDIA. “NVIDIA is collaborating with Monash University on pioneering cybersecurity breakthroughs powered by the NVIDIA Morpheus AI cybersecurity framework, which uses machine learning to anticipate threats with real-time, all-packet inspection.”

We are presently forming the team, involving cloud and security office staff, and performing preliminary investigations in our test cloud. We expect to communicate findings incrementally over the year.

Disruptive change in the clinical treatment of pancreatic cancer

Professor Jenkins’ research focuses on pancreatic cancer, an inflammation-associated cancer and the fourth most common cause of cancer death worldwide, with an extremely low five-year survival rate of around 5%. Typically, studies compare gene expression patterns between normal and cancerous pancreas in order to identify unique signatures, which can be indicative of sensitivity or resistance to specific chemotherapeutic treatments.

“Using next generation gene sequencing, involving big instruments, big data and big computing – allows near-term disruptive change in the clinical treatment of pancreatic cancer.” – Prof. Jenkins, Monash Health.

To date, gene expression studies have largely focused on samples taken from open surgical biopsy, a procedure known to be very invasive and only possible in 20% of pancreatic cancers. Prof Jenkins’ group, in collaboration with Dr Daniel Croagh from the Department of Upper Gastrointestinal and Hepatobiliary Surgery at Monash Medical Centre, recently trialled an alternative, less invasive process available to nearly all pancreatic cancer patients, known as endoscopic ultrasound-guided fine-needle aspirate (EUS-FNA), which uses a thin, hollow needle to collect samples of cells from which genetic material can be extracted and analysed. The challenge then becomes to ensure that gene sequencing from EUS-FNA samples is comparable to that from open surgical biopsy, such that established analysis and treatment can be used.


Twenty-four EUS-FNA-derived genetic samples from normal and cancerous pancreas were sequenced at the MHTP Medical Genomics Facility, producing a total of 40Gb of raw data. These data were securely transferred onto R@CMon by the Monash Bioinformatics Platform for processing, statistical analysis and computational exploration using state-of-the-art bioinformatics methods.
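As a toy illustration of the kind of per-gene statistical comparison described above (not the Platform’s actual pipeline, which would use dedicated tools such as edgeR or DESeq2 on raw counts), one can test each gene for differential expression between normal and tumour samples and correct for multiple testing. The data below are randomly generated, and the 12/12 sample split is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes = 5000
# Synthetic log-expression matrices; a 12 normal / 12 tumour split is assumed.
normal = rng.normal(8.0, 1.0, size=(n_genes, 12))
tumour = rng.normal(8.0, 1.0, size=(n_genes, 12))
tumour[:50] += 2.0  # plant 50 differentially expressed "signature" genes

_, p = stats.ttest_ind(tumour, normal, axis=1)  # per-gene two-sample t-test

# Benjamini-Hochberg correction to control the false discovery rate.
order = np.argsort(p)
adjusted = p[order] * n_genes / np.arange(1, n_genes + 1)
q = np.minimum.accumulate(adjusted[::-1])[::-1]  # enforce monotonicity
significant = np.zeros(n_genes, dtype=bool)
significant[order] = q < 0.05
print(f"{significant.sum()} genes flagged as differentially expressed")
```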


Results thus far show that data from EUS-FNA-derived samples are of high quality and allow the identification of gene expression signatures between normal and cancerous pancreas. Professor Jenkins’ group is now confident that EUS-FNA-derived material not only has the potential to capture nearly all pancreatic cancer patients (compared to ~20% by surgery), but also to improve patient management and treatment in the clinic.

“The current clinical genomics research space requires specialized high performance computational and storage infrastructure to support the processing and long term storage of those so-called “big data”. Thus R@CMon plays a major role in the discovery and development of new therapies and the improvement of Human health care in general.” Roxane Legaie, Senior Bioinformatician, Monash Bioinformatics Platform


R@CMon hosted Australia’s first Ceph Day

Ceph Days are a series of regular events in support of the Ceph open source community, now held at locations all around the world. In November, R@CMon hosted Australia’s first Ceph Day. The day drew 70-odd guests, many of whom were from interstate and a few from overseas. Participants came from the research sector, private industry and ICT providers. It was a fantastic coming-together of Australia’s growing Ceph community.

If you don’t already know, Ceph is basically an open-source technology for software-defined cluster-based storage.  It means our storage backend is essentially infinitely scalable, and our focus can shift to the access mechanisms for data.
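To make that concrete: everything in Ceph bottoms out in objects in RADOS, and block devices, filesystems and S3/Swift front-ends are access mechanisms layered over them. Here is a minimal sketch using the python-rados bindings; the pool name and config path are assumptions for illustration.

```python
import rados

# Connect to the cluster using a local Ceph config (path assumed).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")  # an existing pool, name assumed
    try:
        # Write an object; Ceph replicates/distributes it behind the scenes.
        ioctx.write_full("hello_object", b"stored and replicated by Ceph")
        print(ioctx.read("hello_object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```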


R@CMon has pioneered the adoption of Ceph for accessible research data storage, and in mid-2013 was the first NeCTAR Research Cloud node to provide un-throttled volume storage. R@CMon has also worked closely with what was InkTank and is now Red Hat to develop the support model for such an enterprise (see Ceph Enterprise – a disruptive period in the storage marketplace).

The day began with the Ceph Community Director, Patrick McGarry. His presentation covered the upcoming expanded Ceph metrics platform, what the Ceph User Committee has been up to, new community infrastructure for a better contributor experience, and revised open source governance.

Undoubtedly the highlight of the day was the joint talk given by R@CMon’s very own director, Steve Quenette, and technical lead, Blair Bethwaite. In it we explain Ceph in the context of the 21st century microscope – the tool each researcher creates to do modern-day research. We also explain how we technically approached creating our fabric.

R@CMon announced as a Mellanox “HPC Center of Excellence”

At SuperComputing 2015 in Austin, our network/fabric partner Mellanox announced R@CMon (Monash University) as a “HPC Centre of Excellence”. A core goal of the HPC CoE is to drive the technological innovations required for the next generation of (exascale) supercomputing, whilst also ensuring that such an exascale computer is relevant to modern research. R@CMon is a stand-out pioneer in converging cloud, HPC and data, all of which are key to the “next generation”.

“We see Monash as a leader in Cloud and HPC on the Cloud with Openstack, Ceph and Lustre on our Ethernet CloudX platform.” Sudarshan Ramachandran, Regional Sales Manager, Australia & New Zealand

From a fabric innovation point of view, it has been a very productive and exciting 24 months for R@CMon. By early 2014 the internal Monash University HPC system “MCC” was burst onto the Research Cloud, allowing a researcher’s own merit allocation to be leveraged with institutional investment. It also represents a shift towards soft HPC, where the size of an HPC system changes regularly with time. Earlier this year we announced our early adoption of RoCE (RDMA over Converged Ethernet) using Mellanox technologies. This meant the same fabric used for cloud networking could also be used for HPC and data storage backplanes. In turn, MCC on R@CMon also enabled RDMA communications – that is, real HPC performance on an otherwise orchestrated cloud.
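For context, the kind of workload this enables is ordinary MPI code: the RDMA fabric simply lets the MPI library move buffers NIC-to-NIC without the kernel in the data path. A generic sketch using mpi4py (an illustration, not a benchmark of our fabric):

```python
# Run with e.g.: mpirun -np 4 python allreduce.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank holds a per-node partial result.
local = np.full(1_000_000, rank, dtype=np.float64)
total = np.empty_like(local)

# The reduction is where an RDMA-capable fabric (e.g. RoCE) pays off:
# buffers move between NICs without staging through the kernel.
comm.Allreduce(local, total, op=MPI.SUM)

if rank == 0:
    print(f"reduced across {comm.Get_size()} ranks; total[0] = {total[0]}")
```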


Finally, at the Tokyo OpenStack Summit 2015, Mellanox announced R@CMon as debuting the world’s first 100G end-to-end cloud. This technology eases scaling and heterogeneity of performance aspects. In particular, it sets the basis for processor and storage performance for peak and converged cloud/HPC needs. Watch this space!


R@CMon Storage

Our journey towards R@CMon Storage (Storage-as-a-Service)…

In May 2013 R@CMon went live with an OpenStack cell within the NeCTAR (Australian) Research Cloud confederation. It was an innovation in its own right, targeting the commodity end of both the fundamental and translational research needs of Australia (see R@CMon IDC Spotlight – AMD & DELL). Our technical partner, Dell, has successfully applied the design pattern to many subsequent Research Cloud nodes and to many other OpenStack-based private cloud deployments, both nationally and internationally. Shortly after the launch of this initial IaaS compute cell, we introduced Ceph-based volume storage, becoming the first volume storage service on the Research Cloud and, in doing so, instigating a collaboration with InkTank (now Red Hat). By November 2014 R@CMon launched the “Phase 2” specialist IaaS cell, an “e”-resource motivated by research that pushes boundaries. Within this cell R@CMon added an RDMA-capable interconnect to our storage and compute fabric, instigating an innovative technical collaboration with Mellanox.

Thus R@CMon is an environment to build what we call “21st Century Microscopes” – where researchers orchestrate the instruments, compute, storage, analysis and visualisation themselves, looking down and tuning this 21st century lens, using big data and big computing to make new discoveries.

Accordingly, R@CMon is also an environment for innovative data services for the long tail (if you like, more ICT-like). Unashamedly, our instances of Ceph are what we call “enterprise”, whilst each user or tenant has their own needs for file protocol, capacity and latency.

R@CMon Storage is a collection of storage access methods and underlying storage infrastructure products. Why do we present storage as both front-ends and infrastructure? Because most users want access methods (it should just work), while most microscope builders want infrastructure (it should be a building block). R@CMon Storage is also the Monash operating centre for VicNode, where we explain some of these products.

We now have a series of R@CMon Storage products and services available, spanning infrastructure products, access methods and data management.
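As a taste of the “building block” view, here is what self-service volume storage looks like through the OpenStack API, sketched with the openstacksdk. The cloud name, size and volume name are hypothetical.

```python
import openstack

# "monash" is a hypothetical entry in the user's clouds.yaml.
conn = openstack.connect(cloud="monash")

# Ask the cloud for a 250GB volume; Ceph provisions it behind the API.
volume = conn.block_storage.create_volume(size=250, name="genomics-scratch")
print(volume.id, volume.status)  # typically "creating" until ready
```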


Australia’s Largest University Selects Mellanox CloudX Platform and Open Ethernet Switch Systems for Nationwide Research Initiative

Yesterday Mellanox made the following press release – “Australia’s Largest University Selects Mellanox CloudX Platform and Open Ethernet Switch Systems for Nationwide Research Initiative“. Through Monash University’s own co-investment into R@CMon, the Mellanox CloudX products were chosen as the networking technology for Phase 2, providing RDMA-capable networking within and between the R@CMon Research Cloud and Data (RDSI) facilities. This means our one fabric can run multi-host MPI workloads and leverage fast I/O storage, yet remain near the cost-point of commodity networking for the resources that are generic and commodity.

This is a key ingredient to the “21st Century Microscope”, where researchers orchestrate the instruments, compute, storage, analysis and visualisation themselves, looking down and tuning this 21st century lens, using big data and big computing to make new discoveries. R@CMon has been designed to be the platform where Australian researchers can lead the way in establishing their own 21st century microscope – for themselves and for their communities.

Once again Monash is leading platform technology innovation and accessibility by example. Through 2015 we look forward to optimising this technology and encouraging increased self-service of these sorts of technologies.



R@CMon Phase 2 is here!

Back in 2012, our submission to NeCTAR planned R@CMon as being delivered in two phases: first a commodity phase, letting the ideals of en masse computing dominate technical choices. We have been operating phase 1 since May 2013. Our new specialist second phase went live in October! R@CMon phase 2 (the R@CMon RDC cell) scales out high-performing and accelerating hardware as driven by the demands of the precinct. Often ‘big data’ is just not possible without ‘big memory’ to hold the problem space without going to disk (~100× slower). Often ‘more memory’ is the barrier, not ‘more cores’. Often it is ‘I need to interact with a 3D model’. And so on. R@CMon is truly now a scalable and critical mass of self-service, on-demand computing infrastructure. It is also the play-pit where research leaders can build their own 21st century microscopes.


One of the four racks of NeCTAR monash-02. From top to bottom: Mellanox 56G switches, management switch, R820 compute nodes, R720 Ceph storage nodes

In addition to phase 1, phase 2 adds:

  • 2064 new Intel virtual cores
  • 3 nodes with 1TB of RAM
  • 10 nodes with GPUs for 3D desktops
  • 3 nodes (the large memory ones) with high-performance PCIe SSD
  • All standard compute nodes mix SAS & SSD for low-latency local ephemeral storage
  • All nodes with RDMA-capable networking (Remote Direct Memory Access – the stuff that makes fast, large-scale, multi-node HPC jobs possible)

As with phase 1, the entire infrastructure is orchestrated through OpenStack and presented on the Australian Research Cloud. R@CMon is once again pioneering research cloud infrastructure, virtualising all these specialist resources.
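To illustrate what virtualising these specialist resources means for a researcher: a large-memory node is requested like any other instance. A hedged sketch with the openstacksdk, where the cloud, flavor and image names are hypothetical stand-ins:

```python
import openstack

conn = openstack.connect(cloud="monash")  # hypothetical clouds.yaml entry

# Hypothetical names standing in for a specialist 1TB-RAM flavor
# and a stock image on the Research Cloud.
flavor = conn.compute.find_flavor("m2.xxlarge-1tb")
image = conn.compute.find_image("ubuntu-lts")

# Boot a big-memory instance through the same self-service API
# used for commodity instances.
server = conn.compute.create_server(
    name="big-memory-analysis",
    flavor_id=flavor.id,
    image_id=image.id,
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```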

Over the next week we’ll blog with emerging examples of GPUs, SSDs and 1TB memory machines…


One of the specialist nodes – a quad-socket R820 with 1TB RAM and high-performance PCIe-attached flash