
The Ramialison Group Analysis Workflow on R@CMon

The Ramialison Group at the Australian Regenerative Medicine Institute (ARMI), located in the biomedical research precinct of Monash University, Clayton, specialises in systems biology both on the bench and through computational analysis. Their work is driven by the in vivo and in silico dissection of the regulatory mechanisms involved in heart development, whose deregulation causes congenital heart disease; in Australia, 1 in every 100 babies is born with heart defects as a result.

Heatmap generated from transcriptomic data from heart samples (Nathalia Tan)

Their research focuses on identifying DNA elements that play a crucial role in the development of the heart and that could be impaired in disease. To identify these sequences, several genome-wide interrogation technologies (genomics and transcriptomics) are employed on different model organisms such as mouse and zebrafish. Downstream analysis of the data generated from these experiments involves high performance computing and requires large storage, which can run to hundreds of gigabytes for a single project.

To optimise their investigation into heart development, the R@CMon team has deployed a dedicated Decoding Heart Development and Disease (DHDD) server on the Monash node of the NeCTAR Research Cloud infrastructure, which has now been running for over a year. This has not only provided the group with faster processing speeds compared to running jobs on a local desktop, but also an appropriate file storage infrastructure, with persistent storage for files that are regularly accessed during analysis. Through VicNode, the group has been given vault storage for archiving completed results from their various research projects. With the assistance of R@CMon, the group has been able to easily add users to the server as it continues to grow with new members and local collaborators.

Web interface for the Trawler web service.

In addition to the DHDD server, the R@CMon team also assisted the Ramialison Group in deploying a dedicated cloud server that has been used to develop the Trawler motif discovery tool web service. The implementation of this tool allows the group to quickly and easily analyse next-generation sequencing data and identify overrepresented motifs, and has led to a manuscript that is currently in preparation. The Ramialison Group envisages developing further simple and easy-to-use bioinformatics analysis tools through R@CMon.

Analytical Standard Uncertainty Evaluation on R@CMon

Arvind Rajan is a scholar in the School of Engineering at Monash University Malaysia (Sunway campus). Arvind’s project, “Analytical Uncertainty Evaluation of Multivariate Polynomial”, supported by a Monash University Malaysia HDR scholarship and the Malaysia Fundamental Research Grant Scheme, extends the analytical method of the “Guide to the Expression of Uncertainty in Measurement” (GUM) by developing a systematic framework, the Analytical Standard Uncertainty Evaluation (ASUE), for the analytical evaluation of standard measurement uncertainty in non-linear systems. The framework is the first step towards simplifying and standardising the GUM analytical method for non-linear systems.
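
For context, the GUM analytical method that ASUE builds on propagates the standard uncertainties of the inputs through the first-order sensitivity coefficients of the measurement model. The sketch below illustrates that standard first-order propagation for a simple polynomial; the coefficients and input values are assumed example numbers, and this is the generic GUM approximation, not the ASUE framework’s own method.

```python
# First-order GUM uncertainty propagation for y = a*x1**2 + b*x1*x2.
# Generic illustration only: example values are assumed, and this is the
# plain GUM approximation rather than the ASUE framework's exact method.

a, b = 2.0, 0.5            # model coefficients (assumed)
x1, x2 = 3.0, 1.5          # best estimates of the inputs (assumed)
u_x1, u_x2 = 0.05, 0.02    # standard uncertainties of the inputs (assumed)

# Sensitivity coefficients: partial derivatives of y with respect to each input.
dy_dx1 = 2 * a * x1 + b * x2
dy_dx2 = b * x1

# Combined standard uncertainty for uncorrelated inputs:
# u_c(y)^2 = sum_i (dy/dx_i)^2 * u(x_i)^2
u_y = ((dy_dx1 * u_x1) ** 2 + (dy_dx2 * u_x2) ** 2) ** 0.5

y = a * x1 ** 2 + b * x1 * x2
print(f"y = {y:.3f} +/- {u_y:.3f} (first-order GUM approximation)")
```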

The ASUE Toolbox

The R@CMon team supported the ASUE team at Sunway in deploying the framework on the NeCTAR Research Cloud. The project was given access to the Monash-licensed Windows Server 2012 image and a Windows-optimised instance flavour for configuration of the Internet Information Services (IIS) and ASP.NET stack. The ASUE team developed and deployed the framework on NeCTAR using remote desktop access (yes, once again – even from overseas!). Mathematica, specifically webMathematica, is then used on the NeCTAR instance to power the web-based dynamic ASUE Toolbox. The ASUE Toolbox has been published in Measurement, a journal of the International Measurement Confederation (IMEKO), and in IEEE Access, an open access journal:

Y. C. Kuang, A. Rajan, M. P.-L. Ooi, and T. C. Ong, “Standard uncertainty evaluation of multivariate polynomial,” Measurement, vol. 58, pp. 483-494, Dec. 2014.

A. Rajan, M. P. Ooi, Y. C. Kuang, and S. N. Demidenko, “Analytical Standard Uncertainty Evaluation Using Mellin Transform,” IEEE Access, vol. 3, pp. 209-222, 2015.

“The NeCTAR Research Cloud is a great service for researchers to host their own website and share the outcome of their research with engineers, practitioners and other professional community. Honestly, if it is not for the NeCTAR Research Cloud, I doubt our team could have made it this far. The support has been incredible so far. I will continue to publish my work using this service.”

Arvind Rajan
Monash University Scholar
Electrical and Computer Systems Engineering

MaxQuant Proteomic Searches on R@CMon

David Stroud, NHMRC Doherty Fellow and member of the Ryan Lab in the Department of Biochemistry and Molecular Biology, Monash University, does proteomics research and uses the MaxQuant quantitative proteomics software as part of his analysis workflows. MaxQuant is designed for processing high-resolution mass spectrometry data and is freely available for the Microsoft Windows platform. The first step in the workflow is sample analysis using liquid chromatography-mass spectrometry (LC-MS) on a Thermo Orbitrap mass spectrometer. This step produces raw files containing spectra that represent thousands of peptides. The resulting raw files are then loaded into MaxQuant to perform searches, in which spectra are compared against a known list of peptides. A quantification step is then performed, enabling peptide abundance to be compared across samples. Once this process is completed, the resulting tab-delimited files are captured for downstream analysis.
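
As a rough illustration of that last, downstream step, the sketch below loads one of MaxQuant’s tab-delimited result tables into pandas and filters out common contaminant and decoy entries. The file name (proteinGroups.txt) and column names are typical of MaxQuant output but should be checked against the actual run; treat them as assumptions here rather than a description of David’s pipeline.

```python
# Minimal sketch of downstream handling of MaxQuant's tab-delimited output.
# Assumes a typical "proteinGroups.txt" from the combined/txt folder; the
# exact file and column names should be verified against the actual search.
import pandas as pd

df = pd.read_csv("proteinGroups.txt", sep="\t", low_memory=False)

# MaxQuant flags decoy and contaminant hits with "+"; column names vary
# slightly between versions ("Contaminant" vs "Potential contaminant").
for flag in ("Reverse", "Potential contaminant", "Contaminant"):
    if flag in df.columns:
        df = df[df[flag] != "+"]

# Keep identifier and quantification columns for comparison across samples.
quant_cols = [c for c in df.columns if c.startswith("LFQ intensity")]
summary = df[["Protein IDs"] + quant_cols]
print(summary.head())
```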

Inspection of results using the MaxQuant software.

MaxQuant searches are both CPU and IO intensive tasks. A typical search takes 24 to 48 hours, and in some cases up to a week, depending on the size of the raw files being processed. David had been running his workflow on his own machine with 8 cores, 16 gigabytes of memory (RAM) and a solid state drive (SSD) for storage, where a standard search took 2 to 3 weeks to complete. Performing large MaxQuant searches on the local machine became a struggle, and David needed a bigger machine with a desktop environment to scale up his analysis workflow. The R@CMon team assisted David in deploying the MaxQuant software on the Monash node of the NeCTAR Research Cloud on an m1.xxlarge instance, spawned from the Monash-licensed Windows Server 2012 image. MaxQuant searches on the NeCTAR instance show a 3-4x speed-up compared to the local machine; what took several weeks locally now takes just several days on the NeCTAR instance.

MaxQuant search of Thermo RAW files.

The R@CMon team are currently working with David to explore further scaling options. The high-memory and PCIe SSD-enabled specialist kit on R@CMon Phase 2 can be exploited by MaxQuant for bursting IO intensive activities during searches. More on this coming soon!

Stock Price Impact Models Study on R@CMon Phase 2 (Update)

A mere six months ago, Paul Lajbcygier and his research group started using R@CMon Phase 2 “specialist kit” for processing and analysing high-frequency stock data as part of their stock price impact models study. Since then, they have been running extraction queries continuously and recently published a paper highlighting their latest findings, acknowledging the NeCTAR Research Cloud infrastructure.

Lajbcygier, P., Sojka, J. (2015). “The Viability of Alternative Indexation when including all Costs”, International Review of Financial Analysis.

The group will continue to use the high-memory instance on R@CMon Phase 2 as they progress their research pipeline and the R@CMon team will continue to support them on their journey.

“I expect that over the coming months we will fully utilise the generous resources on the Monash node of the NeCTAR Research Cloud as we extend our research into this cutting edge and exciting data intensive topic.”

Associate Professor Paul Lajbcygier
Faculty of Business and Economics
Department of Accounting and Finance
Department of Banking and Finance
Monash University

Rail Network Catastrophe Analysis on R@CMon

Monash University, through the Institute of Railway Technology (IRT), has been working on a research project with Vale S.A., a Brazilian multinational metals and mining corporation and one of the largest logistics operators in Brazil, to continuously monitor and assess the health of the Carajás Railroad (EFC) mixed-use rail network in Northern Brazil. This project will identify locations that produce “significant dynamic responses”, with the aim of enabling proactive maintenance to prevent catastrophic rail failure. As part of this project, IRT researchers have been involved in (a) the analysis of the collected data and (b) the establishment of a database with visualisation capabilities that allows interrogation of the analysed data.

GPU-powered DataMap analysis and visualisation on R@CMon.

Researchers use the DataMap analysis software for data interrogation and visualisation. DataMap is a Windows-based client-server tool that integrates data from various measurement and recording systems into a geographical map. Traditionally the researchers ran the software on a commodity laptop with a dedicated GPU, connecting to their database server. To scale to larger models, conduct more rigorous analysis and visualisation, and support remote collaboration, the system of tools needed to go beyond the laptop.
The R@CMon team supported IRT in deploying the software on the NeCTAR Research Cloud. The deployed instance runs on a Monash-licensed Windows flavour with GPU passthrough to support DataMap’s DirectX requirements.

Through the Research Cloud, IRT researchers and their Vale S.A. counterparts are able to collaborate on modelling, analysis and results via remote access to the GPU-enabled virtual machines.

“The assistance of R@CMon in providing virtual machines that have GPU support has been instrumental in facilitating global collaboration between staff located at Vale S.A. (Brazil) and Monash University (Australia).”
Dr. Paul Reichl
Senior Research Engineer and Data Scientist
Institute of Railway Technology

The CVL on R@CMon Phase 2

Monash is home to the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE), a national facility for the imaging and characterisation community. An important and rather novel feature of the MASSIVE compute cluster is its interactive desktop visualisation environment, which assists users in the characterisation process. The MASSIVE desktop environment provided part of the inspiration for the Characterisation Virtual Laboratory (CVL), a NeCTAR Virtual Laboratory project that combines specialist visualisation and rendering software tools from a variety of disciplines and makes them available on and through the NeCTAR Research Cloud.

The recently released monash-02 zone of the NeCTAR cloud provides enhanced capability to the CVL, bringing a critical mass of GPU-accelerated cloud instances. monash-02 includes ten GPU-capable hypervisors, currently able to provide up to thirty GPU-accelerated instances via direct PCI passthrough. Most of these are NVIDIA GRID K2 GPUs (CUDA 3.0 capable), though we also have one K1. Special thanks to NVIDIA for providing us with a couple of seed units to get this going and supplement our capacity! After consultation with various users we created the following set of flavors/instance-types for these GPUs:

Flavor name       | #vcores | RAM (MB) | /dev/vda (GB) | /dev/vdb (GB)
mon.r2.5.gpu-k2   | 1       | 5400     | 30            | N/A
mon.r2.10.gpu-k2  | 2       | 10800    | 30            | 40
mon.r2.21.gpu-k2  | 4       | 21700    | 30            | 160
mon.r2.63.gpu-k2  | 12      | 65000    | 30            | 320
mon.r2.5.gpu-k1   | 1       | 5400     | 30            | N/A
mon.r2.10.gpu-k1  | 2       | 10800    | 30            | 40
mon.r2.21.gpu-k1  | 4       | 21700    | 30            | 160
mon.r2.63.gpu-k1  | 12      | 65000    | 30            | 320
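
For researchers using these flavors directly as plain IaaS (rather than via the CVL), launching a GPU instance works like launching any other NeCTAR instance. Below is a minimal, illustrative sketch using the openstacksdk Python client; the cloud name, image, key pair and network are placeholder assumptions, and only the flavor name comes from the table above.

```python
# Hypothetical sketch: launching a GPU-flavored instance on the NeCTAR
# Research Cloud with the openstacksdk Python client. The cloud name,
# image, key pair and network below are placeholders, not real values.
import openstack

conn = openstack.connect(cloud="nectar")  # credentials come from clouds.yaml

server = conn.create_server(
    name="cvl-gpu-test",
    image="CentOS 6.6 (CVL base)",   # placeholder image name
    flavor="mon.r2.21.gpu-k2",       # one of the GPU flavors listed above
    key_name="my-keypair",           # placeholder key pair
    network="monash-02-network",     # placeholder network name
    wait=True,
    auto_ip=True,
)
print(server.name, server.status)
```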

R@CMon has so far dedicated two of these GPU nodes to the CVL, and this is our preferred method for using this equipment, as the CVL provides a managed environment and queuing system for access (regular plain IaaS usage is available where needed). There were some initial hiccups getting the CVL’s base CentOS 6.6 image working with the NVIDIA drivers on these nodes, which were solved by moving to a newer kernel; some performance-tuning tasks still remain. However, the CVL has now been updated to make use of the new GPU flavors on monash-02, as demonstrated in the following video…

GPU-accelerated Chimera application running on the CVL, showing the structure of human follicle-stimulating hormone (FSH) and its receptor.

If you’re interested in using GPGPUs on the cloud please contact the R@CMon team or Monash eResearch Centre.

3D Stellar Hydrodynamics Volume Rendering on R@CMon Phase 2

Simon Campbell, Research Fellow in the Faculty of Science, Monash University, has been running large-scale 3D stellar hydrodynamics parallel calculations on the Magnus supercomputing facility at iVEC and on Raijin, the national peak facility at NCI. These calculations aim to improve 1D modelling of the core helium burning (CHeB) phase of stars using a novel multi-dimensional fluid dynamics approach. The improved models will have significant impact on many fields of astronomy and astrophysics, such as stellar population synthesis, galactic chemical evolution and the interpretation of extragalactic objects.

The parallel calculations generate raw data dumps (heavy data) containing several scalar variables, which are pre-processed and converted into HDF5. A custom script is used to extract the metadata (light data) into XDMF format, a standard format used by HPC codes and recognised by various scientific visualisation applications such as ParaView and VisIt. The stellar data are loaded into VisIt and visualised using volume rendering. Initial volume renderings were done on a modest dual-core laptop using only low-resolution models (200 x 200 x 100, 10⁶ zones). It was identified that applying the same visualisation workflow to high-resolution models (400 x 200 x 200, 10⁷ zones and above) would require a parallel (MPI) build of VisIt running on a higher performance machine.
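
The conversion step is essentially: write the scalar fields into an HDF5 file, then emit a small XDMF XML file whose DataItem entries point into that HDF5 file so that VisIt or ParaView can open it. The sketch below shows one way this could look in Python with h5py; the variable name, grid dimensions and file names are illustrative assumptions, not the group’s actual script.

```python
# Illustrative sketch of the HDF5 + XDMF conversion step (not the actual
# script used by the group). Writes one scalar field on a regular grid to
# HDF5 and a minimal XDMF wrapper that VisIt/ParaView can open.
import h5py
import numpy as np

nx, ny, nz = 200, 200, 100                 # assumed low-resolution grid
velocity = np.random.rand(nz, ny, nx)      # stand-in for a real data dump

with h5py.File("dump0001.h5", "w") as f:
    f.create_dataset("velocity", data=velocity)

xdmf = f"""<?xml version="1.0" ?>
<Xdmf Version="2.0">
  <Domain>
    <Grid Name="stellar" GridType="Uniform">
      <Topology TopologyType="3DCoRectMesh" Dimensions="{nz} {ny} {nx}"/>
      <Geometry GeometryType="ORIGIN_DXDYDZ">
        <DataItem Dimensions="3" Format="XML">0.0 0.0 0.0</DataItem>
        <DataItem Dimensions="3" Format="XML">1.0 1.0 1.0</DataItem>
      </Geometry>
      <Attribute Name="velocity" AttributeType="Scalar" Center="Node">
        <DataItem Dimensions="{nz} {ny} {nx}" NumberType="Float"
                  Precision="8" Format="HDF">dump0001.h5:/velocity</DataItem>
      </Attribute>
    </Grid>
  </Domain>
</Xdmf>
"""
with open("dump0001.xmf", "w") as f:
    f.write(xdmf)
```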

Snapshot of turbulent convection deep inside the core of a star that has a mass 8 times that of the Sun. Colours indicate gas moving at different velocities. Volume rendered in parallel using VisIt + MPI.

R@CMon Phase 2 to the rescue! The timely release of R@CMon Phase 2 provided the computational grunt required to perform these high-resolution volume renderings. The new specialist kit in this release includes hypervisors housing 1 TB of memory. The R@CMon team allocated a share (~50%, ~460 GB of memory) of one of these high-memory hypervisors for the high-resolution volume renderings. Persistent storage on R@CMon Phase 2 is also provided on the computational instance for ingesting data from the supercomputing facilities and storing processing and rendering results. VisIt has been rebuilt on the high-memory instance, this time with MPI capabilities and XDMF support.

Initial parallel volume rendering using 24 processes shows a ~10x speed-up. Medium-resolution (400 x 200 x 200, 10⁷ zones) and high-resolution (800 x 400 x 400, 10⁸ zones) plots are now being generated on the high-memory instance seamlessly, and an even higher resolution (1536 x 1024 x 1024, 10⁹ zones) simulation is currently running on Magnus. The resulting datasets from this simulation, expected to be several hundred gigabytes in size, will then be put through the same parallel volume rendering workflow.
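
For reference, parallel renderings of this kind are typically driven through VisIt’s Python CLI with an MPI engine (e.g. launched as visit -cli -nowin -np 24 -s render.py). The script below is a hedged sketch of such a driver, not the group’s actual workflow; the database path, variable name and renderer settings are assumptions.

```python
# render.py -- hedged sketch of a parallel VisIt volume-rendering driver.
# Typically launched as:  visit -cli -nowin -np 24 -s render.py
# The database path and variable name below are assumptions.

OpenDatabase("dump0001.xmf")          # XDMF file pointing at the HDF5 dump
AddPlot("Volume", "velocity")         # volume-render the assumed scalar field

# Use ray casting so the rendering work is done by the parallel compute engine.
v = VolumeAttributes()
v.rendererType = v.RayCasting
SetPlotOptions(v)

DrawPlots()

# Save the rendered frame to a PNG image.
s = SaveWindowAttributes()
s.format = s.PNG
s.fileName = "stellar_volume"
SetSaveWindowAttributes(s)
SaveWindow()
```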