Tag Archives: Remote Desktop

The CVL on R@CMon Phase 2

Monash is home to the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE), a national facility for the imaging and characterisation community. An important and rather novel feature of the MASSIVE compute cluster is the interactive desktop visualisation environment available to assist users in the characterisation process. The MASSIVE desktop environment provided part of the inspiration for the Characterisation Virtual Laboratory (CVL), a NeCTAR VL project combining specialist software visualisation and rendering tools from a variety of disciplines and making them available on and through the NeCTAR research cloud.

The recently released monash-02 zone of the NeCTAR cloud provides enhanced capability to the CVL, bringing a critical mass of GPU-accelerated cloud instances. monash-02 includes ten GPU-capable hypervisors, currently able to provide up to thirty GPU-accelerated instances via direct PCI passthrough. Most of these are NVIDIA GRID K2 GPUs (CUDA compute capability 3.0), though we also have one K1. Special thanks to NVIDIA for providing us with a couple of seed units to get this going and supplement our capacity! After consultation with various users, we created the following set of flavors/instance-types for these GPUs:

Flavor name         #vcores   RAM (MB)   /dev/vda (GB)   /dev/vdb (GB)
mon.r2.5.gpu-k2     1         5400       30              N/A
mon.r2.10.gpu-k2    2         10800      30              40
mon.r2.21.gpu-k2    4         21700      30              160
mon.r2.63.gpu-k2    12        65000      30              320
mon.r2.5.gpu-k1     1         5400       30              N/A
mon.r2.10.gpu-k1    2         10800      30              40
mon.r2.21.gpu-k1    4         21700      30              160
mon.r2.63.gpu-k1    12        65000      30              320
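Choosing between these flavors is mostly a matter of matching a job's core and memory needs to the smallest instance that fits. A minimal sketch of that selection, using the K2 sizes from the table above (the `smallest_fit` helper is ours, not part of any NeCTAR tooling):

```python
# Hypothetical helper: pick the smallest GPU flavor that satisfies a
# job's vcore and RAM requirements. Names and sizes are copied from the
# monash-02 flavor table; the selection logic is our own illustration.

FLAVORS = {
    # name: (vcores, ram_mb, vda_gb, vdb_gb or None)
    "mon.r2.5.gpu-k2":  (1,  5400,  30, None),
    "mon.r2.10.gpu-k2": (2,  10800, 30, 40),
    "mon.r2.21.gpu-k2": (4,  21700, 30, 160),
    "mon.r2.63.gpu-k2": (12, 65000, 30, 320),
}

def smallest_fit(vcores_needed, ram_mb_needed):
    """Return the name of the smallest flavor meeting both requirements."""
    candidates = [
        (ram, name) for name, (vc, ram, _, _) in FLAVORS.items()
        if vc >= vcores_needed and ram >= ram_mb_needed
    ]
    return min(candidates)[1] if candidates else None

print(smallest_fit(2, 8000))  # a 2-core, 8 GB job
```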

R@CMon has so far dedicated two of these GPU nodes to the CVL, and this is our preferred method of access to this equipment, as the CVL provides a managed environment and queuing system (regular plain IaaS usage is available where needed). There were some initial hiccups getting the CVL’s base CentOS 6.6 image working with the NVIDIA drivers on these nodes, solved by moving to a newer kernel, and some performance-tuning tasks still remain. However, the CVL has now been updated to make use of the new GPU flavors on monash-02, as demonstrated in the following video…

GPU-accelerated Chimera application running on the CVL, showing the structure of human follicle-stimulating hormone (FSH) and its receptor.

If you’re interested in using GPGPUs on the cloud please contact the R@CMon team or Monash eResearch Centre.

3D Stellar Hydrodynamics Volume Rendering on R@CMon Phase 2

Simon Campbell, Research Fellow from the Faculty of Science, Monash University, has been running large-scale 3D stellar hydrodynamics parallel calculations on Magnus, the supercomputing facility at iVEC, and Raijin, the national peak facility at NCI. These calculations aim to improve 1D modelling of the core helium burning (CHeB) phase of stars using a novel multi-dimensional fluid dynamics approach. The improved models will have significant impact on many fields of astronomy and astrophysics such as stellar population synthesis, galactic chemical evolution and the interpretation of extragalactic objects.

The parallel calculations generate raw data dumps (“heavy data”) containing several scalar variables, which are pre-processed and converted into HDF5. A custom script is used to extract the metadata (“light data”) into XDMF, a standard format used by HPC codes and recognised by various scientific visualisation applications such as ParaView and VisIt. The stellar data are loaded into VisIt and visualised using volume rendering. Initial volume renderings were done on a modest dual-core laptop using only low-resolution models (200 x 200 x 100, 10^6 zones). It was identified that applying the same visualisation workflow to high-resolution models (400 x 200 x 200, 10^7 zones and above) would require a parallel (MPI) build of VisIt running on a higher-performance machine.
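The "light data" step amounts to writing a small XML file whose DataItems point into the HDF5 dumps. A minimal sketch of such a generator is below; the grid name, dataset path and dimensions are illustrative stand-ins, not the actual layout of the stellar hydrodynamics dumps:

```python
# Sketch: generate XDMF "light data" describing a scalar field stored in
# an HDF5 dump, so tools like VisIt or ParaView can open it. The schema
# follows the standard XDMF uniform-grid layout; names are illustrative.

XDMF_TEMPLATE = """<?xml version="1.0" ?>
<Xdmf Version="2.0">
  <Domain>
    <Grid Name="star" GridType="Uniform">
      <Topology TopologyType="3DCoRectMesh" Dimensions="{nz} {ny} {nx}"/>
      <Geometry GeometryType="ORIGIN_DXDYDZ">
        <DataItem Dimensions="3" Format="XML">0.0 0.0 0.0</DataItem>
        <DataItem Dimensions="3" Format="XML">{dz} {dy} {dx}</DataItem>
      </Geometry>
      <Attribute Name="{field}" AttributeType="Scalar" Center="Node">
        <DataItem Dimensions="{nz} {ny} {nx}" NumberType="Float"
                  Precision="8" Format="HDF">{h5file}:/{field}</DataItem>
      </Attribute>
    </Grid>
  </Domain>
</Xdmf>
"""

def write_xdmf(h5file, field, shape, spacing=(1.0, 1.0, 1.0)):
    """Return XDMF XML for one scalar field; shape is (nz, ny, nx)."""
    nz, ny, nx = shape
    dz, dy, dx = spacing
    return XDMF_TEMPLATE.format(nx=nx, ny=ny, nz=nz, dx=dx, dy=dy, dz=dz,
                                field=field, h5file=h5file)

xml = write_xdmf("dump_0001.h5", "velocity_magnitude", (100, 200, 200))
print(xml.splitlines()[3])  # the Topology line with the grid dimensions
```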


Snapshot of turbulent convection deep inside the core of a star that has a mass 8 times that of the Sun. Colours indicate gas moving at different velocities. Volume rendered in parallel using VisIt + MPI.

R@CMon Phase 2 to the rescue! The timely release of R@CMon Phase 2 provided the computational grunt required to perform these high-resolution volume renderings. The new specialist kit in this release includes hypervisors housing 1TB of memory. The R@CMon team allocated a share (~50%, ~460GB of memory) of one of these high-memory hypervisors for the high-resolution volume renderings. Persistent storage on R@CMon Phase 2 is also attached to the computational instance, for ingesting data from the supercomputing facilities and storing processing and rendering results. VisIt was rebuilt on the high-memory instance, this time with MPI capabilities and XDMF support.

Initial parallel volume rendering using 24 processes shows a ~10x speed-up. Medium- (400 x 200 x 200, 10^7 zones) and high-resolution (800 x 400 x 400, 10^8 zones) plots are now being generated seamlessly on the high-memory instance, and an even higher-resolution (1536 x 1024 x 1024, 10^9 zones) simulation is currently running on Magnus. The resulting datasets from this simulation, expected to be several hundred gigabytes in size, will then be fed through the same parallel volume rendering workflow.
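It is easy to see why a high-memory instance is needed: a quick back-of-envelope estimate (our own arithmetic, not a figure from the rendering runs) shows that a single double-precision scalar field on the largest grid is already sizeable, before VisIt's pipeline makes further copies:

```python
# Back-of-envelope memory estimate for one double-precision (8-byte)
# scalar field on the highest-resolution grid mentioned above. This is
# our own illustration; actual VisIt memory use will be larger, since
# the pipeline holds intermediate copies and several variables at once.

def grid_bytes(nx, ny, nz, bytes_per_zone=8):
    """Bytes needed to hold one scalar variable over the whole grid."""
    return nx * ny * nz * bytes_per_zone

gib = grid_bytes(1536, 1024, 1024) / 2**30
print(f"{gib:.1f} GiB per scalar field")  # 12.0 GiB
```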

Stock Price Impact Models Study on R@CMon Phase 2

Paul Lajbcygier, Associate Professor from the Faculty of Business and Economics, Monash University, is studying one of the important changes affecting the cost of trading in financial markets. This change relates to the effect of trading on prices, known as “price impact”, which is brought about by the wide propagation of algorithmic and high-frequency trading and augmented by technological and computational advances. Professor Lajbcygier’s group has recently published new results, supported by R@CMon infrastructure and application migration activities, providing new insights into the trading behaviour of the so-called “Flash Boys”.

This study uses datasets licensed from Sirca, representing stocks in the S&P/ASX 200 index from 2000 to 2014. These datasets are pre-processed using Pentaho and then ingested into relational databases for detailed analysis using advanced queries. Two NeCTAR instances on R@CMon were used in the early stages of the study. One instance serves as the processing engine, with Pentaho and Microsoft Visual Studio 2012 installed for pre-processing and post-processing tasks. The second is configured as the database server where the extraction queries are executed. Persistent volume storage holds the reference datasets, pre-processed input files and extracted results. A VicNode merit application for a research data storage allocation has been submitted to support computational access to the pre-processed data underpinning the analysis workflow on the NeCTAR Research Cloud.
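The ingest-then-query pattern described above can be shown in miniature with SQLite and a made-up trades table; the real study used a full relational database and the Sirca-licensed data, with a schema we do not know, so everything below is a toy stand-in:

```python
import sqlite3

# Toy version of the ingest-then-query pattern: load pre-processed trade
# records into a relational database, then run an aggregate "extraction
# query". The schema and data are invented for illustration only.

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trades (stock TEXT, ts TEXT, price REAL, volume INTEGER)"
)
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?, ?)",
    [("BHP", "2014-01-02 10:00:01", 37.50, 1000),
     ("BHP", "2014-01-02 10:00:05", 37.55, 500),
     ("CBA", "2014-01-02 10:00:02", 77.10, 200)],
)

# An extraction query in miniature: per-stock volume-weighted average price.
rows = conn.execute(
    """SELECT stock, SUM(price * volume) / SUM(volume) AS vwap
       FROM trades GROUP BY stock ORDER BY stock"""
).fetchall()
for stock, vwap in rows:
    print(stock, round(vwap, 4))
```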

Ingestion of pre-processed data into the database running on the high-memory instance, for analysis.


Initially, econometric analyses were done on just the lowest two groups of stocks in the S&P/ASX 200 index. Some performance hiccups were encountered when processing the higher-frequency groups in the index – some of the extraction queries, which require a significant amount of memory, would not complete when run on the exponentially larger stock groups. The release of R@CMon Phase 2 gave the analysis workflow the capability to tackle the higher stock groups using a high-memory instance, instantiated on the new “specialist” kit. Parallel extraction queries are now running on this instance (close to 100% utilisation) to traverse the remaining stock groups across 2000 to 2014.

A recent paper by Manh Pham, Huu Nhan Duong and Paul Lajbcygier, entitled “A Comparison of the Forecasting Ability of Immediate Price Impact Models”, has been accepted for the 1st Conference on Recent Developments in Financial Econometrics and Applications. This paper presents the results of examining the lowest two groups of the S&P/ASX 200 index, i.e., just the initial results. Future research and publications include examination of the upper groups of the index as the latest reference data become available, and analysis of other price impact models.

This is an excellent example of novel research empowered by specialist infrastructure, and a clear win for a build-it-yourself cloud (you can’t get a 920GB instance from AWS). The researchers are able to use existing and well-understood computational methods, i.e., relational databases, but at much greater capacity than normally available. This has the effect of speeding up initial exploratory work and discovery. Future work may investigate the use of contemporary data-intensive frameworks such as Hadoop + Hive for even larger analyses.

This article is also published under Creative Commons here.

Spreadsheet of death

R@CMon, thanks to the Monash eResearch Centre’s long history of establishing “the right hardware for research”, prides itself on its effectiveness at computing, orchestration and storage for research. In this post we highlight an engagement that didn’t yield an “effectiveness” to our liking, and how that helped shape elements of the imminent R@CMon Phase 2.

In the latter part of 2013 the R@CMon team was approached by a visiting student working at the Water Sensitive Cities CRC. His research project involved parameter estimation for an ill-posed problem in ground-water dynamics. He had set up (perhaps partially inherited) an Excel spreadsheet-based Monte Carlo engine for this, with a front-end sheet providing input and output to a built-in VBA macro for the grunt work – an, erm… interesting approach! This had been working acceptably in the small, as he could get an evaluation done within 24 hours on his desktop machine (quad-core i7). But now he needed to scale up and run 11 different models, probably a few times each to tweak the inputs. Been there yourself? This is a very common pattern!


Nash-Sutcliffe model efficiency (Figure courtesy of eng. Antonello Mancuso, PhD, University of Calabria, Italy)
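The spreadsheet's inner loop is easy to sketch outside Excel. Below is a toy Python version of a Monte Carlo parameter search scored with Nash-Sutcliffe efficiency; the Nash-Sutcliffe formula is the standard one, but the one-parameter "model" and the observations are invented stand-ins for the real ground-water model:

```python
import random

# Toy Monte Carlo calibration scored with Nash-Sutcliffe efficiency (NSE).
# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); a value of 1
# is a perfect fit. The "model" and data below are invented stand-ins.

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

observed = [2.0, 4.1, 5.9, 8.2, 9.8]

def model(k):  # stand-in model: predicted level = k * time step
    return [k * t for t in (1, 2, 3, 4, 5)]

# Monte Carlo search: sample the parameter, keep the best-scoring value.
random.seed(42)
best_k, best_nse = None, float("-inf")
for _ in range(10000):
    k = random.uniform(0.5, 4.0)
    nse = nash_sutcliffe(observed, model(k))
    if nse > best_nse:
        best_k, best_nse = k, nse
print(round(best_k, 2), round(best_nse, 3))
```

The real spreadsheet needed a million iterations per scenario across 11 scenarios, which is exactly the embarrassingly parallel shape that a queue of cloud instances (or, as it turned out, PEST on a cluster) handles better than VBA.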

MCC (the Monash Campus Cluster), our first destination for ‘compute’, doesn’t have any Windows capability, and even if it did, attempting to run Excel in batch mode would have been something new for us. No problem, we thought: we’ll use the RC, give him a few big Windows instances, and he can spread the calculations across them. Not an elegant or automated solution, for sure, but this was a one-off with tight time constraints, so it was more important to start the calculations than to get bogged down building something nicer.

It took a few attempts to get Windows working properly. We eventually found the handy Cloudbase Solutions trial image and its guidance documentation. But we also ran into issues activating Windows against the Monash KMS; it turns out we had to explicitly select our local network time source rather than the default time.windows.com. We also found problems with the CPU topology Nova was giving our guests: Windows was seeing multiple sockets rather than multiple cores, which meant desktop variants were out, as they would ignore most of the cores.

Soon enough we had a Server 2012 instance ready for testing. The user RDP’d in and set the cogs turning. Based on the first few Monte Carlo iterations (out of the million he needed for each scenario) he estimated it would take about two days to complete a scenario – quite a lot slower than his desktop, but still acceptable given the overall scale-out speed-up. However, on the third day, after about 60 hours of compute time, he reported it was only 55% complete. That was an unsustainable pace – he needed results within a fortnight – so he and his supervisor resolved to code a different statistical approach (using PEST) that would be more amenable to cluster computing.

We did some rudimentary performance investigation during the engagement and didn’t find any obvious bottlenecks; the guest and host were always very CPU-busy, so the slowdown seemed largely attributable to the lesser floating-point capabilities of our AMD Bulldozer CPUs. We didn’t investigate deeply in this case, and no doubt other elements could be at play (maybe Windows is much slower for compute on KVM than Linux is), but this is now a pattern we’ve seen with floating-point-heavy workloads across operating systems and on bare metal. Perhaps code optimisations for the shared FPU in the Bulldozer architecture could improve things, but that’s hardly a realistic option for a spreadsheet.

The AMDs are great (especially thanks to their price) for general-purpose cloud usage; that’s why they dominate the RC’s makeup and why commercial clouds like Azure use them. But for R@CMon Phase 2 we want to cater to performance-sensitive as well as throughput-oriented workloads, which is why we’ve deployed Intel CPUs for this expansion. Monash joins the eRSA and NCI Nodes in offering this high-end capability. More on the composition of R@CMon Phase 2 in the coming weeks!

Deakin Bioinformatics Workshop (February 17-19, 2014)

On February 17–19, 2014, a bioinformatics workshop was held at Deakin University’s Geelong Waterfront Campus. The workshop covered Genotyping-by-Sequencing (GBS) methodologies using various well-known bioinformatics tools. The two main tools used in the workshop were Trait Analysis by aSSociation, Evolution and Linkage (TASSEL) and Bowtie. TASSEL is used to investigate relationships between phenotypes and genotypes, while Bowtie aligns short DNA sequences to large reference genomes.


Trainees at the Deakin workshop, using the NeCTAR-provisioned training environment.

The workshop was delivered using the NeCTAR Research Cloud infrastructure and the Bioplatforms Australia Training Platform. The R@CMon team supported the workshop organisers at Deakin University in creating a customised cloud image containing the required tools and datasets, as well as ensuring the allocation of computational and storage resources in the cloud. The CloudBioLinux-based cloud image was instantiated for each trainee, giving each one a dedicated virtual desktop environment for their analyses.


Workshop trainers demonstrating Genotyping-by-Sequencing (GBS) methodologies and tools using a custom NeCTAR cloud image.

Feedback collected from participants on the day was overwhelmingly positive. However, some user-experience issues were encountered with the remote desktops (NX); these can be attributed to the network path between the cloud servers (hosted on the eRSA Node in South Australia) and the participants. Such issues haven’t shown up for BPA workshops utilising the Monash Node up and down the east coast, which demonstrates the importance of being able to reserve local cloud capacity for latency- and jitter-sensitive use-cases like this one. Fortunately those issues were isolated, and according to the instructors from Cornell University the training platform used in the workshop was one of the best they’ve used; the trainees were keen to attend future GBS-related workshops delivered using the cloud.