Tag Archives: GPU

Rail Network Catastrophe Analysis on R@CMon

Monash University, through the Institute of Railway Technology (IRT), has been working on a research project with Vale S.A., a Brazilian multinational metals and mining corporation and one of the largest logistical operators in Brazil, to continuously monitor and assess the health of the Carajás Railroad (Estrada de Ferro Carajás, EFC), a mixed-use rail network in Northern Brazil. The project identifies locations that produce “significant dynamic responses”, with the aim of enabling proactive maintenance to prevent catastrophic rail failure. As part of this project, IRT researchers have been involved in (a) the analysis of the collected data and (b) the establishment of a database with visualisation capabilities that allows interrogation of the analysed data.
[Figure: GPU-powered DataMap analysis and visualisation on R@CMon.]

Researchers use the DataMap analysis software for data interrogation and visualisation. DataMap is a Windows-based client-server tool that integrates data from various measurement and recording systems into a geographical map. Traditionally, the software ran on a commodity laptop with a dedicated GPU, connecting to their database server. To scale to larger models, conduct more rigorous analysis and visualisation, and support remote collaboration, the toolchain needed to go beyond the laptop.
The R@CMon team supported IRT in deploying the software on the NeCTAR Research Cloud. The deployed instance runs on the Monash-licensed Windows flavours with GPU-passthrough to support DataMap’s DirectX requirements.
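
For background, GPU passthrough on an OpenStack cloud of this era is typically wired up by whitelisting the physical device on the hypervisor and exposing it through a flavour extra spec. The sketch below illustrates the general mechanism only – the product ID, alias and flavour names are illustrative, not the actual R@CMon configuration.

# hypervisor nova.conf: whitelist the NVIDIA device (vendor ID 10de) and define an alias
pci_passthrough_whitelist = {"vendor_id": "10de", "product_id": "11bf"}
pci_alias = {"vendor_id": "10de", "product_id": "11bf", "name": "gpu"}

# tie the alias to a flavour so each instance built from it receives one GPU
$ nova flavor-key <windows-gpu-flavour> set "pci_passthrough:alias"="gpu:1"
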
[Figure: GPU-powered DataMap analysis and visualisation on R@CMon.]

Through the Research Cloud, IRT researchers and their Vale S.A. counterparts are able to collaborate on modelling, analysis and results via remote access to the GPU-enabled virtual machines.
“The assistance of R@CMon in providing virtual machines that have GPU support, has been instrumental in facilitating global collaboration between staff located at Vale S.A. (Brazil) and Monash University (Australia).”
Dr. Paul Reichl
Senior Research Engineer and Data Scientist
Institute of Railway Technology

The CVL on R@CMon Phase 2

Monash is home to the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE), a national facility for the imaging and characterisation community. An important and rather novel feature of the MASSIVE compute cluster is the interactive desktop visualisation environment available to assist users in the characterisation process. The MASSIVE desktop environment provided part of the inspiration for the Characterisation Virtual Laboratory (CVL), a NeCTAR VL project combining specialist software visualisation and rendering tools from a variety of disciplines and making them available on and through the NeCTAR research cloud.

The recently released monash-02 zone of the NeCTAR cloud provides enhanced capability to the CVL, bringing a critical mass of GPU-accelerated cloud instances. monash-02 includes ten GPU-capable hypervisors, currently able to provide up to thirty GPU-accelerated instances via direct PCI passthrough. Most of these are NVIDIA GRID K2 GPUs (CUDA compute capability 3.0), though we also have one K1. Special thanks to NVIDIA for providing us with a couple of seed units to get this going and supplement our capacity! After consultation with various users, we created the following set of flavors/instance-types for these GPUs (a usage sketch follows the table):

Flavor name         #vcores   RAM (MB)   /dev/vda (GB)   /dev/vdb (GB)
mon.r2.5.gpu-k2     1         5400       30              N/A
mon.r2.10.gpu-k2    2         10800      30              40
mon.r2.21.gpu-k2    4         21700      30              160
mon.r2.63.gpu-k2    12        65000      30              320
mon.r2.5.gpu-k1     1         5400       30              N/A
mon.r2.10.gpu-k1    2         10800      30              40
mon.r2.21.gpu-k1    4         21700      30              160
mon.r2.63.gpu-k1    12        65000      30              320
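
As a sketch of how one of these flavours is consumed from the command line (the image and keypair names below are placeholders):

$ nova flavor-list | grep gpu     # confirm the GPU flavours are visible to your project
$ nova boot --flavor mon.r2.10.gpu-k2 --image <image-id> --key-name <keypair> my-gpu-instance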

R@CMon has so far dedicated two of these GPU nodes to the CVL, which is our preferred route to this equipment, as the CVL provides a managed environment and a queuing system for access (regular plain IaaS usage is available where needed). There were some initial hiccups getting the CVL’s base CentOS 6.6 image working with the NVIDIA drivers on these nodes, solved by moving to a newer kernel, and some performance-tuning tasks still remain. However, the CVL has now been updated to make use of the new GPU flavors on monash-02, as demonstrated in the following video…

[Video: GPU-accelerated Chimera application running on the CVL, showing the structure of human follicle-stimulating hormone (FSH) and its receptor.]
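
On the kernel/driver hiccup mentioned above, the usual first checks on a CentOS guest look something like this (a sketch – exact package names depend on how the driver was installed):

$ uname -r            # which kernel the image actually boots
$ dkms status         # was the NVIDIA module built against that kernel? (if DKMS-managed)
$ nvidia-smi -L       # lists the GPU(s) if the driver loaded correctly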

If you’re interested in using GPGPUs on the cloud please contact the R@CMon team or Monash eResearch Centre.

The CVL on R@CMon

The Characterisation Virtual Laboratory (CVL) is a powerful platform that integrates Australian imaging facilities with computational and data storage infrastructure, together with sophisticated processing and analysis toolsets. By providing scientists working in various fields with a common analysis and collaboration environment, the CVL turns the humble remote desktop into a highly flexible Scientific Software-as-a-Service delivery platform powered by the NeCTAR Research Cloud.

[Figure: The CVL Desktop.]

The current production CVL includes toolsets covering Neuroimaging, Energy Materials and Structural Biology research drivers. The project includes so-called “CVL fabric services”, which provide the necessary infrastructure to modularise popular software toolsets from any number of domains.

The R@CMon team assisted the CVL team in migrating CVL services into R@CMon. The use of persistent storage (Volumes on R@CMon) ensured consistent user home directories and software-stack repositories. The default “CVL Desktop” pool is now serving users with software-rendered CVL environments running on R@CMon. The CVL team is also a beta user of GPU flavours on R@CMon and is currently testing GPU-enabled CVL environments on the “CVL GPU node” pool (via CVL Launcher).
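
For the curious, backing a home directory with a persistent volume boils down to something like the following (instance/volume IDs, device name and filesystem choice are illustrative):

$ nova volume-attach <instance-id> <volume-id> /dev/vdc
# then, inside the instance (first use only):
$ mkfs.ext4 /dev/vdc
$ mount /dev/vdc /home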

[Figure: The available pools on the CVL Launcher.]

The following video demonstrates a GPU-enabled CVL environment launched on R@CMon. It shows the PyMOL and UCSF Chimera applications from the Structural Biology workbench running on and utilising the available GPU. GPU support enables seamless interaction with and manipulation of datasets.
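
A quick, generic way to confirm from inside such an environment that OpenGL is really hitting the GPU rather than a software rasteriser:

$ glxinfo | grep "OpenGL renderer"
# a GPU-backed desktop reports an NVIDIA device name here,
# while a software-rendered one reports a Mesa/llvmpipe renderer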

The plan is to increase the “CVL GPU node” pool to accommodate more users once GPU node capacity on R@CMon has been upgraded with deployment of R@CMon Phase 2. Watch this space for more CVL on R@CMon news. Other updates about the CVL and its sub-projects are also available on the CVL site.

GPU Flavors on R@CMon

We’re pleased to announce we now have GPGPU accelerated cloud instances working, a first for the NeCTAR Research Cloud!

If you’re not as excited by SSH session logs as I am, you might like the following screenshots, captured by Jupiter Hu from the CVL team, which illustrate the CVL running on both a new GPU flavor and a standard non-GPU flavor. The CVL uses TurboVNC to achieve remote hardware-accelerated rendering (care of VirtualGL), so the framerates here show CPU versus GPU rendering speeds.

[Figure: CVL Desktop with CPU rendering – 9 fps.]

[Figure: CVL Desktop with GPU rendering – 380 fps.]
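
Roughly, a session like the ones pictured is assembled as follows (a sketch assuming default TurboVNC and VirtualGL install paths):

# on the GPU instance: start the TurboVNC remote desktop
$ /opt/TurboVNC/bin/vncserver :1

# inside that desktop, launch an OpenGL application via VirtualGL,
# which redirects its rendering to the GPU-attached X display (:0)
$ vglrun -d :0 /opt/VirtualGL/bin/glxspheres64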

And to really prove it, here’s the obligatory mono-spaced console dump you’d expect from a technical blog:

01:47:51 blair@bethwaite:~$ nova show f44cad73-40e1-4e83-8699-dd3f7f2e9ead | grep flavor
| flavor | cg1.medium (19) |
01:47:54 blair@bethwaite:~$ nova show f44cad73-40e1-4e83-8699-dd3f7f2e9ead | grep network
| monash-test network | 11x.xxx.255.20 |

01:48:39 blair@bethwaite:~$ sshrc root@11x.xxx.255.20
root@ubuntu:~# lshw | grep -C 2 NVIDIA
description: 3D controller
product: GF100GL [Tesla M2070-Q]
vendor: NVIDIA Corporation
physical id: 6
bus info: pci@0000:00:06.0
root@ubuntu:~# nvidia-smi
Mon Jan 13 01:52:22 2014
+------------------------------------------------------+
| NVIDIA-SMI 5.319.76   Driver Version: 319.76         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M2070-Q       Off  | 0000:00:06.0     Off |                    0 |
| N/A   N/A    P0    N/A /  N/A |       10MB /  5375MB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|  No running compute processes found                                         |
+-----------------------------------------------------------------------------+

root@ubuntu:~# cd NVIDIA_CUDA-5.5_Samples/7_CUDALibraries/MersenneTwisterGP11213/
root@ubuntu:~/NVIDIA_CUDA-5.5_Samples/7_CUDALibraries/MersenneTwisterGP11213# ./MersenneTwisterGP11213
./MersenneTwisterGP11213 Starting...

GPU Device 0: "Tesla M2070-Q" with compute capability 2.0
Allocating data for 2400000 samples...
Seeding with 777 ...

Generating random numbers on GPU...
Reading back the results...

Generating random numbers on CPU...
Comparing CPU/GPU random numbers...

Max absolute error: 0.000000E+00
L1 norm: 0.000000E+00

MersenneTwister, Throughput = 3.1591 GNumbers/s, Time = 0.00076 s, Size = 2400000 Numbers

Shutting down...

Currently these are available as cg1.* flavors in the monash-test cell, so they’re not open for general consumption – and they won’t be for a while, until we purchase and deploy more GPU nodes. The allocation process also needs to be tweaked to deal with this new capability. So for now, GPU flavors are only accessible by special arrangement with the R@CMon team.

We’ll be deploying a considerable GPU capability as part of R@CMon in order to support, for example, GPU-accelerated rendering and GPGPU-accelerated visualisation processing for the CVL. GPU capabilities can also be useful for on-demand development work, such as providing hardware rendering for the CAVE2 development environment.

At the moment we have a handful of NVIDIA Tesla M2070-Qs, with Kepler K2s coming as part of R@CMon Phase 2. If you’re keen to get access or try this out then drop us a line.

The Visualising Angkor Project on R@CMon

The Visualising Angkor Project (Monash University Faculty of IT, 2013) was one of the main showcases during OzViz 2013, held 9–10 December 2013 at Monash University. Tom Chandler (project leader) and his team from the Faculty of Information Technology used the NeCTAR Research Cloud to generate high-resolution visualisations for the CAVE2™ – the next-generation immersive 2D and 3D virtual reality environment located at New Horizons, Monash University.

[Figure: The Visualising Angkor Project in the CAVE2 facility.]

The Maya/mental ray virtual render farm has been instrumental in producing 27K x 3K panoramic stills and animations for this project. This workflow had proven very challenging for Tom and his team before they started using the NeCTAR Research Cloud for their rendering jobs.
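
As a rough illustration of how a render farm divides such a job (a sketch only – the scene file, frame ranges and output path are hypothetical, with mental ray selected via Maya’s command-line renderer):

# instance 1 renders frames 1-50, instance 2 renders frames 51-100
$ Render -r mr -s 1 -e 50 -x 27000 -y 3000 -rd /renders angkor_scene.mb
$ Render -r mr -s 51 -e 100 -x 27000 -y 3000 -rd /renders angkor_scene.mb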

[Figure: A panoramic rendering of the fields surrounding Angkor, generated using the NeCTAR Research Cloud.]

The resulting high-resolution stills and animations are loaded into the CAVE2™ environment using advanced visualisation software frameworks. This provides a compelling visual and aural environment with a 330° display view – the lens of the 21st century microscope.

[Figure: Rice fields surrounding Angkor.]

In 2014 the R@CMon and CAVE2™ teams will work together to build a CAVE2™ development environment on the NeCTAR Research Cloud. This will give end-users the opportunity to work with and test the tools and middleware available in the CAVE2™ environment on-demand, without needing access to the facility itself. This development image will take advantage of R@CMon’s new GPGPU accelerated VM flavors – more on that soon!