Tag Archives: CVL

The CVL on R@CMon Phase 2

Monash is home to the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE), a national facility for the imaging and characterisation community. An important and rather novel feature of the MASSIVE compute cluster is the interactive desktop visualisation environment available to assist users in the characterisation process. The MASSIVE desktop environment provided part of the inspiration for the Characterisation Virtual Laboratory (CVL), a NeCTAR VL project combining specialist software visualisation and rendering tools from a variety of disciplines and making them available on and through the NeCTAR research cloud.

The recently released monash-02 zone of the NeCTAR cloud brings enhanced capability to the CVL, including a critical mass of GPU-accelerated cloud instances. monash-02 includes ten GPU-capable hypervisors, currently able to provide up to thirty GPU-accelerated instances via direct PCI passthrough. Most of the cards are NVIDIA GRID K2 GPUs (compute capability 3.0), though we also have one K1. Special thanks to NVIDIA for providing us with a couple of seed units to get this going and supplement our capacity! After consultation with various users, we created the following set of flavors/instance-types for these GPUs:

Flavor name        #vcores  RAM (MB)  /dev/vda (GB)  /dev/vdb (GB)
mon.r2.5.gpu-k2       1       5400         30            N/A
mon.r2.10.gpu-k2      2      10800         30             40
mon.r2.21.gpu-k2      4      21700         30            160
mon.r2.63.gpu-k2     12      65000         30            320
mon.r2.5.gpu-k1       1       5400         30            N/A
mon.r2.10.gpu-k1      2      10800         30             40
mon.r2.21.gpu-k1      4      21700         30            160
mon.r2.63.gpu-k1     12      65000         30            320
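For reference, flavors like these could be defined with the era's nova CLI along the lines of the sketch below. This is illustrative only: the `gridk2` alias name is an assumption, and the exact PCI passthrough configuration depends on how the `pci_alias` entries are set up in nova.conf on the controllers and computes.

```shell
# Hypothetical sketch: create a GPU flavor similar to mon.r2.10.gpu-k2.
# Positional args: name, ID ("auto" lets nova assign one), RAM (MB),
# root disk (GB), vcpus; --ephemeral gives the /dev/vdb size.
nova flavor-create mon.r2.10.gpu-k2 auto 10800 30 2 --ephemeral 40

# Request one passthrough GPU per instance; "gridk2" is an assumed
# alias that must match a pci_alias entry in nova.conf.
nova flavor-key mon.r2.10.gpu-k2 set "pci_passthrough:alias"="gridk2:1"
```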

R@CMon has so far dedicated two of these GPU nodes to the CVL, and this is our preferred way to use the equipment, as the CVL provides a managed environment and queuing system for access (plain IaaS usage is available where needed). There were some initial hiccups getting the CVL's base CentOS 6.6 image working with the NVIDIA drivers on these nodes, solved by moving to a newer kernel, and some performance-tuning tasks still remain. However, the CVL has now been updated to make use of the new GPU flavors on monash-02, as demonstrated in the following video…

GPU-accelerated Chimera application running on the CVL, showing the structure of human follicle-stimulating hormone (FSH) and its receptor.

If you’re interested in using GPGPUs on the cloud please contact the R@CMon team or Monash eResearch Centre.

The CVL on R@CMon

The Characterisation Virtual Laboratory (CVL) is a powerful platform that integrates Australian imaging facilities with computational and data-storage infrastructure, together with sophisticated processing and analysis toolsets. By providing scientists working in various fields with a common analysis and collaboration environment, the CVL turns the humble remote desktop into a highly flexible Scientific Software-as-a-Service delivery platform powered by the NeCTAR Research Cloud.

The CVL Desktop

The current production CVL includes toolsets covering Neuroimaging, Energy Materials and Structural Biology research drivers. The project includes so-called “CVL fabric services”, which provide the necessary infrastructure to modularise popular software toolsets from any number of domains.

The R@CMon team assisted the CVL team in migrating CVL services into R@CMon. The use of persistent storage (Volumes on R@CMon) ensured consistent user home directories and software-stack repositories. The default “CVL Desktop” pool is now serving users with software-rendered CVL environments running on R@CMon. The CVL team is also a beta user of GPU flavours on R@CMon and is currently testing GPU-enabled CVL environments on the “CVL GPU node” pool (via CVL Launcher).
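As an illustration of the Volume workflow used for those persistent home directories, the sketch below shows the usual create/attach/mount sequence with the era's nova CLI. The names, size, and device path are hypothetical, not the CVL's actual configuration.

```shell
# Hypothetical sketch: create a persistent 250 GB Volume and attach it
# to an instance; names and device path are illustrative only.
nova volume-create --display-name cvl-home 250
nova volume-attach cvl-desktop <volume-id> /dev/vdc

# On the instance: format the volume once, then mount it over /home so
# user home directories survive instance rebuilds.
mkfs.ext4 /dev/vdc
mount /dev/vdc /home
```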

The available pools on the CVL Launcher.

The following video demonstrates a GPU-enabled CVL environment launched on R@CMon. It shows the PyMOL and UCSF Chimera applications from the Structural Biology workbench running and utilising the available GPU. GPU acceleration enables seamless interaction with, and manipulation of, datasets.

The plan is to increase the “CVL GPU node” pool to accommodate more users once GPU node capacity on R@CMon has been upgraded with the deployment of R@CMon Phase 2. Watch this space for more CVL on R@CMon news. Other updates about the CVL and its sub-projects are also available on the CVL site.

GPU Flavors on R@CMon

We’re pleased to announce we now have GPGPU accelerated cloud instances working, a first for the NeCTAR Research Cloud!

If you’re not as excited by SSH session logs as I am, you might like the following screenshots captured by Jupiter Hu from the CVL team, which illustrate the CVL running on both a new GPU flavor and a standard non-GPU flavor. The CVL uses TurboVNC to achieve remote hardware-accelerated rendering (care of VirtualGL), so the frame rates here show CPU versus GPU rendering speeds.

CVL Desktop + CPU Rendering (9 fps)

CVL Desktop + GPU Rendering (380 fps)
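For the curious, the TurboVNC + VirtualGL pattern the CVL uses can be reproduced by hand on a GPU instance. This is a minimal sketch assuming TurboVNC and VirtualGL are installed in their default locations and an X server is running on the GPU at display :0; it is not the CVL's actual launch mechanism.

```shell
# Start a TurboVNC session (display :1) to serve the remote desktop.
/opt/TurboVNC/bin/vncserver :1

# Inside that VNC desktop, run OpenGL apps through VirtualGL, which
# redirects 3D rendering to the GPU behind display :0 and streams the
# rendered frames back into the VNC session.
vglrun -d :0 glxinfo | grep "OpenGL renderer"   # should name the GPU
vglrun -d :0 glxgears                           # frame rate now GPU-bound
```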

And to really prove it, here’s the obligatory mono-spaced console dump you’d expect from a technical blog:

01:47:51 blair@bethwaite:~$ nova show f44cad73-40e1-4e83-8699-dd3f7f2e9ead | grep flavor
| flavor | cg1.medium (19) |
01:47:54 blair@bethwaite:~$ nova show f44cad73-40e1-4e83-8699-dd3f7f2e9ead | grep network
| monash-test network | 11x.xxx.255.20 |

01:48:39 blair@bethwaite:~$ sshrc root@11x.xxx.255.20
root@ubuntu:~# lshw | grep -C 2 NVIDIA
description: 3D controller
product: GF100GL [Tesla M2070-Q]
vendor: NVIDIA Corporation
physical id: 6
bus info: pci@0000:00:06.0
root@ubuntu:~# nvidia-smi
Mon Jan 13 01:52:22 2014
+------------------------------------------------------+
| NVIDIA-SMI 5.319.76   Driver Version: 319.76         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M2070-Q       Off  | 0000:00:06.0     Off |                    0 |
| N/A   N/A    P0    N/A /  N/A |       10MB /  5375MB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|  No running compute processes found                                         |
+-----------------------------------------------------------------------------+

root@ubuntu:~# cd NVIDIA_CUDA-5.5_Samples/7_CUDALibraries/MersenneTwisterGP11213/
root@ubuntu:~/NVIDIA_CUDA-5.5_Samples/7_CUDALibraries/MersenneTwisterGP11213# ./MersenneTwisterGP11213
./MersenneTwisterGP11213 Starting...

GPU Device 0: "Tesla M2070-Q" with compute capability 2.0
Allocating data for 2400000 samples...
Seeding with 777 ...

Generating random numbers on GPU...
Reading back the results...

Generating random numbers on CPU...
Comparing CPU/GPU random numbers...

Max absolute error: 0.000000E+00
L1 norm: 0.000000E+00

MersenneTwister, Throughput = 3.1591 GNumbers/s, Time = 0.00076 s, Size = 2400000 Numbers

Shutting down...
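As a quick sanity check, the throughput line in the log above is consistent with the sample count and elapsed time it reports (the small discrepancy comes from the printed time being rounded):

```python
# Verify the MersenneTwister throughput figure from the log above:
# Throughput = Size / Time, reported in GNumbers/s.
size = 2_400_000    # samples generated
time_s = 0.00076    # seconds, as printed (rounded)

throughput_gnum = size / time_s / 1e9
print(f"{throughput_gnum:.4f} GNumbers/s")  # ~3.16, close to the reported 3.1591
```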

Currently these are available as cg1.* flavors in the monash-test cell, so they're not open for general consumption, and they won't be for a while, until we purchase and deploy more GPU nodes. The allocation process also needs to be tweaked to deal with this new capability. For now, GPU flavors are accessible only by special arrangement with the R@CMon team.

We’ll be deploying considerable GPU capability as part of R@CMon in order to support, for example, GPU-accelerated rendering and GPGPU-accelerated visualisation processing for the CVL. GPU capability is also useful for on-demand development work, such as providing hardware rendering for the CAVE2 development environment.

At the moment we have a handful of NVIDIA Tesla M2070-Qs, with Kepler K2s coming as part of R@CMon Phase 2. If you’re keen to get access or try this out, drop us a line.