Archives

Beginnings of a new data service: Store.Synchrotron

To coincide with the first day of eResearch Australasia, the Australian Synchrotron has demonstrated its new data service, Store.Synchrotron, with the upload of its first open experiment data.

Store.Synchrotron

The Store.Synchrotron service, built on the MyTardis data management system.

This is the first step in a partnership between the Australian Synchrotron and Monash University that leverages leading Australian eResearch infrastructure (MyTardis, R@CMon and VicNode) through the support of the NeCTAR and ANDS programs. The goal is to establish a data service, initially for the MX beamline, that captures all research beamline data for analysis, discoverability and re-use. The MX beamline alone generates approximately 1-2 TB of research data per month, which is captured for collaborative use and managed through its life cycle to the point where the data behind important discoveries is one click away from being made open. The Store.Synchrotron pilot has been capturing data for several months, and as of today some collections have progressed from closed research collaborations to openly available under Creative Commons licenses.

Monash University is a recognized leader in the creation and management of large-scale imaging data for research. Store.Synchrotron uses the MyTardis service hosted by the Monash eResearch Centre, under the leadership of Steve Androulakis, coordinator of the newly formed Monash Bioinformatics Platform. MyTardis runs on a virtual machine at R@CMon and uses the computational volume storage that is part of the R@CMon/VicNode facility. As the data grows, infrequently accessed data will spill over to vault storage, automating operational efficiencies.
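
For the technically curious, collections like these can be queried through MyTardis's REST API. A minimal sketch; the /api/v1/ layout is MyTardis's standard (Tastypie-based) API, but the host name below is a placeholder, not the real service URL:

```python
# List publicly visible experiments from a MyTardis deployment via its REST API.
import requests

BASE = "https://store.synchrotron.example.org"  # placeholder host

def list_public_experiments():
    """Fetch metadata for experiments visible without authentication."""
    resp = requests.get(BASE + "/api/v1/experiment/", params={"format": "json"})
    resp.raise_for_status()
    return resp.json()["objects"]  # Tastypie wraps results in "objects"

for exp in list_public_experiments():
    print(exp["title"])
```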

Cinema4D Render Farm on R@CMon

Jon McCormack, an Associate Professor in the Faculty of Information Technology at Monash University, creates high-resolution renderings of various subjects. His artworks have been showcased in leading galleries, museums and symposia.

Before the advent of the Monash node of the NeCTAR Research Cloud, Jon was limited to running his render jobs on the couple of machines he could get his hands on in the faculty. After the first phase of the Monash node went online, the R@CMon team helped Jon port his rendering workflow to the NeCTAR Research Cloud environment. Jon uses Cinema4D to create his high-resolution renders. Cinema4D runs on Windows and OS X, so we prepared a suitable Windows Server 2012 image for this purpose.
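
Since NeCTAR is an OpenStack cloud, instances booted from such an image can also be launched programmatically. A minimal sketch using today's python-novaclient; the auth URL, credentials, image and flavor names are all placeholders:

```python
# Boot a render node from a prepared Windows image on an OpenStack cloud.
from keystoneauth1 import loading, session
from novaclient import client

auth = loading.get_plugin_loader("password").load_from_options(
    auth_url="https://keystone.example.org:5000/v3",  # placeholder endpoint
    username="jon", password="...",
    project_name="render-farm",
    user_domain_name="Default", project_domain_name="Default",
)
nova = client.Client("2", session=session.Session(auth=auth))

# Launch one render client from the prepared Windows image.
server = nova.servers.create(
    name="render-client-01",
    image=nova.glance.find_image("Windows Server 2012 Cinema4D"),  # placeholder
    flavor=nova.flavors.find(name="m1.xlarge"),                    # placeholder
)
print("launching", server.id)
```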

We assessed the performance of a NeCTAR guest running a rendering job. For this we used CINEBENCH from MAXON, a well-known benchmarking tool for measuring and comparing the CPU and graphics performance of various systems and platforms. The following video shows CINEBENCH running on a 16-core guest in the Monash node.

The CINEBENCH result showed that the guest performed well on the “Main Processor Performance (CPU)” component. Once GPUs become available in the second phase of the Monash node, we’ll be looking at running the second component of CINEBENCH, which measures “Graphics Card Performance (GPU)”.

Cinema4D comes with a distributed rendering system that allows an unlimited number of render clients to connect to a render server. The render server is where render jobs are submitted, and it is in charge of distributing them to the available render clients. NeCTAR guests in Jon’s tenancy have been provisioned as render clients that talk to Jon’s render server running on a dedicated machine. Each render client renders a frame or a tile and submits the finished render back to the render server.
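
Cinema4D's client-server protocol is proprietary, but the underlying pattern is a simple work queue: the server hands out frame numbers and collects results. A toy Python illustration of that shape only, with a multiprocessing pool standing in for the render clients:

```python
# Toy illustration of the distributed-rendering work-queue pattern.
from multiprocessing import Pool

def render_frame(frame):
    """Stand-in for a render client rendering a single frame (or tile)."""
    # ... the real client would invoke the renderer here ...
    return frame, "frame_{:04d}.png".format(frame)

if __name__ == "__main__":
    frames = range(240)                    # frames in the animation
    with Pool(processes=16) as pool:       # sixteen "render clients"
        for frame, output in pool.imap_unordered(render_frame, frames):
            print("frame", frame, "finished ->", output)
```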

The following are some low-res renders from Fifty Sisters (a commission for the Ars Electronica Museum, Linz, Austria, 2012/2013; copyright Jon McCormack) that Jon produced using his render farm in the NeCTAR Research Cloud.

76 tree - Fifty Sisters Series

Esso - Fifty Sisters

BP old form - Fifty Sisters Series

Software Carpentry Bootcamp for Bioinformaticians (Adelaide/Melbourne) – UPDATE

On September 24-26 and October 1-3, the latest Software Carpentry Bootcamps were held at the University of Adelaide and Monash University.

These Software Carpentry Bootcamps were designed for bioinformaticians to enhance their knowledge and skills in programming and software development practices. The bootcamps were delivered using the NeCTAR Research Cloud, where each trainee was given dedicated access to their own virtual workstation.

SWC-03

Using an automatic provisioning system, each virtual workstation was preconfigured with the tools, training materials and computing resources required for the hands-on exercises.
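
As a sketch of what this style of provisioning looks like with standard OpenStack tooling: boot each trainee's workstation with a cloud-init user-data script that installs the training toolkit on first boot. The credentials, image and flavor names, package list and repository URL below are illustrative, not the actual bootcamp configuration:

```python
# Provision a fleet of preconfigured training workstations on OpenStack.
from keystoneauth1 import loading, session
from novaclient import client

auth = loading.get_plugin_loader("password").load_from_options(
    auth_url="https://keystone.example.org:5000/v3",  # placeholder endpoint
    username="trainer", password="...",
    project_name="swc-bootcamp",
    user_domain_name="Default", project_domain_name="Default",
)
nova = client.Client("2", session=session.Session(auth=auth))

USER_DATA = """#!/bin/bash
apt-get update
apt-get install -y python ipython-notebook git nano
git clone https://github.com/swcarpentry/bc /home/ubuntu/bootcamp
"""

for i in range(1, 31):  # one workstation per trainee
    nova.servers.create(
        name="swc-trainee-{:02d}".format(i),
        image=nova.glance.find_image("Ubuntu 12.04"),  # placeholder image
        flavor=nova.flavors.find(name="m1.medium"),
        userdata=USER_DATA,  # executed by cloud-init on first boot
    )
```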

SWC-02

On the first day of the bootcamp, the Software Carpentry instructors, including the R@CMon team, introduced the trainees to Python. The second day was mostly about software testing and documentation. On the last day, the trainees applied what they had learned in the previous sessions to collaborative group exercises.

SWC-01

Photos taken by Nathan Watson-Haigh (ACPFG).

“Thanks a lot for this information and also your kind efforts in running such a useful and informative workshop” – Fariborz Sobhanmanesh (Research Engineer, Bioinformatician with CSIRO Animal, Food and Health Sciences Centre)

“I recently attended the SWC bootcamp in Adelaide and found it incredibly useful. Sure there was a lot of information in a short amount of time but the topics covered were practical and very relevant to my daily work. Thanks must go to the presenters and organisers who kept things moving along brilliantly.” – Terry Bertozzi (Research Scientist with the South Australian Museum)

Computational Resource Framework

Over the past year the Monash NeCTAR porting programme has worked with Griffith University eResearch developers on their Computational Resource Framework (CRF) project. We’re well overdue to promote their excellent work (and a little of ours), and as there will be a presentation on this at eResearch Australasia (A RESTful Web Service for High Performance Computing based on Nimrod/G), now seems a good time to blog about it!

hpcportal_login_sm

The CRF aims to address one of the long-standing issues in HPC: uptake by non-technical users. HPC is a domain with a well-entrenched command-line ethos; unfortunately, this alienates a large cohort of potential users, with negative implications for research productivity and collaboration. At Monash, our HPC specialists go to a great deal of effort to ease new users into the CLI (command line interface) environment; however, this is a labour-intensive process that doesn’t catch everybody and often leaves users reliant on individual consultants or team members.

For some time portals have been the go-to panacea for democratising HPC and scientific computing, and there are some great systems being deployed on the Research Cloud, but they still seem to require a large amount of development effort to build and typically cater to a particular domain. Another common issue with “job-management” style portals (including Nimrod’s own ancient CGI beauty) is that they expose and delegate too much information and responsibility to the end user – typically an end user who doesn’t know, or want to know, about the intricacies of the various computational resources: mundanities such as which credentials they need, which resources are actually working today, and so on.

The CRF is different in this respect, as it is not a domain-specific interface; instead, the Griffith team have concentrated on a minimum set of functionality for some popular HPC and scientific computing applications. The user inputs only the information relevant to the application, and the CRF configuration determines where and how the job is run. Currently there are UIs for creating, submitting and managing NAMD, R and MATLAB jobs and sets thereof; AutoDock, Octave and Gaussian are in the works. They’ve put a bunch of effort into building a web service in front of Nimrod/G; that layer means their UIs are fairly self-contained bits of PHP that present an interface and translate inputs into a template Nimrod experiment. The web service has also enabled them to build some other cool bits and pieces, like experiment submission and results over email.
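
From a client's point of view, submitting through such a REST layer might look like the following sketch. The endpoint path, field names and token scheme are hypothetical; consult the CRF's actual API (see the eResearch Australasia presentation) for the real interface:

```python
# Hypothetical client-side submission to a REST layer fronting Nimrod/G.
import requests

CRF_BASE = "https://crf.example.edu.au/api"  # hypothetical base URL

def submit_r_job(script_path, token):
    """Upload an R script and ask the service to run it as a Nimrod experiment."""
    with open(script_path, "rb") as f:
        resp = requests.post(
            CRF_BASE + "/jobs",
            headers={"Authorization": "Bearer " + token},
            data={"application": "R"},  # selects which application template to use
            files={"script": f},        # the only input the user has to supply
        )
    resp.raise_for_status()
    return resp.json()["job_id"]        # poll this id for status and results

job_id = submit_r_job("analysis.R", token="...")
print("submitted job", job_id)
```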

hpcportal_namd_sm

Using Nimrod/G as the back-end resource meta-scheduler means the CRF can intermingle jobs over the local cluster and burst or overflow into the NeCTAR Research Cloud, and that’s the focus of the Monash & Griffith collaboration for this project. We’re now looking to implement scheduling functionality that will make it possible to put simple policies on resources, e.g. use my local cluster, but if jobs don’t start within 30 minutes then head to the cloud. There should be some updates following in that area after the eResearch conference!
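
In pseudo-Python, that kind of policy boils down to a deadline-based fallback. A minimal sketch; the Resource class and its methods are hypothetical stand-ins, not Nimrod/G's actual scheduler interface:

```python
# Deadline-based fallback: prefer the local cluster, burst to the cloud.
import time

class Resource:
    """Hypothetical handle to a compute resource (cluster or cloud)."""
    def __init__(self, name):
        self.name = name
    def submit(self, job):
        print("submitting", job, "to", self.name)
        return (self.name, job)
    def has_started(self, handle):
        return False  # stub: pretend the queue never moves
    def cancel(self, handle):
        print("cancelling", handle)

def run_with_fallback(job, local, cloud, deadline_secs=30 * 60, poll_secs=60):
    """Prefer `local`; if the job hasn't started by the deadline, use `cloud`."""
    handle = local.submit(job)
    waited = 0
    while waited < deadline_secs:
        if local.has_started(handle):
            return handle              # the cluster picked it up in time
        time.sleep(poll_secs)
        waited += poll_secs
    local.cancel(handle)               # still queued: give up on the cluster
    return cloud.submit(job)           # overflow to the NeCTAR Research Cloud

run_with_fallback("namd-run-42", Resource("local-cluster"), Resource("nectar"),
                  deadline_secs=3, poll_secs=1)
```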

Using the NeCTAR Research Cloud for Delivering Hands-on Bioinformatics Training (Adelaide)

A 1-day training workshop is scheduled for September 27, 2013 in Adelaide.

The training workshop aims to provide attendees with relevant hands-on experience in using the NeCTAR Research Cloud for bioinformatics.
The R@CMon team will be helping to deliver this workshop.

For more detailed information (contacts, venue, registration) on the training workshop, visit this BIG SA page.

Upcoming Software Carpentry Bootcamp for Bioinformaticians (Adelaide/Melbourne)

Two Software Carpentry Bootcamps are scheduled for September/October 2013.

The bootcamps are designed for bioinformaticians to improve their productivity through good programming and software development practices.
The bootcamps will be held in Adelaide in September 2013 and in Melbourne in October 2013.
They will be delivered using the NeCTAR Research Cloud, and we, the R@CMon team, will be assisting in these bootcamps.

The bootcamps are supported by a number of partner organisations.

Visit the Australian Bioinformatics Network site for details on the bootcamps.

More information about the Adelaide bootcamp can be found here.
More information about the Melbourne bootcamp can be found here.

Visit the Software Carpentry site for more information about Software Carpentry and future international bootcamps.

Promoting Underworld-Geothermal Collaborations

Yufei Xi, a researcher from China University of Geosciences, recently visited the Monash Geodynamics group at Monash University.
During her 3-month visit, she developed models with Underworld for understanding the heat flow of Guangdong province in south-east China.

To continue our collaborations upon her return to China, we’ve developed a machine image fully configured with Underworld/Geothermal and MPI enabled. This machine image has been instantiated on the Monash node of the NeCTAR Research Cloud and is used for easy model collaboration and Underworld production runs.
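
As a sketch of what kicking off a production run on such an instance looks like (Underworld models are typically driven by an XML input file; the binary name, core count and model file below are assumptions, not the actual configuration):

```python
# Launch an MPI-enabled Underworld run and wait for it to complete.
import subprocess

def run_underworld(model_xml, ncores=8):
    """Run Underworld under mpirun on the local instance."""
    subprocess.run(
        ["mpirun", "-np", str(ncores), "Underworld", model_xml],
        check=True,  # raise if the solver exits with an error
    )

run_underworld("guangdong_heatflow.xml", ncores=16)  # hypothetical model file
```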