Tag Archives: Using Storage

Geodata Server on R@CMon

The Australian Bureau of Statistics (ABS) provides public access to internet activity data as “data cubes” under catalogue number 8153.0. These statistics are derived from data provided by internet service providers (ISPs) and estimate the number of users with access to a specific internet technology, such as ADSL. While this survey is adequate for general observations, its granularity is too coarse to assess the impact of internet access on Australian society and economic growth. The Geodata Server project, led by Klaus Ackermann (Faculty of Business and Economics, Monash University), aims to provide significantly finer granularity on internet usage, in both the temporal and spatial dimensions, at the local-government level for Australia and for other cities worldwide.

IPv4 Heatmap and Project Background, Ackermann, Angus & Raschky: Economics of Technology, Wombat 2016

One of the main challenges in the project is the analysis of 1.5 trillion observations from the ABS data sets. The project requires high-performance and high-throughput computational resources to analyse this vast amount of data, and a substantial amount of storage is vital for holding reference and computed data. Another major challenge is how to architect the analysis pipeline to fully utilise the available resources. Over the last three years, several iterations of the methodology and infrastructure setup have been developed and tested to optimise the analysis pipeline. The R@CMon team engaged with Klaus to address the various computational, storage and analysis requirements of the project. A dedicated NeCTAR project has been provisioned for Geodata Server, which includes the computational resources to be used on the Monash node of the NeCTAR Research Cloud. Computational storage was provisioned to the project via the VicNode allocation scheme.

Processing Workflow on R@CMon, Ackermann, Angus & Raschky: Economics of Technology, Wombat 2016

With the computational and storage resources in place, the project was able to progress with the development of the analysis pipeline based on various “big data” technologies. In coordination with the R@CMon team, several Hadoop distributions were evaluated, namely Cloudera, MapR and Hortonworks. The latter was chosen for its ease of installation and 100% open-source commitment. The resulting cluster consists of 32 cores with 8TB of Hadoop Distributed File System (HDFS) storage divided among 4 nodes; configurations of 16 cores across 2 nodes and 32 cores on a single node were also tested. The data is distributed across 2TB volumes, and the master node of the cluster has an extra-large volume attached to store the raw (reference) data. To optimise the performance of the distributed HDFS, all loaded data is stored in compressed Lempel–Ziv–Oberhumer (LZO) format to reduce the burden on the network, which is shared with other tenants of the NeCTAR Research Cloud.
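The kind of aggregation such a Hadoop cluster performs can be sketched as a Hadoop Streaming job in a few lines of Python. This is an illustrative example only: the CSV layout and the region key are hypothetical, not the project’s actual schema. A mapper emits one record per observation keyed by region, and a reducer sums the counts.

```python
from itertools import groupby

def map_line(line):
    """Mapper: parse a 'timestamp,ip,region' record and emit (region, 1).
    The field layout here is hypothetical, for illustration only."""
    parts = line.strip().split(",")
    if len(parts) == 3:
        yield parts[2], 1

def reduce_pairs(pairs):
    """Reducer: sum the counts for each region key.
    Hadoop Streaming delivers mapper output sorted by key, which the
    sort() below simulates locally."""
    for region, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield region, sum(count for _, count in group)

if __name__ == "__main__":
    # Local simulation of the map -> shuffle -> reduce cycle.
    lines = ["2016-01-01T00:00,1.2.3.4,VIC",
             "2016-01-01T00:05,5.6.7.8,NSW",
             "2016-01-01T00:09,1.2.3.5,VIC"]
    mapped = [kv for line in lines for kv in map_line(line)]
    for region, total in reduce_pairs(mapped):
        print(f"{region}\t{total}")
```

At cluster scale the same two functions would run as separate mapper and reducer processes over HDFS blocks, with Hadoop handling the shuffle in between.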

Multi City Analysis, Ackermann, Angus & Raschky: Economics of Technology, Wombat 2016

Through R@CMon, the Geodata Server project has successfully handled and curated trillions of IP-activity observations and accurately linked them to their geo-locations in single- and multi-city models. The analysis pipeline spans high-performance computing (HPC) on Monash’s supercomputers through to Hadoop-style data parallelisation on the research cloud. Preliminary observations suggest strong spatial correlation in IP activity and evidence of discontinuities at political boundaries, pointing to cultural and/or institutional factors. Some of the models produced by the project are currently being curated in preparation for public release to the wider Australian research community, and the models are actively being improved with additional IP statistical data from other cities around the world. As the data grows, the analysis pipeline and the computational and storage requirements are expected to scale as well. The R@CMon team will continue to support the Geodata Server project as it reaches its next milestones.
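The spatial-correlation finding can be illustrated with Moran’s I, a standard measure of spatial autocorrelation. The sketch below is a generic textbook implementation, not the project’s actual code; the example regions and adjacency weights are invented.

```python
def morans_i(values, weights):
    """Moran's I for a list of observations and a symmetric
    adjacency-weight matrix (weights[i][j] > 0 iff i and j are
    neighbours). I > 0 indicates spatial clustering; the expected
    value under no autocorrelation is -1/(n-1)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_total = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_total) * (num / den)

# Four regions on a line, each adjacent to its neighbours; high values
# clustered at one end, so positive spatial autocorrelation is expected.
values = [1.0, 1.0, 0.0, 0.0]
weights = [[0, 1, 0, 0],
           [1, 0, 1, 0],
           [0, 1, 0, 1],
           [0, 0, 1, 0]]
print(round(morans_i(values, weights), 4))  # 0.3333
```

A sharp drop in I when the weight matrix is restricted to pairs straddling a political boundary is one way such boundary discontinuities show up.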

Monash Connections Online

The Monash University Library’s Special Collections are a large and diverse compilation of media, including rare books, music and multimedia, in various forms and languages such as Slavic, Asian, Yiddish and Jewish materials. These collections are considered among the most comprehensive in the whole of Australasia. Hosted on legacy infrastructure, the collections’ original web presence had become a maintenance challenge for library administrators due to its ageing hardware and software stack. There was also a push in early 2016 to centralise the university’s data centre infrastructure, where the legacy collections platform was hosted. This presented an opportunity for Monash University Library to migrate the collections onto one of the latest community-supported public collections publishing platforms. After evaluation, the open-source, freely available Omeka, running on a LAMP stack, was chosen for the new platform.

Monash Collections Online Main Page

The R@CMon team engaged the various library stakeholders to spin up a test instance of Omeka on the Monash node of the NeCTAR Research Cloud. The team at the library tested the various hosting and publishing capabilities of Omeka, including installation and integration of custom themes and plugins (e.g. multimedia playback plugins). After several consultations, demonstrations and rounds of rigorous testing between the R@CMon and library teams, the decision was made to adopt Omeka as the new publishing and showcasing platform for the library’s special collections.

Monash Collections Online Tall Tales and True Exhibition

The R@CMon team deployed a highly available instance of Omeka on the NeCTAR Research Cloud, utilising the LAMP stack plus HAProxy. Through VicNode, a dedicated and accessible storage share was provisioned for the collections, housing the various types of files and media for current and future public showcases and exhibitions. The newly minted Monash Collections Online platform was officially unveiled at the start of 2017 and is now publicly available. The new platform is regularly updated with new content by the library team, and since its release the R@CMon team has continued to support it through standard, regular engagements with the Monash University Library.
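A minimal HAProxy configuration for this kind of setup might look like the following sketch. The backend names and addresses are hypothetical (192.0.2.0/24 is the documentation range), not the production values:

```
frontend omeka_http
    bind *:80
    default_backend omeka_web

backend omeka_web
    balance roundrobin
    option httpchk GET /
    # Two LAMP instances behind the proxy; HAProxy stops routing
    # to a server whose health check fails.
    server web1 192.0.2.11:80 check
    server web2 192.0.2.12:80 check
```

With two or more Omeka web servers behind the proxy, a single instance failure no longer takes the collections offline.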

Disruptive change in the clinical treatment of pancreatic cancer

Professor Jenkins’ research focuses on pancreatic cancer, an inflammation-associated cancer and the fourth most common cause of cancer death worldwide, with an extremely low 5% five-year survival rate. Studies typically compare gene expression patterns between normal and cancerous pancreas to identify unique signatures that can indicate sensitivity or resistance to specific chemotherapeutic treatments.

“Using next generation gene sequencing, involving big instruments, big data and big computing – allows near-term disruptive change in the clinical treatment of pancreatic cancer.” Prof. Jenkins, Monash Health.

To date, gene expression studies have largely focused on samples taken from open surgical biopsy, a procedure known to be very invasive and possible in only 20% of pancreatic cancers. Prof Jenkins’ group, in collaboration with Dr Daniel Croagh from the Department of Upper Gastrointestinal and Hepatobiliary Surgery at Monash Medical Centre, recently trialled an alternative, less invasive procedure available to nearly all pancreatic cancer patients, known as endoscopic ultrasound-guided fine-needle aspirate (EUS-FNA), which uses a thin, hollow needle to collect samples of cells from which genetic material can be extracted and analysed. The challenge then becomes ensuring that gene sequencing from EUS-FNA samples is comparable to open surgical biopsy, so that established analysis and treatment can be used.


Twenty-four EUS-FNA-derived genetic samples from normal and cancerous pancreas were sequenced at the MHTP Medical Genomics Facility, producing a total of 40Gb of raw data. Those data were securely transferred onto R@CMon by the Monash Bioinformatics Platform for processing, statistical analysis and computational exploration using state-of-the-art bioinformatics methods.


Results thus far from this study show that data from EUS-FNA-derived samples were of high quality and allowed the identification of gene expression signatures distinguishing normal from cancerous pancreas. Professor Jenkins’ group is now confident that EUS-FNA-derived material not only has the potential to capture nearly all pancreatic cancer patients (compared to ~20% by surgery), but also to improve patient management and treatment in the clinic.

“The current clinical genomics research space requires specialized high performance computational and storage infrastructure to support the processing and long term storage of those so-called “big data”. Thus R@CMon plays a major role in the discovery and development of new therapies and the improvement of Human health care in general.” Roxane Legaie, Senior Bioinformatician, Monash Bioinformatics Platform


The Digital Object Identifier (DOI) Minter on R@CMon

The Monash Digital Object Identifier (DOI) Minter was developed by the ANDS-funded Monash University Major Open Data Collections (MODC) Project as an extensible service, and was deployed on the Monash node (R@CMon) of the NeCTAR Research Cloud to provide persistent, unique identifiers for datasets and research publications. A DOI is permanently assigned to a dataset or publication and records where it, or information about it, can be found on the Internet. The DOI does not change even if information about the dataset changes over time.

Store.Synchrotron's data publishing form

Store.Synchrotron’s data publishing form using the Monash DOI minter service.

The Monash DOI Minter gives Monash University the ability to mint DOIs for data collections that are hosted and managed by services on R@CMon, making DOIs easier than ever to integrate and access. For instance, the Monash Library can now use this service to mint DOIs for publicly accessible research collections. It is also being used by the Australian Synchrotron’s Store.Synchrotron service, which manages data produced by the Macromolecular Crystallography (MX) beamline and streamlines DOI minting for datasets through a publication workflow.

Demo publication

A demo published collection on Store.Synchrotron.

An MX beamline user can now collect data on the beamline which is stored, archived and made accessible through the Store.Synchrotron service. When the researcher has publication quality data, a copy of this data is deposited in the Protein Data Bank (PDB), with the appropriate metadata. The new publication workflow allows researchers to publish data hosted by the Store.Synchrotron service, with PDB metadata being automatically attached to the datasets, and a DOI being minted and activated after a researcher-selected embargo period. The DOI reference can then be included in their research papers.
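Under the hood, minting a DOI typically means registering metadata with an agency such as DataCite. The sketch below builds a minimal DataCite-style registration payload in Python; the field names follow the public DataCite REST API, but the prefix, title, URL and helper function are hypothetical, and this is not the Monash DOI Minter’s actual code.

```python
import json

def build_doi_payload(doi, title, creators, publisher, year, url):
    """Assemble a minimal DataCite-style DOI registration payload.
    All values are caller-supplied examples, not real records."""
    return {
        "data": {
            "type": "dois",
            "attributes": {
                "doi": doi,
                "titles": [{"title": title}],
                "creators": [{"name": c} for c in creators],
                "publisher": publisher,
                "publicationYear": year,
                "url": url,          # the landing page the DOI resolves to
                "event": "publish",  # activate the DOI (e.g. post-embargo)
            },
        }
    }

payload = build_doi_payload(
    doi="10.99999/example.2017.001",  # hypothetical prefix and suffix
    title="Example MX beamline dataset",
    creators=["Researcher, A."],
    publisher="Example University",
    year=2017,
    url="https://example.org/collection/001",
)
print(json.dumps(payload, indent=2))
```

In a real minting service this payload would be POSTed to the registration agency with authentication; an embargo workflow like Store.Synchrotron’s simply delays the activation step until the researcher-selected date.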

We think it is a brilliant pattern for accelerating the use of persistent identifiers for research data held at universities. To this end, we have made the DOI Minter available for others to instantiate.

R@CMon hosted Australia’s first Ceph Day

Ceph Days are a series of regular events, held at locations all around the world, in support of the open source Ceph community. In November, R@CMon hosted Australia’s first Ceph Day. The event drew 70-odd guests, many of whom were from interstate and a few from overseas, with participants from the research sector, private industry and ICT providers. It was a fantastic showcase of Australia’s growing Ceph community.

If you don’t already know, Ceph is an open-source technology for software-defined, distributed cluster storage. It means our storage backend can scale out essentially without limit, and our focus can shift to the access mechanisms for data.
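That scalability comes from placing data algorithmically (via Ceph’s CRUSH algorithm) rather than through a central lookup table. The toy Python sketch below uses rendezvous (highest-random-weight) hashing to illustrate the same idea; it is a simplification for intuition only, not CRUSH itself, and the device names are invented.

```python
import hashlib

def place(obj_name, osds, replicas=3):
    """Deterministically map an object to `replicas` storage devices.
    Every client computes the same placement with no central directory,
    and removing a device only remaps the objects that lived on it."""
    def score(osd):
        digest = hashlib.md5(f"{obj_name}:{osd}".encode()).hexdigest()
        return int(digest, 16)
    # The devices with the highest hash scores win the object.
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(8)]
print(place("research-dataset-042", osds))
```

Because placement is pure computation, adding capacity means adding devices, not rebuilding an index, which is what lets a Ceph cluster keep growing under the same access mechanisms.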

Check out the promo:

[youtube https://www.youtube.com/watch?v=vcK6KSA0DN0&w=500&h=281]

R@CMon has pioneered the adoption of Ceph for accessible research data storage, and in mid-2013 was the first NeCTAR Research Cloud node to provide un-throttled volume storage. R@CMon has also worked closely with what was Inktank, and is now Red Hat, to develop the support model for such an enterprise (see Ceph Enterprise – a disruptive period in the storage marketplace).

The day began with the Ceph Community Director, Patrick McGarry. His presentation covered the upcoming expanded Ceph metrics platform, what the Ceph User Committee has been up to, new community infrastructure for a better contributor experience, and revised open source governance.

[youtube https://www.youtube.com/watch?v=joCp3WByV9E&w=500&h=281]

Undoubtedly the highlight of the day was the joint talk given by R@CMon’s very own director, Steve Quenette, and technical lead, Blair Bethwaite. Here we explain Ceph in the context of the 21st century microscope – the tool each researcher creates to do modern day research. We also explain how we technically approached creating our fabric.

[youtube https://www.youtube.com/watch?v=aZNwQieDpfg&w=500&h=281]