Category Archives: Research Stories

Monash Business School Financial Markets Workshop

On April 30 and May 1, Associate Professor Paul Lajbcygier and Senior Lecturer Huu Nhan Duong from the Monash Business School organised a Financial Markets Workshop at the Monash Caulfield Campus, bringing in a number of prominent Australian and international market microstructure researchers as well as high-profile high frequency traders and regulators from the US. The workshop covered several research topics such as “market design and quality”; “high frequency trading”; “volatility and liquidity modelling”; “short selling”; “stock market crashes”; “cryptocurrencies”; and the real effect of financial markets on corporate decisions. The R@CMon team has worked with Paul’s group for several years now, supporting their “big data analysis” workflows on the research cloud and enabling them to crunch more data, which has contributed to several high-impact publications, ARC grant submissions and the attainment of major SEED funding. The international financial workshop marks the culmination of Paul’s group’s accomplishments in high frequency trading research over the years and serves as a foundation for a future critical mass of research in financial markets. The R@CMon team will continue to support Paul’s group and the Department of Banking and Finance as they work on more high-impact research and tackle the various computational challenges they may encounter along the journey.

XCMSPlus Metabolomics Analysis on R@CMon

At the start of 2017, the R@CMon team had its first user consultation with Dr. Sri Ramarathinam, a research fellow from the Immunoproteomics Laboratory (Purcell Laboratory) at the School of Biomedical Sciences at Monash University. Sri and his group study metabolomic compounds in various samples by conducting a “search” and “identification” process using a pipeline of analysis and visualisation tools. The lab has acquired a license to use the commercial XCMSPlus metabolomics platform from SCIEX in their workflow. XCMSPlus provides a powerful solution for the analysis of untargeted metabolomics data in a stand-alone configuration, which will greatly increase the lab’s capacity to analyse more samples, with faster generation and easier interpretation of results.

XCMSPlus main login page, the entry point to the complete metabolomics platform

During the first engagement meeting with Sri and the lab, it was highlighted that a specialised hosting platform (with appropriate storage and computational capacity) would be required for XCMSPlus. XCMSPlus is distributed by the vendor as a stand-alone appliance (a “personal cloud”). As an appliance, XCMSPlus has been optimised and packaged to be deployed on a single, multi-core, high-memory machine. An added minor complication is that the appliance was distributed in VMware’s appliance format, which needed to be converted into an OpenStack-friendly format. The R@CMon team provided the hosting platform required for XCMSPlus through the Monash node of the Nectar Research Cloud.
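For context, converting a VMware appliance for OpenStack is commonly done with the `qemu-img` tool. The sketch below is illustrative only; the file and image names are hypothetical, not those of the actual deployment:

```shell
# An OVA appliance is a tar archive; unpack it to get the VMDK disk inside
tar -xf xcmsplus-appliance.ova

# Convert the VMDK disk to qcow2, a format OpenStack consumes natively
qemu-img convert -f vmdk -O qcow2 xcmsplus-disk1.vmdk xcmsplus.qcow2

# Register the converted image with the OpenStack image service (Glance)
openstack image create --disk-format qcow2 --container-format bare \
    --file xcmsplus.qcow2 "XCMSPlus appliance"
```

The raw format would also work, but qcow2 keeps the uploaded image sparse and smaller.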

Analysis results and visualisation in XCMSPlus

A dedicated Nectar project has been provisioned for the lab, which is now being used to host XCMSPlus. The project also has enough capacity for future expansion and new analysis platform deployments. The R@CMon-hosted (and supported) XCMSPlus platform for the Immunoproteomics Laboratory is the first custom XCMSPlus deployment in Australia. Being the first of its kind, it encountered some minor issues during its first test runs; these were eventually resolved through the collaborative troubleshooting efforts of the R@CMon team, the lab and the vendor. After several months of use, with hundreds of jobs submitted and processed by XCMSPlus (and counting), the lab is continuing to fully integrate it into its analysis workflow. The R@CMon team is actively engaging with the lab to support its adoption of XCMSPlus and to plan future analysis workflow expansions.

HIVed Database on R@CMon

Measuring changes in gene expression levels and determining differentially expressed genes during the processes of human immunodeficiency virus (HIV) infection, replication and latency is instrumental in further understanding HIV infection. These studies are vital in developing strategies for eradicating the virus from the human body. Dr. Chen Li, a research fellow from the Immunoproteomics Laboratory at Monash University, has developed a novel compendium of comprehensive functional gene annotations from gene expression and proteomics studies. The genes in the compendium have been carefully curated and shown to be differentially expressed during HIV infection, replication and latency.

The HIVed Online Database, Front Page

The R@CMon team assisted with the deployment of the online database – HIVed – on the Monash node of the NeCTAR Research Cloud. The system has been running on R@CMon and serving the public community for more than a year. HIVed is considered the first fully comprehensive database combining datasets from a wide range of experimental studies, carefully curated across a variety of experimental conditions. The datasets are further enriched by integration with other public databases, which provide additional annotations for each data point. The HIVed online database has been developed to facilitate functional annotation and experimental hypothesis generation for HIV-related genes, with an intuitive web interface that enables dynamic presentation of common threads across HIV latency and infection conditions and measurements. The work behind HIVed has recently been published in Scientific Reports, and the Immunoproteomics Laboratory plans to incorporate new experimental studies and external annotations into the HIVed database as they become available.

Melbourne Weather Server on R@CMon

Dr. Simon Clarke is a senior lecturer from the School of Mathematical Sciences at Monash University. He has been granted permission by the Bureau of Meteorology to repackage its observational and forecasting Melbourne weather data for downstream analysis and visualisation. Weather data from the bureau is downloaded and processed at regular 10-minute intervals. Various metrics and visualisations are then computed using a custom MATLAB batch script developed in-house. The resulting output is fed to a web server for public presentation, with integrations to other external sites hosted by the bureau. The original weather server was housed on a “legacy” hosting platform that had reached its end of life, so the Melbourne weather server needed a new home.

Melbourne Weather Server

The R@CMon team engaged with Simon to scope the weather server’s various hosting requirements. Aside from traditional LAMP-style hosting, the server also needed direct access to MATLAB’s batch mode functionality. A new R@CMon-hosted instance was deployed on the Monash node of the NeCTAR Research Cloud, and a standard LAMP stack was installed and configured on it. A Monash University-licensed installation of MATLAB was made available on the new weather server, allowing the downstream analysis of the raw data from the bureau to be conducted.
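A setup like this is typically driven by cron. The crontab entry and wrapper below are a hypothetical sketch, not the actual deployment: the script path, feed URL variable and MATLAB function name are all assumptions. Note that `matlab -batch` requires MATLAB R2019a or later; `matlab -r "fn; exit"` is the older equivalent.

```shell
# Hypothetical crontab entry: fetch and process the bureau's feed every 10 minutes
# */10 * * * * /opt/weather/update_weather.sh >> /var/log/weather.log 2>&1

# update_weather.sh -- download the latest observations, then run the
# in-house MATLAB batch script non-interactively
curl -s -o /opt/weather/data/latest.json "$BOM_FEED_URL"
matlab -batch "process_weather('/opt/weather/data/latest.json')"
```

Running MATLAB in batch mode this way needs no display, which suits a headless cloud instance.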

Melbourne Weather Server Visits for 2017

The new Melbourne weather server is now publicly accessible and available across the world. The regular live feed serves the Australian and international community with live Melbourne weather observations and forecasts. With the support of the R@CMon team, it will continue to do so for many years to come.

Geodata Server on R@CMon

The Australian Bureau of Statistics (ABS) provides public access to internet activity data as “data cubes” under the catalogue number “8153.0”. These statistics are derived from data provided by internet service providers (ISPs) and offer an estimate of the number of users (frequency) having access to a specific internet technology such as ADSL. While this survey is adequate for general observations, the granularity is too coarse to assess the impact of internet access on Australian society and economic growth. The Geodata Server project, led by Klaus Ackermann (Faculty of Business and Economics, Monash University), was created to provide significantly enhanced granularity on internet usage, in both the temporal and spatial dimensions, at the local government level for Australia and for other cities worldwide.

IPv4 Heatmap and Project Background, Ackermann, Angus & Raschky: Economics of Technology, Wombat 2016

One of the main challenges in the project is the analysis of 1.5 trillion observations from the ABS data sets. The project requires high-performance and high-throughput computational resources to analyse this vast amount of data, as well as a substantial amount of storage space for reference and computed data. Another major challenge is how to architect the analysis pipeline to fully utilise the available resources. Over the last three years, several iterations of the methodology and infrastructure setup have been developed and tested to optimise the analysis pipeline. The R@CMon team engaged with Klaus to address the project’s various computational, storage and analysis requirements. A dedicated NeCTAR project has been provisioned for Geodata Server, which includes the computational resources to be used on the Monash node of the NeCTAR Research Cloud. Computational storage was provisioned to the project via the VicNode allocation scheme.

Processing Workflow on R@CMon, Ackermann, Angus & Raschky: Economics of Technology, Wombat 2016

With the computational and storage resources in place, the project was able to progress with the development of the analysis pipeline based on various “big data” technologies. In coordination with the R@CMon team, several Hadoop distributions were evaluated, namely Cloudera, MapR and Hortonworks. The latter was chosen for its ease of installation and 100% open source commitment. The resulting cluster consists of 32 cores with 8TB of Hadoop Distributed File System (HDFS) storage divided among 4 nodes. Tested configurations also included 16 cores across 2 nodes, and 32 cores on 1 node. The data has been distributed across 2TB volume drives, and the master node of the cluster has an extra-large volume attached to store the raw (reference) data. To optimise the performance of the distributed HDFS, all loaded data is stored in compressed Lempel–Ziv–Oberhumer (LZO) format to reduce the burden on the network, which is shared among other tenants on the NeCTAR Research Cloud.
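Loading data into HDFS in LZO form is a compress-then-put-then-index sequence. The sketch below is illustrative only; the file names, HDFS paths and the hadoop-lzo jar location (shown in a Hortonworks-style layout) are assumptions:

```shell
# Compress a raw observation file with LZO before loading it into HDFS
lzop observations.csv                      # produces observations.csv.lzo

# Copy the compressed file into the Hadoop filesystem
hadoop fs -put observations.csv.lzo /data/geodata/

# Index the LZO file so MapReduce jobs can split it across workers;
# without the index, each .lzo file is processed by a single mapper
hadoop jar /usr/hdp/current/hadoop-client/lib/hadoop-lzo-*.jar \
    com.hadoop.compression.lzo.DistributedLzoIndexer \
    /data/geodata/observations.csv.lzo
```

The indexing step matters here: LZO is only splittable, and therefore parallelisable, once a sidecar index has been built.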

Multi City Analysis, Ackermann, Angus & Raschky: Economics of Technology, Wombat 2016

Through R@CMon, the Geodata Server project was able to successfully handle and curate trillions of IP-activity observations and link these data accurately to their geo-locations in single- and multi-city models. Analysis tools were laid down as part of the pipeline, from high-performance computing (HPC) on Monash’s supercomputers to Hadoop-style data parallelisation on the research cloud. From this, preliminary observations suggest strong spatial correlation and evidence of discontinuities in IP activity at political boundaries, pointing to cultural and/or institutional factors. Some of the models produced by this project are currently being curated in preparation for public release to the wider Australian research community. The models are actively being improved with additional IP statistical data from other cities around the world. As the data grows, the analysis pipeline and the computational and storage requirements are expected to scale as well. The R@CMon team will continue to support the Geodata Server project as it reaches its next milestones.

Worm Strains Catalogue on R@CMon

Associate Professor Roger Pocock is the head of the Neuronal Development and Plasticity Laboratory at Monash University. Roger’s lab investigates the fundamental mechanisms that factor into brain development, using the organism Caenorhabditis elegans as a model system. Roger joined Monash University in 2014, bringing with him a comprehensive catalogue of worm strain data that had been carefully curated over years at his previous laboratory at the University of Copenhagen. The strains catalogue is held in a FileMaker database, which the laboratory members regularly update and query for current and new strain entries.

C. elegans as a model system, Neuronal Development and Plasticity Laboratory

FileMaker (and its derivatives) is commercial software for creating custom applications for a variety of target platforms (e.g. web, iPad, Windows, Mac). The worm strains catalogue from Roger’s lab was the first FileMaker-based database deployment on R@CMon. The R@CMon team installed and configured a fully-licensed, latest version of FileMaker Pro on the Monash node of the NeCTAR Research Cloud, inside a dedicated tenancy (i.e. computational and storage resources) provisioned for the lab. The FileMaker software itself has been deployed on a Monash-licensed Windows Server instance, which has access to the latest system and security updates from Microsoft.

Worm Strains Catalogue Entry

The FileMaker WebDirect feature has been enabled on the new server to allow easy access to the strains catalogue from standard web browsers via the internet, without any need for additional programming or software installation on the user’s client machine. Proper HTTPS has been prepared and enabled on the new WebDirect interface. Since then, and with the ongoing support of R@CMon, the catalogue has grown to include external collaborators’ models that are derived from other strains.

Monash Collections Online

The Monash University Library’s Special Collections are a large compilation of various media, such as rare books, music and multimedia, in various forms and languages such as Slavic, Asian, Yiddish and Jewish. These collections are considered among the most comprehensive in the whole of Australasia. Hosted on legacy infrastructure, the collection’s original web presence had become a maintenance challenge for library administrators due to its legacy hardware and software stack. There was also a push in early 2016 to centralise the university’s data centre infrastructure, where the legacy collections platform was being hosted. This presented an opportunity for the Monash University Library to migrate the collections onto one of the latest community-supported public collections publishing platforms. After evaluation, the open-source, freely available Omeka LAMP stack was chosen for the new platform.

Monash Collections Online Main Page

The R@CMon team engaged the various library stakeholders to spin up a test instance of Omeka on the Monash node of the NeCTAR Research Cloud. The team at the library tested the various hosting and publishing capabilities of Omeka, including the installation and integration of custom themes and plugins (e.g. multimedia playback plugins). After several consultations, demonstrations and rounds of rigorous testing between the R@CMon and library teams, the decision was made to adopt Omeka as the new publishing and showcasing platform for the library’s special collections.

Monash Collections Online Tall Tales and True Exhibition

The R@CMon team deployed a highly-available instance of Omeka on the NeCTAR Research Cloud, utilising the LAMP stack plus HAProxy. Through VicNode, a dedicated and accessible storage share has been provisioned for the collections, housing the various types of files and media for current and future public showcases and exhibitions. The newly minted Monash Collections Online platform was officially unveiled at the start of 2017 and is now publicly available. The platform is regularly updated with new content by the library team. Since its release, the R@CMon team has continued to support the new platform through standard and regular engagements with the Monash University Library.
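For illustration, a highly-available LAMP deployment of this kind typically places HAProxy in front of two or more web instances. The minimal haproxy.cfg fragment below is a sketch only; the backend names and addresses are hypothetical, not the actual deployment:

```
frontend omeka_http
    bind *:80
    mode http
    default_backend omeka_web

backend omeka_web
    mode http
    balance roundrobin
    # Health-check each backend so failed instances drop out of rotation
    option httpchk GET /
    server web1 192.168.0.11:80 check
    server web2 192.168.0.12:80 check
```

With round-robin balancing and health checks, the site stays reachable if one web instance goes down, which is the "highly-available" property described above.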