Modern computing has enabled research that was previously considered unfeasible. Parallel algorithms have been developed to take advantage of powerful multicore machines, and for even more computing power, these machines can be aggregated into large high-performance computing (HPC) clusters. On a cluster, jobs can be spread across a large number of nodes instead of being executed on a single machine. This can substantially decrease the time required to execute resource-intensive modeling and simulation jobs, a common requirement in the field of biophysics, and is equally useful when a large number of much smaller jobs needs to be executed. Unfortunately, running jobs on a cluster involves a steep learning curve. Jobs must be submitted via software systems known as resource managers, which can usually only be operated from the command line and require expertise that most researchers do not have.
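
To give a feel for that learning curve, the sketch below shows roughly what command-line job submission looks like with a PBS-style resource manager such as Torque. The job script, resource requests, and program names are illustrative assumptions, not part of JMS:

```python
import subprocess

# A minimal PBS/Torque job script. The job name, node/core counts,
# walltime, and simulation command are hypothetical examples.
job_script = """#!/bin/bash
#PBS -N md_simulation
#PBS -l nodes=2:ppn=8
#PBS -l walltime=24:00:00
cd $PBS_O_WORKDIR
mpirun -np 16 ./simulate input.conf
"""

# qsub accepts a job script on standard input and prints the new job ID.
result = subprocess.run(["qsub"], input=job_script,
                        capture_output=True, text=True)
print("Submitted job:", result.stdout.strip())
```

Writing scripts like this by hand, and then tracking jobs with further commands such as qstat, is exactly the kind of low-level interaction a web front-end can hide.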

To solve this problem, we have developed JMS, a web-based front-end to an HPC cluster. JMS allows users to run, manage, and monitor jobs via a user-friendly web interface. It also lets users create new tools that can be pipelined together with existing tools to build complex computational workflows. These workflows can be saved, versioned, and reused as needed. A detailed history of all jobs is stored and can be accessed and downloaded at any time. All tools, workflows, and jobs can be shared with other users to create a highly collaborative work environment. In addition, tools and workflows can be made publicly accessible via external interfaces. Although applicable to any field, JMS is currently being tailored toward structural bioinformatics with the introduction of tools and workflows for homology modeling, docking studies, and molecular dynamics.
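
As a sketch of how such an external interface might be consumed programmatically, the snippet below submits a job over HTTP. The base URL, endpoint path, token scheme, tool name, and parameters are all hypothetical placeholders, not the documented JMS API:

```python
import requests

# Hypothetical JMS server and endpoint; adjust to the actual deployment.
BASE_URL = "http://jms.example.org/api"

response = requests.post(
    f"{BASE_URL}/jobs",
    headers={"Authorization": "Token <your-api-token>"},  # placeholder token
    json={
        "tool": "homology_modeling",                      # hypothetical tool name
        "parameters": {"sequence": "input.fasta", "templates": 5},
    },
)
response.raise_for_status()
print("Job submitted:", response.json())
```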

JMS has been open-sourced and is freely available at https://github.com/RUBi-ZA/JMS.

JMS has been published in PLoS ONE:
David K. Brown, David L. Penkler, Thommas M. Musyoka, and Özlem Tastan Bishop (2015). "JMS: An open source workflow management system and web-based cluster front-end for high performance computing." PLoS ONE 10(8): e0134273. doi: 10.1371/journal.pone.0134273