lev_lafayette's blog

EasyBuild: Building Software with Ease

Building software from source is necessary for performance and development reasons. However, it can come with complex dependency and compiler requirements, which have to be stated explicitly in research computing to ensure replication of results. EasyBuild, originally developed by the Jülich Supercomputing Centre, Ghent University, and the Texas Advanced Computing Center, is a tool that allows software to be built with ease, managing the complex dependencies and toolchains and integrating by default with the Lmod environment modules system.
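For those unfamiliar with the tool, each application is described by an "easyconfig" file, which uses Python syntax to declare the name, version, toolchain, and sources of a build. The following is a minimal, hypothetical sketch (the package, versions, and URLs are illustrative only); EasyBuild would resolve its dependencies and build it with a command such as eb zlib-1.2.11-GCC-8.2.0.eb --robot.

    # A hypothetical easyconfig sketch; package, versions, and URLs are illustrative.
    easyblock = 'ConfigureMake'

    name = 'zlib'
    version = '1.2.11'

    homepage = 'https://zlib.net/'
    description = "The zlib compression library, used here purely as an example."

    # The toolchain determines which compiler (and toolchain libraries) EasyBuild uses.
    toolchain = {'name': 'GCC', 'version': '8.2.0'}

    source_urls = ['https://zlib.net/fossils/']
    sources = [SOURCE_TAR_GZ]

    moduleclass = 'lib'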

Praise-Singing Poppler Utilities

Last year I gave a presentation at Linux Users of Victoria entitled "Being An Acrobat: Linux and PDFs" (there was an additional discussion, not in the presentation, about embedding JavaScript in a PDF and some related security issues, but that's for another post). Part of this presentation was singing the praises of Poppler Utilities (named after the Futurama episode "The Problem with Popplers").
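For a flavour of what those utilities do, here is a minimal sketch that shells out to Poppler's pdftotext to extract the text layer of a PDF while preserving its layout; it assumes poppler-utils is installed, and the file name is illustrative.

    # Minimal illustration: extract the text layer of a PDF with Poppler's pdftotext.
    # Assumes poppler-utils is installed; report.pdf is an illustrative file name.
    import subprocess

    subprocess.run(["pdftotext", "-layout", "report.pdf", "report.txt"], check=True)
    print("Wrote report.txt")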

Simple FOSS versus Complex Enterprise Software

As is often the case, the real IT operators in large organisations find themselves having to deal with "enterprise" software which has been imposed upon them. The decision to implement such software is usually driven by perceived business requirements (which is reasonable enough), but with little consideration of operations or of flexibility for new, or even assumed, needs.

Python 2.7.x with GCC 8.x and EasyBuild

An attempted build of Python-2.7.13 with GCC-8.2.0 led to an unexpected error where the build failed when generating the POSIX build variables (the generate-posix-vars step). This is rather important and, unsurprisingly, others in the Python community have noticed it as well, both this year and in a directly related issue from late 2016, with a recommended patch file provided on the Python-Dev mailing list.
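As a hedged sketch of how such a fix can be carried through EasyBuild, an easyconfig can list patch files to be applied after the source is unpacked; the patch file name below is purely illustrative, standing in for the fix posted to the Python-Dev list.

    # Hypothetical fragment of a Python-2.7.13 easyconfig; the patch file name is
    # illustrative, standing in for the fix recommended on the Python-Dev list.
    name = 'Python'
    version = '2.7.13'

    toolchain = {'name': 'GCC', 'version': '8.2.0'}

    source_urls = ['https://www.python.org/ftp/python/%(version)s/']
    sources = [SOURCE_TGZ]

    # EasyBuild applies listed patches after unpacking, before configure and build.
    patches = ['Python-2.7.13_fix-generate-posix-vars.patch']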

Performance Improvements with GPUs for Marine Biodiversity: A Cross-Tasman Collaboration

Identifying probable dispersal routes for marine populations is a data- and processing-intensive task for which traditional high performance computing systems are suitable, even for single-threaded applications. Whilst processing dependencies between the datasets exist, a large degree of independence between sets allows job arrays to be used to significantly improve processing time. Identification of bottlenecks within the code base suitable for GPU optimisation has, however, led to additional performance improvements, which can be coupled with the existing benefits from job arrays.
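As a rough sketch of the job-array side of this (the directory layout is an assumption for illustration, and a Slurm environment is assumed), each array element picks out its own dataset from the scheduler-provided index and processes it independently of the others.

    # Sketch of splitting independent datasets across a Slurm job array.
    # Assumes one entry per dataset under input_sets/ (an illustrative layout).
    import os
    import sys

    datasets = sorted(os.listdir("input_sets"))
    task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", "0"))

    if task_id >= len(datasets):
        sys.exit("No dataset for array index {}".format(task_id))

    dataset = datasets[task_id]
    print("Processing {} independently of the other sets".format(dataset))

Submitted with, say, sbatch --array=0-99, each array element runs concurrently against its own dataset, which is where the improvement in overall processing time comes from.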

Not The Best Customer Service (laptop.com.au)

You would think that with a website like laptop.com.au you would be sitting on a gold mine of opportunity. It would take real effort not to turn such a domain into a genuine advantage, to become the country's specialist and expert provider of laptops. But alas, some effort is required in this regard, and it involves what, in my considered opinion, is not doing the right thing. I leave you, gentle reader, to form your own opinion on the matter from the facts provided.

New Developments in Supercomputing

Over the past 33 years the International Supercomputing Conference (ISC) in Germany has become one of the world's major computing events, with the twice-yearly announcement of the Top500 list of systems, which continues to be dominated entirely by Linux. In June this year over 3,500 people attended ISC, with a programme of tutorials, workshops and miniconferences, poster sessions, student competitions, a vast vendor hall, and numerous other events.

Exploring Issues in Event-Based HPC Cloudbursting

The use of cloud compute, especially for single-node tasks, can provide a more cost-effective allocation of financial resources. The introduction of cloud-bursting to scheduling systems could ideally provide on-demand compute resources for High Performance Computing (HPC) systems, where queue wait times are a source of user consternation.
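A minimal sketch of the event-based idea follows, with the scheduler query and the cloud provisioning call left as hypothetical placeholders: if any pending job has waited longer than a threshold, an on-demand node is requested. The threshold and polling interval are illustrative.

    # Hedged sketch of an event-based cloud-bursting check. The two placeholder
    # functions stand in for a scheduler query and a cloud provider API call.
    import time

    WAIT_THRESHOLD = 3600   # seconds a job may wait in the queue before bursting
    POLL_INTERVAL = 300     # seconds between checks

    def pending_wait_times():
        """Placeholder: return the wait time in seconds of each pending job."""
        return []

    def provision_cloud_node():
        """Placeholder: request an on-demand node from the cloud provider."""
        print("Requesting an on-demand cloud compute node")

    def check_and_burst():
        # Burst only when at least one job has waited past the threshold.
        if any(wait > WAIT_THRESHOLD for wait in pending_wait_times()):
            provision_cloud_node()

    if __name__ == "__main__":
        while True:
            check_and_burst()
            time.sleep(POLL_INTERVAL)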

Transparency and Immersion in HPC

The development of the graphical user interface is widely considered a major phenomenological contribution to Human-Computer Interaction (HCI), providing an intuitive framework for data storage and processing encapsulated in the term "user friendly". Whilst this Windows-Icons-Menus-Pointer (WIMP) interface has been highly successful for a very large number of everyday computational tasks, the field of high performance computing (HPC) continues to use the command-line interface.
