Did you know you can bring down an entire HPC cluster with an old script? Well, this week I had exactly that experience. As the systems administrator for a seriously aging cluster serving over 800 post-graduate and post-doctoral researchers, "stress" is a normal part of daily life (for future reference: it's probably killing me).
It is not unusual for a few jobs to fall into a batch hold state when one is managing a cluster; users often write PBS submission scripts with errors in them (such as requesting more cores than are actually available). When sysadmins have the opportunity to do so, they should check such scripts and educate the users on what they have done wrong.
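By way of a sketch (the job ID and hold type below are hypothetical), the usual sequence with Moab and TORQUE looks something like this:

# List jobs currently blocked or held (Moab)
showq -b
# Show why a particular job is held
checkjob 12345
# Once the submission error is fixed, release the hold;
# releasehold is the Maui/Moab command, qrls the TORQUE equivalent
releasehold -b 12345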
Multicore World is a small annual international conference held in New Zealand/Aotearoa, sponsored by OpenParallel. I have been fortunate enough to act as MC for all but one of the five conferences since its inception, this year also presenting a short paper on the introduction of the new HPC/Cloud hybrid at the University of Melbourne.
It is finally time for a book like this one.
When parallel programming was just getting off the ground in the late 1960s, it started as a battle between starry-eyed academics who envisioned how fast and wonderful it could be, and cynical hard-nosed executives of computer companies who joked that “parallel computing is the wave of the future, and always will be.”
The following describes a procedure for bringing up a compute node in TORQUE that has been marked as 'Down'. Whilst the procedure, once known, is relatively simple, arriving at it required some investigation, so to save others time this document may help.
1. Determine whether the node is really down.
Following an almighty NFS outage, quite a number of compute nodes were marked as "down". However, the two standard tools, `mdiag -n | grep "Down"` and `pbsnodes -ln`, gave significantly different results.
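Assuming the discrepancy comes down to a dead MOM daemon rather than a dead node (the node name below is hypothetical), a sketch of the checks and the fix:

# Compare the scheduler's and the server's view of node state
mdiag -n | grep -i down
pbsnodes -ln
# If the node answers but its pbs_mom has died, restart the daemon there
ssh compute-01 service pbs_mom restart
# Then, from the head node, clear the down/offline marking
pbsnodes -c compute-01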
A far too venerable cluster (Scientific Linux release 6.2, 2.6.32 kernel, Opteron 6212 processors) with more than 800 user accounts makes use of NFSv4 to access storage directories. It is a typical architecture, with a management node, a login node, and a number of compute nodes. The directory /usr/local is on the management node and mounted across to the login and compute nodes. User and project directories are distributed across two storage arrays, appropriately named storage1 and storage2.
[root@edward-m ~]# cat /etc/fstab
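(The actual entries are elided here; purely as an illustrative sketch, the NFSv4 mounts in this layout, as they would appear on a login or compute node, might look like the following, with the export paths and options being assumptions:)

storage1:/export/home      /home       nfs4  defaults,hard,intr  0 0
storage2:/export/projects  /projects   nfs4  defaults,hard,intr  0 0
edward-m:/usr/local        /usr/local  nfs4  ro,defaults         0 0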
I had a process in an "uninterruptible sleep" state. Trying to kill it is, unsurprisingly, unhelpful. All the literature on the subject will say that it cannot be killed, and it's right. It's called "uninterruptible" for a reason. An uninterruptible process is in a system call that cannot be interrupted by a signal (such as SIGKILL or SIGTERM).
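A minimal way to spot such processes, assuming a reasonably standard procps, is to pull out everything whose state starts with D and see which kernel function it is blocked in:

# List PID, state, kernel wait channel, and command for D-state processes
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'

The wchan column usually points at the culprit, for example an NFS or disk I/O call that will never return.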
Often on a cluster a user launches a compute job only to discover that they have some need to delete it (e.g., the data file is corrupt, there was an error in their application commands or PBS script). In TORQUE/PBSPro/OpenPBS etc this can be carried out by the standard PBS command, qdel.
[compute-login ~] qdel job_id
Sometimes, however, that simply doesn't work. An error message like the following is typical: "qdel: Server could not connect to MOM". I think I've seen this around a hundred times in the past few years.
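When that happens, a sketch of the usual escalation (the node name is hypothetical, and purging requires manager privileges in TORQUE):

# Check whether the MOM on the job's node is responding
momctl -h compute-01 -d 3
# If the MOM really is unreachable, purge the job from the server's
# records without waiting for the node to confirm
qdel -p job_id

The -p (purge) flag tells the server to forget the job regardless of the node's state, so it is best reserved for cases where the node itself is being rebooted or rebuilt.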
Once upon a time, in a generation past, letters would be received with written text. There was a default form (paper with ink or pencil) and an encoding (the language of the correspondents). Whilst this may all seem very trivial, it has a particular importance for the subject at hand in the context of contemporary electronic mail. Can the recipient of your message actually read what you've sent them? Could you imagine a situation where people knowingly sent written correspondence in a format that the recipient couldn't read? Have you ever received an email attachment that you couldn't open?
For the past eight years I've worked at the Victorian Partnership for Advanced Computing, also known as V3 Alliance, its trading name after merging with the Victorian eResearch Initiative. Today is my last official day, although I suspect I'll be doing "VPAC things" for a while yet.