Supercomputers: Current Status and Future Trends

The somewhat nebulous term "supercomputer" has a long history. Although first coined in the 1920s to refer to IBM's tabulators, in electronic computing the most important early contribution was the CDC 6600 in the 1960s, owing to its significant performance advantage over its competitors. Major technological advances since then have included vector processing, cluster architectures, massive processor counts, GPGPU technologies, and multidimensional torus interconnect topologies. Over the same period, operating systems have moved from proprietary, system-specific models, to UNIX-like generic operating systems, to free and open-source GNU/Linux.

The strength of supercomputers is their support for parallel processing, whether data parallelism or task parallelism, the latter implemented with shared-memory (OpenMP) or distributed-memory (MPI) models. As shared systems, supercomputers require job scheduling and resource management, functions that were initially incorporated into the operating system itself. NASA's Portable Batch System and its subsequent implementations (OpenPBS, PBS Pro, TORQUE) have been highly influential, although they are now being largely replaced by SLURM (Simple Linux Utility for Resource Management).
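As a minimal illustration of these two models, the sketch below combines MPI for distributed-memory parallelism across processes with OpenMP for shared-memory parallelism within each process. It assumes an MPI implementation and an OpenMP-capable compiler are available (typically built as mpicc -fopenmp hello_hybrid.c -o hello_hybrid); the file and program names are illustrative only:

    /* Hybrid parallel "hello world": MPI provides distributed-memory
       parallelism across processes (and nodes); OpenMP provides
       shared-memory parallelism via threads within each process. */
    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id    */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total MPI processes  */

        /* Each MPI process spawns its own team of OpenMP threads. */
        #pragma omp parallel
        {
            printf("MPI rank %d of %d: OpenMP thread %d of %d\n",
                   rank, size, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

Launched as, for example, mpirun -np 4 ./hello_hybrid, each of the four processes prints one line per thread; on a cluster, a resource manager such as SLURM would place those processes across the allocated nodes.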

Future issues include a significant gap between researchers' computational needs and their skills, the rise of data-parallel requirements, a recent trend toward open-source cores with closed extensions sought for monopolistic advantage, concurrency issues in the development of massively multicore processors, and a blurring of the distinction between tightly coupled and loosely coupled systems.

Presentation to the Melbourne PC Users Group, Inc., August 2nd, 2016
http://levlafayette.com/files/2016melbpc-supercomputers.pdf