The visit to the Center for Scientific Computing (CSC) was carried out on October 14, 2016. Based at the Riedberg campus of the Goethe University Frankfurt, the centre currently operates two Linux-based computer clusters, FUCHS and LOEWE-CSC. FUCHS has 14 air-cooled and 5 water-cooled racks using AMD Opteron (Istanbul and Magny-Cours) processors with 39,956 cores in total, a mixed 4X DDR-QDR InfiniBand fabric, and a parallel scratch file system with an aggregated bandwidth of 6 GB/s and a capacity of 600 TB, for a peak performance of 41 TFlops.
Following the acceptance of the University of Melbourne's paper on its new HPC-cloud hybrid system, Spartan, at the OpenStack Summit in Barcelona in late October, an opportunity presented itself to visit other HPC centres and review their architectures and training programmes.
For about two weeks before, and a week after, presenting at the OpenStack Summit in Barcelona, I had the opportunity to visit several of Europe's major high performance computing facilities, giving each a fairly standard pitch for the HPC-cloud hybrid system we had developed at the University of Melbourne.
The somewhat nebulous term "supercomputer" has a long history. Although first coined in the 1920s to refer to IBM's tabulators, in electronic computing the most important initial contribution was the CDC 6600 in the 1960s, due to its performance advantage over competitors. Over time, major technological advancements have included vector processing, cluster architecture, massive processor counts, GPGPU technologies, and multidimensional torus architectures for interconnects.
COBOL is a business-oriented programming language that has been in use since 1959, making it one of the world's oldest programming languages.
Despite being much criticised (and for good reasons), it is still a major programming language in the financial sector, although the number of experienced COBOL programmers is declining.
Thursday, July 30th, at the Gryphon Gallery at the University of Melbourne, saw the official launch of 'Spartan', the University's high-performance computing and cloud hybrid. Speakers at the launch included Dr Stephen Giugni, Director, Research Platform Services; Prof Margaret Sheil, Acting Vice-Chancellor of the University of Melbourne; Professor Richard Sinnott, Director, eResearch and Professor of Applied Computing Systems; Mr Bernard Meade, Head of Research Compute Services, Research Platform Services; and yours truly, in my role as HPC Support Engineer, Research Platform Services.
As I argued in my presentation, the great advantage of Spartan is that it is designed around what users need. Based on research from the previous general compute resource, Edward, most people wanted to submit lots of jobs with a relatively small core count and memory footprint, using data-parallel approaches, but some really needed large core counts with a fast interconnect. Putting the two types of user on the same system was not ideal. Also, engineers tend to want performance from a system, whereas managers want flexibility. Spartan provides both through its partitioning system, directing high-throughput jobs to virtualised cloud infrastructure and tightly-coupled parallel jobs to bare-metal nodes with a low-latency interconnect, as sketched below. I am convinced that this will be the architecture of future research computing.
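To make that concrete, here is a minimal sketch of what job submission looks like under this model, assuming Slurm as the scheduler; the partition names, resource figures, and the program name my_analysis are illustrative, not taken from Spartan's actual configuration.

```
#!/bin/bash
# Sketch of a high-throughput job: modest core count and memory,
# suited to a virtualised "cloud" partition (names illustrative).
#SBATCH --partition=cloud
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G
#SBATCH --time=02:00:00
srun ./my_analysis input.dat

# A tightly-coupled MPI job would instead target the bare-metal
# partition with the fast interconnect, along the lines of:
#   #SBATCH --partition=physical
#   #SBATCH --nodes=4
#   #SBATCH --ntasks-per-node=12
#   srun ./my_mpi_model
```

The point of the design is that both kinds of user write essentially the same job script; only the partition request changes, and the scheduler places the work on the hardware tier that suits it.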
Spartan's launch has received extensive media coverage, including on high-ranking sites such as HPC Wire, Gizmodo, and Delimiter. In addition to the aforementioned speakers, particular thanks must also go to Linh Vu, Daniel Tosello, and Chris Samuel for their engineering excellence in helping put the system together, and to Greg Sauter for his project management (and for his photography). Welcome to Spartan!
Due to underflow, overflow, and rounding, computers suffer numerical errors. These errors are highly significant: computers make them constantly and at great speed. Sometimes those errors cost millions of dollars, or directly lead to a tragic loss of life. Many of these errors are caused by the way that computers store numbers. The use of scientific notation, as implemented by the IEEE floating-point standards, is not only imprecise but also requires too many bits, which is costly in power, time, and money.
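A small Python example illustrates the imprecision; this shows general IEEE 754 binary floating-point behaviour, not any particular incident.

```python
# Ten additions of 0.1 do not sum to 1.0, because 0.1 has no exact
# representation in binary floating point (IEEE 754 doubles).
total = sum(0.1 for _ in range(10))
print(total)         # 0.9999999999999999
print(total == 1.0)  # False

# Subtracting nearly equal numbers (catastrophic cancellation)
# discards most of the significant digits that were stored.
print(1.0000001 - 1.0)  # close to, but not exactly, 1e-07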
Apropos the previous post, I am coming to the conclusion that universities are very strange places when it comes to password policies. Mind you, it shouldn't really come as much of a surprise; the choices of technology adopted are often so mind-bogglingly strange that one is tempted to conclude the decisions are more political than technical. Of course, that would never happen in the commercial world. All this aside, consider the password policy of a certain Victorian university.
Ars Technica has reported on a relatively small GPU-based Linux cluster which can crack standard eight-character MS-Windows passwords by brute force in under six hours. There are, of course, reasons and caveats. Firstly, as online servers will typically block repeated password attempts, the system is most effective against offline password hashes, which can then of course be used for online exploits.
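As a back-of-the-envelope check on that headline figure, the arithmetic is straightforward; the guess rate below is an assumed value for illustration, roughly of the order reported for such clusters, not a quote from the article.

```python
# Worst-case brute-force time for an eight-character password drawn
# from the 95 printable ASCII characters.
keyspace = 95 ** 8            # ~6.6e15 candidate passwords
guesses_per_second = 350e9    # assumed: 350 billion NTLM guesses/sec
hours = keyspace / guesses_per_second / 3600
print(f"{keyspace:.2e} candidates, ~{hours:.1f} hours worst case")
# ~5.3 hours, consistent with the "under six hours" claim
```

Note that this is the worst case for an unsalted, fast hash like NTLM; salted and deliberately slow hashes raise the cost per guess by orders of magnitude.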
Did you know you can bring down an entire HPC cluster with an old script? Well, this week I had such an experience. As the systems administrator of a seriously aging cluster serving over 800 postgraduate and postdoctoral researchers, "stress" is a normal part of daily life (for future reference: it's probably killing me).