Submitted by lev_lafayette on Tue, 06/10/2014 - 02:39
Presentation to ICCS 2014 International Conference on Computational Science, Cairns, June 10, 2014
High performance computing is in increasing demand, especially with the need to conduct parallel processing on very large datasets, whether evaluated by volume, velocity, or variety. Unfortunately the necessary skills - from familiarity with the command line interface, job submission, and scripting, through to parallel programming - are not commonly taught at the level required by most researchers. As a result the uptake of HPC usage remains disproportionately low, with emphasis on system metrics taking priority, leading to a situation described as 'high performance computing considered harmful'. Changing this is not a problem of computational science but rather a problem for computational science, one which can only be resolved through a multi-disciplinary approach. The following example addresses the main issues in such teaching and thus makes an appeal to some universality in application which may be useful for other institutions.
For the past several years the Victorian Partnership for Advanced Computing (VPAC) has conducted a range of training courses designed to bring the capabilities of postgraduate researchers to a level of competence useful for their research. These courses have developed over this time, in part through providing a significantly wider range of content for varying skillsets, but more importantly by introducing some of the key insights from the discipline of adult and tertiary education in the context of the increasing trend towards lifelong learning. This includes an andragogical orientation, providing integrated structural knowledge, encouraging learner autonomy, self-efficacy, and self-determination, utilising appropriate learning styles for the discipline, utilising modelling and scaffolding for example problems (as a contemporary version of proximal learning), and following up with a connectivist mentoring and outreach program in the context of a culturally diverse audience.
Keywords: adult and tertiary education; high performance and scientific computing
Submitted by lev_lafayette on Fri, 05/23/2014 - 16:02
Some MS-Windows Win-32 Intel Fortran code was produced with Visual Studio. The user, working on a 3D optimization of bone structure, wanted the code ported to 64-bit GNU Fortran 90 on Linux, so that it would be suitable for the Abaqus Finite Element Analysis software and able to run on a cluster. This was in many ways a "first draft" modification of the code, and further development is planned. It illustrates a basic introduction to some relatively interesting differences within Fortran and (yet another) practical use of job arrays.
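The job array pattern mentioned above can be sketched as a Torque/PBS submission script. All names here are illustrative (the job name, resource requests, and input file naming are assumptions, not the user's actual setup); under Torque, the -t directive creates the array and each element receives its index in $PBS_ARRAYID.

```shell
#!/bin/bash
# Illustrative Torque/PBS job array: each element processes one input file,
# bone_1.inp .. bone_10.inp (hypothetical names).
#PBS -N bone_opt
#PBS -l nodes=1:ppn=1,walltime=01:00:00
#PBS -t 1-10

# Torque sets $PBS_ARRAYID for each array element; default to 1 so the
# script can also be inspected outside the scheduler.
i=${PBS_ARRAYID:-1}

# In practice this line would invoke the compiled Fortran binary, e.g.
#   ./bone_opt < bone_${i}.inp
echo "Processing bone_${i}.inp"
```

Submitted once with qsub, this queues ten independent jobs, which is considerably tidier than ten near-identical submission scripts.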
Submitted by lev_lafayette on Tue, 04/29/2014 - 01:10
In the organisation of one's life it's a good idea to make use of a scheduler - that is, a diary, a calendar, etc - as distinct from a to-do list, which will be visited at another time. This is the place for appointments and the like that should not be changed; not tasks or projects. One particularly popular implementation, given that it can be accessed anywhere one has Internet access, is Google Calendar.
Submitted by lev_lafayette on Thu, 04/24/2014 - 05:27
Schrodinger is one of the more popular licensed computational chemistry suites, offering a range of associated products. Installation is relatively easy, but does require that the sysop pays some attention to the process and makes a handful of modifications as needed for their particular environment, in this case, MPI, PBS, and CentOS Linux.
Firstly, being licensed software, installation requires a logon, which provides access to a tarball of the available suite of applications.
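The first step can be sketched as follows. The tarball name and the installer script are assumptions based on a typical release download, so check against the actual file obtained after logging on:

```shell
#!/bin/bash
# Hypothetical tarball name; the real one depends on the release downloaded
# from the Schrodinger site after logging on.
tarball="Schrodinger_Suites_2014-1_Linux-x86_64.tar"

if [ -f "$tarball" ]; then
    tar xvf "$tarball"
    # The extracted directory is assumed to contain an installer script.
    cd "${tarball%.tar}" && ./INSTALL
else
    echo "Download the suite tarball from the Schrodinger site first."
fi
```

The site-specific work (MPI, PBS, and CentOS adjustments) then happens during and after the installer's prompts.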
Submitted by lev_lafayette on Fri, 03/21/2014 - 10:08
For a very long time, Open MPI has described itself as "an open source, freely available implementation of both the MPI-1 and MPI-2 documents", which allows for parallel programming. The team has just released version 1.7.5, and can proudly announce that Open MPI is now fully MPI-3.0 compliant. This "feature release" will form the basis of the 1.8 series.
Submitted by lev_lafayette on Fri, 03/14/2014 - 03:30
Software Quality Assurance spans the entire software development process. This includes defining requirements and integration, architecture and design, coding conventions, code reuse, source code control and revision, code reviews, and the testing regimen.
Defining Requirements and Integration
Requirements definitions typically follow the procedures established in the Quality Management System for Project Management.
Submitted by lev_lafayette on Fri, 03/07/2014 - 06:04
There should be little doubt that the future of computing is a multicore future. If nothing else, the clock speed/heat trade-off provides a fundamental hardware tendency. But as is well recognised, parallel programming is not the easiest task in the world, hence the importance of teaching core concepts. One of these is Amdahl's Law and the subsequent Gustafson-Barsis Law. The following is an attempt to explain these concepts in an accessible and allegorical manner which educators and trainers may find useful.
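For reference, the two laws can be stated compactly. With P the parallelisable fraction of the work and N the number of processors, Amdahl's Law gives the speedup for a fixed problem size, while Gustafson-Barsis considers the problem size scaling with the machine:

```latex
% Amdahl's Law: speedup for a fixed workload, bounded by the serial fraction
S(N) = \frac{1}{(1 - P) + \frac{P}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - P}

% Gustafson-Barsis Law: scaled speedup, with s = 1 - P the serial fraction
% as measured on the parallel system
S(N) = N - s\,(N - 1) = s + P\,N
```

The contrast is the pedagogical point: under Amdahl even a small serial fraction caps the achievable speedup, whereas under Gustafson-Barsis the speedup grows linearly with N as the problem is scaled up.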
Submitted by lev_lafayette on Sat, 02/15/2014 - 09:35
A presentation to the Linux Users of Victoria Beginners Workshop, February 15, 2014
Submitted by lev_lafayette on Thu, 02/13/2014 - 05:25
Previous comments concerning VASP installs still largely apply.
1. It still performs ab-initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane wave basis set.
2. It still has a weird and frustrating license which is open source (if you pay them) but not free.
Submitted by lev_lafayette on Fri, 12/20/2013 - 03:49
NWChem is a suite of computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
tar xvf Nwchem-6.3.revision2-src.2013-10-17.tar.gz
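Before compiling, a handful of environment variables tell the NWChem build where the source tree lives and what to target. A minimal sketch (the path is illustrative, set it to wherever the tarball above was extracted):

```shell
# Illustrative location of the extracted source tree.
export NWCHEM_TOP=/usr/local/src/nwchem-6.3
# 64-bit Linux build target.
export NWCHEM_TARGET=LINUX64
# Build all modules; a subset can be named instead.
export NWCHEM_MODULES=all
# Enable MPI for parallel execution on the cluster.
export USE_MPI=y
```

The build itself is then typically driven from $NWCHEM_TOP/src with make nwchem_config followed by make.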