# Teaching Amdahl's Law and the Gustafson-Barsis Law


There should be little doubt that the future of computing is a multicore future. If nothing else, the clock speed/heat trade-off establishes a fundamental hardware tendency. But as is well recognised, parallel programming is not the easiest task in the world, hence the importance of teaching its core concepts. One of these is Amdahl's Law and the subsequent Gustafson-Barsis Law. The following is an attempt to explain these concepts in an accessible and allegorical manner which educators and trainers may find useful.

Amdahl's Law, first presented in 1967, effectively argues that there is a limit to the performance improvement of a program no matter how many additional processors are added to the task. At first consideration this may seem surprising. Surely, one might think, if two processors are allocated to a task it should take half as long to complete as with one? In theory, of course, this would be true, assuming the task can be conducted in parallel. In programming, just as in reality, this is not always the case.

The metaphor being considered here is driving from Melbourne to Sydney - insert other preferred cities as desired. Because this is an interesting computational task, the more interesting route is being taken, via the coastline rather than cross-country. For the sake of argument assume this journey is going to take 16 hours (it's actually somewhat less). Yes, a journey is taken in serial - but the point here is to illustrate the task rate.

At the halfway point of this journey, after 8 hours of driving, there is a very clever mechanic just outside Mallacoota who has developed a dual-core engine which, as a remarkable engineering feat, allows a car to travel twice as fast (with no loss in safety etc). To use the computing metaphor, it completes the task twice as quickly. The mechanic, clearly a very skilled individual, is able to remove the old single-core engine and replace it with a new dual-core instantly. Perhaps they are a wizard rather than a mechanic.

In any case, the second half of the journey with the new dual-core engine now only takes 4 hours rather than the expected 8, and the total journey takes 12 hours rather than the expected 16. With the development of quad-core and octo-core engines the following journey times result.

Cores | To Mallacoota | To Sydney | Total Time |
---|---|---|---|
1 | 8 hours | +8 hours | 16 hours |
2 | 8 hours | +4 hours | 12 hours |
4 | 8 hours | +2 hours | 10 hours |
8 | 8 hours | +1 hour | 9 hours |
.. | .. | .. | .. |
Inf | 8 hours | +nil | 8 hours |
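The table above can be reproduced with a short calculation - a minimal sketch in Python (the function name is illustrative), treating the first leg as serial and dividing the second leg among the engine's cores:

```python
def journey_time(cores, serial_hours=8, parallel_hours=8):
    """Total Melbourne-Sydney time: the first leg stays serial,
    the second leg is divided among the engine's cores."""
    return serial_hours + parallel_hours / cores

for cores in (1, 2, 4, 8):
    print(f"{cores} core(s): {journey_time(cores)} hours")
# However large `cores` grows, the total can never drop below
# the 8 serial hours of the first leg.
```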

Whilst the total journey time is reduced with the addition of new multicore engines, even an infinite-core engine cannot reduce the travel time below the time spent on the portion of the journey conducted with the single-core engine.

In very general terms, Amdahl's Law states that the total maximum improvement to a system is limited by the proportion of it that can be improved. In computer programming, because some part of the task runs in serial, there is a maximum limit to the speedup, set by the time required for the sequential portion - no matter how many processors are thrown at the problem.

The maximum speedup is:

S(N) = 1 / ((1 - P) + (P / N))

Where N is the number of processors, P is the proportion of a program that can be made parallel, and (1 - P) is the proportion that cannot be parallelised (and therefore remains serial).
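The formula translates directly into code - a minimal sketch, with an illustrative function name:

```python
def amdahl_speedup(p, n):
    """Maximum speedup under Amdahl's Law for a program whose
    parallelisable fraction p runs on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# With half the program parallelisable (P = 0.5), the speedup
# approaches but never reaches 2, no matter how many cores:
print(amdahl_speedup(0.5, 2))   # 1.333...
print(amdahl_speedup(0.5, 8))   # 1.777...
```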

Parallel programming is a complicated affair that incurs some serial overhead. Not only are there serial tasks within a program, the very act of making a program parallel involves serial overhead, such as start-up time, synchronisation, data communications, and so forth. Therefore, all parallel programs are subject to Amdahl's Law and are therefore limited in their total performance improvement, no matter how many cores they run on. The following graphic, from Wikipedia, illustrates these limits.

Whilst this certainly seemed disappointing at the time, some twenty years later John L. Gustafson and Edwin H. Barsis noted that Amdahl's Law assumed a computational problem of fixed dataset size, which certainly isn't the case in the real world. Assuming that the complexity is within the serial overhead (following Minsky's Conjecture), the bigger the dataset, the smaller the proportion of the program that will be serial. Or, to use the car metaphor: why stop at Sydney? You have a multicore car now, just keep going! For their elegant and practical solution they won the 1988 Gordon Bell Prize. Let's review that table again.

Cores | To Mallacoota | To Sydney | To Brisbane | To Rockhampton | Total Time |
---|---|---|---|---|---|
1 | 8 hours | +8 hours | +8 hours | +8 hours | 32 hours |
2 | 8 hours | +4 hours | +4 hours | +4 hours | 20 hours |
4 | 8 hours | +2 hours | +2 hours | +2 hours | 14 hours |
8 | 8 hours | +1 hour | +1 hour | +1 hour | 11 hours |
.. | .. | .. | .. | .. | .. |
Inf | 8 hours | +nil | +nil | +nil | 8 hours |
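The shrinking serial proportion can also be sketched numerically - here the fixed 8-hour serial leg is followed by an ever-longer parallelisable stretch. This is an illustration of the scaled-speedup idea in journey terms, not the authors' original formulation, and the names are illustrative:

```python
def scaled_speedup(cores, serial_hours=8, parallel_hours=8):
    """Speedup over a single core as the parallel portion of the
    journey grows while the serial leg stays fixed."""
    single = serial_hours + parallel_hours
    multi = serial_hours + parallel_hours / cores
    return single / multi

# Lengthening the parallel leg (driving on to Brisbane,
# Rockhampton, and beyond) pushes the 8-core speedup ever
# closer to the full factor of 8:
for parallel in (8, 24, 72):
    print(parallel, scaled_speedup(8, parallel_hours=parallel))
```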

As can be noted, the serial overhead becomes proportionally smaller the further the distance travelled. Thus, whilst Amdahl's Law is certainly true in a fixed sense, data problems are not fixed, and the advantages of parallelisation can be achieved by making use of the Gustafson-Barsis Law. It should also be fairly clear how multicore computing, parallel programming, and big data converge into a trajectory for the future of computing.

- lev_lafayette's blog