@InProceedings{Jones13,
  title     = "{A} {P}ersonal {P}erspective on the {S}tate of {HPC} in 2013",
  author    = "Jones, Christopher C.R.",
  editor    = "Welch, Peter H. and Barnes, Frederick R. M. and Broenink, Jan F. and Chalmers, Kevin and Pedersen, Jan Bækgaard and Sampson, Adam T.",
  pages     = "263--270",
  booktitle = "{C}ommunicating {P}rocess {A}rchitectures 2013",
  isbn      = "978-0-9565409-7-3",
  year      = "2013",
  month     = "nov",
  abstract  = "This paper is fundamentally a personal perspective on the sad state of High Performance Computing (HPC, or what was once known as Supercomputing). It arises from the author's current experience in trying to find computing technology that will allow codes to run faster: codes that have been painstakingly adapted for efficient performance on parallel computing technologies since around 1990, and have delivered effective 10-fold increases in computing performance at 5-year HPC upgrade intervals, but for which the latest high-count multi-core processor options fail to deliver improvement. The presently available processors may as well not have the majority of their cores, since using them actually slows the code: hard-won budget must be squandered on cores that will not contribute. The present situation is not satisfactory: there are very many reasons why we need computational response, not merely throughput. There are a host of cases where we need a large, complex simulation to run in a very short time. A simplistic calculation based on the nominal performance of some of the big machines with vast numbers of cores would lead one to believe that such rapid computation would be possible. The nature of the machines and the programming paradigms, however, removes this possibility. Some of the ways in which the computer science community could mitigate the hardware shortfalls are discussed, along with a few more off-the-wall ideas about where greater compute performance might be found."
}