Figure 2: HPC paradigms – current and future; Moore's law and the slow-down due to the power wall.
A summary of historical developments in parallel and high-performance
computing architectures is given in Table 3. Early ideas about
instructions for computation, including the concept of parallelism in
computing, can be traced back to Charles Babbage (see Table 3). Scientific
computing has benefitted from the advances in chip architecture that
sustained Moore's law scaling from the 1980s to around 2010
(Figure 2, right). However, even during this golden age of increasing
clock speeds and doubling of computational speed every 18 months in
single-core architectures, high-performance computing broke the shackles
of serial (and vectorized) computing to embrace parallel computing as a
mainstream route to solve computational problems. The switch to parallel
microprocessors was a game-changer in the history of computing [47]
(see http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html).
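As a brief illustration of this shift from serial to parallel execution (a sketch added here, not drawn from the cited report), a single OpenMP directive can distribute a numerical quadrature loop over the cores of a multicore processor; the interval count and compiler invocation below are illustrative assumptions.

/* Minimal sketch: numerical integration of pi, where one OpenMP directive
   turns the serial loop into a multicore loop. Compile with, e.g.,
   gcc -fopenmp pi.c -o pi; without OpenMP the pragma is ignored and the
   loop simply runs serially. */
#include <stdio.h>

int main(void) {
    const long n = 100000000;          /* number of quadrature intervals (assumed) */
    const double h = 1.0 / (double)n;  /* interval width */
    double sum = 0.0;

    /* Iterations are divided among threads; the reduction clause combines
       the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        double x = (i + 0.5) * h;
        sum += 4.0 / (1.0 + x * x);    /* midpoint rule for 4/(1+x^2) on [0,1] */
    }

    printf("pi is approximately %.12f\n", h * sum);
    return 0;
}

The point of the sketch is only that the serial loop body is left unchanged while the directive expresses the available parallelism.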
Advances in parallel hardware and software have, in turn, propelled
advances in multiphysics and multiresolution simulations. This convergence of
high-performance computing and multiscale modeling has transformed
parallel algorithms (see Table 2), which are the engines of multiphysics
modeling.