Table 3: Historical milestones in parallel computing architectures
The paradigm shift that brought Babbage's vision of parallelism to the common folk occurred with the change in emphasis from single-instruction, multiple-data (SIMD) or embarrassingly parallel tasks, in which the same code is run on multiple processors to explore different parameters, to multiple-instruction, multiple-data (MIMD) or inherently parallel tasks, in which multiple processors work collectively on a single problem by communicating with each other continually. The Message Passing Interface (MPI) (see Table 3) established a portable message-passing standard, designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core set of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several well-tested and efficient implementations of MPI, many of which are open source or in the public domain. These fostered the growth of a parallel software industry and encouraged the development of portable, scalable, large-scale parallel applications built on the MIMD paradigm. In particular, molecular dynamics simulations of large systems and quantum chemistry codes could only be realized with the shared-memory and message-passing architectures of the MIMD model.
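The SIMD/MIMD distinction above can be sketched in a few lines of code. The following is an illustrative analogy only, using Python's standard `multiprocessing` module rather than an actual MPI implementation (which would require an MPI library and launcher); the function names `simulate` and `partial_sum` are hypothetical stand-ins for real workloads.

```python
from multiprocessing import Pool, Pipe, Process

def simulate(param):
    """Stand-in for one independent run of the same code at one parameter value."""
    return param * param

def partial_sum(conn, chunk):
    """Worker sums its chunk, then sends the partial result back over a pipe,
    loosely mirroring an MPI send/receive followed by a reduction."""
    conn.send(sum(chunk))
    conn.close()

# Embarrassingly parallel (SIMD-like): identical code, different inputs,
# no communication between the workers.
with Pool(4) as pool:
    sweep = pool.map(simulate, range(8))

# Inherently parallel (MIMD-like): one task is divided among workers,
# which must communicate their partial results to form the answer.
data = list(range(100))
chunks = [data[0::2], data[1::2]]
pipes = [Pipe() for _ in chunks]
workers = [Process(target=partial_sum, args=(child, chunk))
           for (parent, child), chunk in zip(pipes, chunks)]
for w in workers:
    w.start()
total = sum(parent.recv() for parent, child in pipes)
for w in workers:
    w.join()
```

In a real MPI program the pipe-based exchange would instead be expressed with calls such as `MPI_Send`/`MPI_Recv` or a collective like `MPI_Reduce`, and the same binary would run as multiple communicating ranks.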