Microprocessor Design/Memory-Level Parallelism

Microprocessor performance is largely determined by how well work is organized to proceed in parallel across the processor's various units. Several forms of parallelization are used: pipelining executes instructions in parallel, while the SIMD (Single Instruction, Multiple Data) architecture processes data in parallel. Thread-level parallelism, in turn, formed the basis for multicore microprocessors. A multicore microprocessor combines one or more powerful processor cores with a number of auxiliary engines designed to process complex multimedia applications more efficiently in multithreaded mode. Architectures that support chip-level multiprocessing are widely regarded as the future of microprocessors, because they can reach very high throughput at moderate clock frequencies by executing many operations in parallel.

Memory-Level Parallelism


Memory-Level Parallelism (MLP) is the ability to perform multiple memory transactions at once. In many architectures, this manifests itself as the ability to perform a read and a write operation at the same time, although it also commonly appears as the ability to perform multiple reads at once. Performing multiple writes at once is rare, because of the risk of conflicts (two writes attempting to store different values to the same location).
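
As a rough illustration, the C sketch below contrasts a loop whose loads are independent of one another, so the memory system can keep several of them in flight at once, with a pointer-chasing loop whose loads are forced to complete one at a time. The function and type names are hypothetical and are not tied to any particular architecture.

#include <stddef.h>

/* Independent loads: none of the four reads in an iteration depends on
 * the result of another, so a processor that supports memory-level
 * parallelism can overlap them (and their cache misses). */
long sum_independent(const long *a, size_t n)
{
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (size_t i = 0; i + 3 < n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    return s0 + s1 + s2 + s3;
}

/* Dependent loads: each read needs the address produced by the previous
 * one, so the reads are serialized and MLP cannot help. */
struct node { struct node *next; long value; };

long sum_chain(const struct node *p)
{
    long s = 0;
    while (p != NULL) {
        s += p->value;
        p = p->next;   /* the next load cannot start until this one finishes */
    }
    return s;
}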

Notice that this is not the same as vectorized memory operations, such as reading 4 separate but contiguous 8-bit values in a single 32-bit read.
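
To make the distinction concrete, the following minimal sketch performs that kind of vectorized access: it fetches four contiguous 8-bit values with a single 32-bit load. This is one wider memory transaction, not several transactions in parallel. The helper name is illustrative, and memcpy is used here simply to avoid alignment and aliasing issues.

#include <stdint.h>
#include <string.h>

/* One 32-bit read that carries four packed 8-bit values: a single,
 * wider transaction rather than multiple simultaneous ones. */
uint32_t load_four_bytes(const uint8_t *p)
{
    uint32_t word;
    memcpy(&word, p, sizeof word);  /* single 32-bit memory transaction */
    return word;
}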