
A pipelined multi-threading parallelizing compiler tries to break up the sequence of operations inside a loop into a series of code blocks,

such that each code block can be executed concurrently on a separate processor. There are many pleasingly parallel problems that have such relatively independent code blocks, in particular systems using pipes and filters. For example, when producing live broadcast television, many times a second we need to (1) read a frame of raw pixel data from the image sensor, (2) do MPEG motion compensation on the raw data, (3) entropy compress the motion vectors and other data, (4) break up the compressed data into packets, (5) add the appropriate error correction and do an FFT to convert the data packets into COFDM signals, and (6) send the COFDM signals out the TV antenna. A pipelined multi-threading parallelizing compiler could assign each of these six operations to six different processors (perhaps arranged in a systolic array), inserting the appropriate code to forward the output of one processor to the next.

Difficulties

Automatic parallelization by compilers or tools is very difficult for the following reasons[3]:

dependence analysis is hard for code using indirect addressing, pointers, recursion, and indirect function calls;
loops have an unknown number of iterations;
accesses to global resources are difficult to coordinate in terms of memory allocation, I/O, and shared variables.

Workaround

Due to the inherent difficulties of full automatic parallelization, several easier approaches exist for producing a parallel program of higher quality. They are:

Allow programmers to add "hints" to their programs to guide compiler parallelization, such as HPF for distributed memory systems and OpenMP or OpenHMPP for shared memory systems.
Build an interactive system between programmers and parallelizing tools/compilers. Notable examples are Vector Fabrics' Pareon, SUIF Explorer (the Stanford University Intermediate Format compiler), the Polaris compiler, and ParaWise (formerly CAPTools).
Hardware-supported speculative multithreading.
Historical parallelizing compilers

See also: Automatic parallelization tool

Most research compilers for automatic parallelization consider Fortran programs,[citation needed] because Fortran makes stronger guarantees about aliasing than languages such as C. Typical examples are:

Paradigm compiler
Polaris compiler
Rice Fortran D compiler
SUIF compiler
Vienna Fortran compiler
