
Parallel Computing VIVA Questions

1) What is parallel computing? (pg. 1)
2) Give some applications of parallel computing. (pg. 1)
3) What is a multicomputer? (pg. 2)
4) What is a multiprocessor? (pg. 2)
5) Differentiate between data parallelism and functional parallelism with an example. (pg. 10)
6) Define Data Clustering. (pg. 14)
7) Explain Pipelining. (pg. 12)
8) Identify the four distinct paths for the development of applications software for parallel computers. (pg. 17)
9) Give the difference between shared and switched media. (pg. 28)
10) List the various criteria for switch network topologies. (pg. 29)
11) What is a Processor Array? (pg. 37)
12) List the shortcomings of Processor Arrays. (pg. 42)

13) Illustrate the difference between Mutual Exclusion and Barrier Synchronization. (pg. 45)
14) Explain MISD with an example. (pg. 55)

15) Differentiate between domain decomposition and functional decomposition. (pg. 65)
16) List the goals of Agglomeration. (pg. 68, 69)
17) Define local communication and global communication. (pg. 67)
18) What do you mean by mapping in Foster's methodology? (pg. 70)
19) What is the message-passing model? (pg. 94)
20) List the advantages of the message-passing model. (pg. 95)
21) Define communicator. (pg. 99)

22) What are the minimum two functions needed for an MPI program to execute? (pg. 99, 101)

23) Define collective communication with an example. (pg. 104)
24) Give the advantage of using the all-pairs shortest-path problem. (pg. 138)
25) Differentiate between a directed graph and a weighted graph. (pg. 138)
26) What fields are present in an MPI_Status variable? (pg. 147)
27) Define deadlock. (pg. 148)
28) What are the two conditions for the occurrence of deadlock? (pg. 148, 149)
29) Define graph. (pg. 137)
30) Define speedup. (pg. 174)
31) Define efficiency. (pg. 174)
32) Explain the Manager/Worker Paradigm. (pg. 218)

33) List the advantage and disadvantage of allocating only one task to a worker. (pg. 218, 219)
34) Define SPMD. (pg. 219)
35) Give the three phases involved in the manager process. (pg. 223)

36) Differentiate between blocking and non-blocking operations. (pg. 223, 224)
37) Give the advantage of using non-blocking send and receive. (pg. 224)

38) What are the ways in which an m × n matrix can be decomposed? (pg. 180)
39) List the natural ways to distribute the vectors in matrix multiplication. (pg. 184)
40) Illustrate the use of a topology in a communicator. (pg. 203)

41) Give the functionality of all-gather communication in matrix multiplication.
42) Give the functionality of all-to-all communication in matrix multiplication.
43) What is the Karp-Flatt metric?

The Karp-Flatt metric is a measure of the parallelization of code in parallel processor systems. This metric exists in addition to Amdahl's law and Gustafson's law as an indication of the extent to which a particular computer code is parallelized. It was proposed by Alan H. Karp and Horace P. Flatt in 1990.
Given a parallel computation exhibiting speedup ψ on p processors, where p > 1, the experimentally determined serial fraction e is defined to be the Karp-Flatt metric:

e = (1/ψ − 1/p) / (1 − 1/p)

The smaller the value of e, the better the parallelization.
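As a small illustrative sketch (the measured speedup and processor count below are assumed example values, not taken from the text), the serial fraction can be computed directly from a measured speedup:

```c
#include <stdio.h>

/* Karp-Flatt metric: experimentally determined serial fraction e,
   computed from a measured speedup psi on p processors (p > 1). */
double karp_flatt(double psi, int p)
{
    return (1.0 / psi - 1.0 / (double)p) / (1.0 - 1.0 / (double)p);
}

int main(void)
{
    /* Assumed measurement: a speedup of 6.0 observed on 8 processors. */
    printf("e = %.3f\n", karp_flatt(6.0, 8));  /* prints e = 0.048 */
    return 0;
}
```

Computing e for a series of increasing processor counts indicates whether the loss of speedup comes from the inherently serial fraction (e stays roughly constant) or from growing parallel overhead (e increases with p).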

44) What is Amdahl's law?

Amdahl's law, also known as Amdahl's argument, is named after computer architect Gene Amdahl and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors. The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining portion of 19 hours (95%) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20.
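The bound can be written as S(p) = 1 / (f + (1 − f)/p), where f is the sequential fraction. A minimal sketch reproducing the 20-hour example above (the processor counts chosen are illustrative assumptions):

```c
#include <stdio.h>

/* Amdahl's law: predicted maximum speedup on p processors when a
   fraction f of the work is inherently sequential. */
double amdahl_speedup(double f, int p)
{
    return 1.0 / (f + (1.0 - f) / (double)p);
}

int main(void)
{
    double f = 0.05;  /* 1 hour of a 20-hour job is sequential */
    printf("p = 10:   speedup = %.2f\n", amdahl_speedup(f, 10));   /*  6.90 */
    printf("p = 100:  speedup = %.2f\n", amdahl_speedup(f, 100));  /* 16.81 */
    printf("p -> inf: speedup -> %.2f\n", 1.0 / f);                /* 20.00 */
    return 0;
}
```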

45) What is Gustafson-Barsis' law?

Gustafson's law (also known as Gustafson-Barsis' law) is a law in computer science which says that problems with large, repetitive data sets can be efficiently parallelized. Gustafson's law contradicts Amdahl's law, which describes a limit on the speedup that parallelization can provide. Gustafson's law was first described by John L. Gustafson and his colleague Edwin H. Barsis:

S(P) = P − α(P − 1)

where P is the number of processors, S is the speedup, and α is the non-parallelizable fraction of the process. Gustafson's law addresses the shortcomings of Amdahl's law, which does not scale the availability of computing power as the number of machines increases. Gustafson's law proposes that programmers set the size of problems to use the available equipment to solve problems within a practical fixed time. Therefore, if faster (more parallel) equipment is available, larger problems can be solved in the same time.
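A minimal sketch of the scaled-speedup formula (the processor count and serial fraction below are assumed, illustrative values):

```c
#include <stdio.h>

/* Gustafson-Barsis' law: scaled speedup S = P - alpha * (P - 1),
   where alpha is the non-parallelizable fraction of the parallel run. */
double gustafson_speedup(int p, double alpha)
{
    return (double)p - alpha * (double)(p - 1);
}

int main(void)
{
    /* Assumed values: 5% serial fraction on 64 processors. */
    printf("S = %.2f\n", gustafson_speedup(64, 0.05));  /* S = 60.85 */
    return 0;
}
```

Whereas Amdahl's bound saturates at 1/f, this scaled speedup keeps growing roughly linearly with P, because the problem size is assumed to grow with the machine.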

46) Give the syntax of the MPI_Send and MPI_Recv functions. (pg. 146, 147)
47) Give the syntax of the MPI_Bcast function. (pg. 122)
48) Give the syntax of the MPI_Reduce function. (pg. 105, 106)
49) Give the syntax of the MPI_Gatherv function. (pg. 193, 194)
50) Give the syntax of the MPI_Scatterv function. (pg. 192, 193)
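For quick revision of questions 46-48, the following is a minimal, self-contained sketch (not taken from the textbook; the data values are arbitrary) showing the call syntax of MPI_Send, MPI_Recv, MPI_Bcast and MPI_Reduce. MPI_Gatherv and MPI_Scatterv additionally take per-process count and displacement arrays; see the cited pages.

```c
/* Compile with mpicc, run with: mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, n = 0, local, sum;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* MPI_Bcast(buffer, count, datatype, root, comm) */
    if (rank == 0) n = 100;
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* MPI_Send(buf, count, datatype, dest, tag, comm)
       MPI_Recv(buf, count, datatype, source, tag, comm, &status) */
    if (rank == 0)
        MPI_Send(&n, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&n, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);

    /* MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm) */
    local = rank;
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum of ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}
```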
