
VALLIAMMAI ENGINEERING COLLEGE


SRM Nagar, Kattankulathur 603 203

DEPARTMENT OF
COMPUTER SCIENCE AND ENGINEERING
QUESTION BANK

VIII SEMESTER

CS6801 - Multi-Core Architectures and Programming

Regulation 2013

Academic Year 2016-17

Prepared by

Mr. L. Karthikeyan, Assistant Professor / CSE

Mr. M. Mayuranathan, Assistant Professor / CSE



VALLIAMMAI ENGINEERING COLLEGE


SRM Nagar, Kattankulathur 603203.

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Year & Semester : IV Year / VIII Semester


Subject Code : CS6801
Subject Name : Multi-Core Architectures and Programming
Degree & Branch : B.E CSE
Staff in charge : Mr. M. Mayuranathan and Mr. L. Karthikeyan

S.No QUESTIONS COMPETENCE LEVEL


UNIT I MULTI-CORE PROCESSORS
SYLLABUS
Single core to Multi-core architectures - SIMD and MIMD systems - Interconnection networks -
Symmetric and Distributed Shared Memory Architectures - Cache coherence - Performance Issues -
Parallel program design.
1. Define Vector Instruction. Remember BTL 1

2. Compare symmetric shared memory architecture with distributed shared memory architecture. Evaluate BTL5
3. Generalize the factors involved in increasing the operating frequency of the processor. Create BTL6
4. Discuss the issues involved in handling performance. Understand BTL2

5. Show and define an SIMD system. Apply BTL3

6. Compose a definition of an MIMD system. Create BTL6

7. Express NUMA with a neat sketch. Understand BTL2

8. Point out interconnection networks and their types. Analyse BTL4

9. Point out the toroidal mesh with a neat diagram. Analyse BTL4

10. Give definition of latency and bandwidth. Understand BTL2

11. Show a neat diagram of the structural model of a centralized shared memory Apply BTL3
multiprocessor.

12. Define the cache coherence protocol and its types. Remember BTL1

13. Define the directory-based protocol. Remember BTL1

14. Point out snooping. Analyse BTL4

15. Give the performance characteristics of the write-update and write-invalidate Understand BTL2
protocols.
16. Tell the disadvantages of symmetric shared memory architecture. Remember BTL1

17. Define Agglomeration or aggregation. Remember BTL1

18. Compare single-core and multi-core CPUs. Evaluate BTL5

19. Define false sharing. Remember BTL1

20. Show the mathematical formulas for the speedup and efficiency of a parallel Apply BTL3
program.
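Note: for quick reference against question 20 above (and Part B question 11 on Amdahl's law), a common textbook formulation, not prescribed wording from this question bank, is

    S = \frac{T_{\mathrm{serial}}}{T_{\mathrm{parallel}}}, \qquad E = \frac{S}{p} = \frac{T_{\mathrm{serial}}}{p \, T_{\mathrm{parallel}}}

and, for a program whose inherently serial fraction is f running on p cores, Amdahl's law bounds the speedup as

    S(p) \le \frac{1}{f + (1 - f)/p}.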
PART- B
1 i) Summarize the evolution from single-core to multi-core Understand BTL2
architecture.
ii) Compare single-core and multi-core processors.
2 Discuss SIMD. Understand BTL2
3 Describe MIMD Remember BTL1

4 Illustrate and discuss the shared memory interconnect. Apply BTL3

5 Explain Distributed Memory Interconnect. Analyze BTL4

6 Explain Symmetric shared Memory Architecture. Analyze BTL4

7 Discuss Distributed Shared Memory Architecture. Understand BTL2

8 Illustrate Cache Coherence. Apply BTL3

9 Identify the Performance issues and discuss briefly. Remember BTL1

10 Explain Directory based cache coherence protocol Analyze BTL4

11 Examine Amdahl's law. Remember BTL1

12 Describe the Serial program. Remember BTL1

13 Generalize the Snooping protocol briefly. Create BTL6



14 Summarize parallelizing the serial program. Evaluate BTL5

UNIT II PARALLEL PROGRAM CHALLENGES


SYLLABUS

Performance - Scalability - Synchronization and data sharing - Data races - Synchronization
primitives (mutexes, locks, semaphores, barriers) - deadlocks and livelocks - communication between
threads (condition variables, signals, message queues and pipes).
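Note: as a revision aid for the synchronization primitives named above, the following is a minimal C/Pthreads sketch that protects a shared counter with a mutex; the thread count, iteration count and file organisation are illustrative assumptions, not prescribed material.

/* Minimal sketch: a mutex-protected critical region removes the data race on a shared counter. */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define INCREMENTS  100000

static long counter = 0;                                   /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* synchronization primitive */

static void *worker(void *arg) {
    for (int i = 0; i < INCREMENTS; i++) {
        pthread_mutex_lock(&lock);    /* enter the critical region */
        counter++;                    /* only one thread updates the counter at a time */
        pthread_mutex_unlock(&lock);  /* leave the critical region */
    }
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    printf("counter = %ld (expected %d)\n", counter, NUM_THREADS * INCREMENTS);
    return 0;
}

A build command such as cc -pthread counter.c would typically be used; removing the lock/unlock calls reintroduces the kind of data race discussed in the questions below.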

1 Define algorithmic complexity. Remember BTL1

2 Tell about out-of-order execution in detail. Remember BTL1

3 Point out hardware prefetching. Analyze BTL4

4 Give the Definition of Software prefetching. Understand BTL2

5 Define Work imbalance. Remember BTL1

6 Generalize hot or contended locks. Create BTL6

7 Give the definition of oversubscription. Understand BTL2

8 Tell about the process of priority inversion. Create BTL6

9 Assess the definition of data races. Evaluate BTL5

10 Assess the definition of synchronization. Evaluate BTL5

11 Define region of code. Remember BTL1

12 Give the definition of deadlock. Understand BTL2

13 Define livelock Remember BTL1

14 Examine the condition variable. Apply BTL3

15 Complete the definition of a signal. Apply BTL3

16 Point out an event. Analyze BTL4

17 Examine the Message queue. Apply BTL3



18 Point out named pipes. Analyze BTL4

19 Give an example of two threads in deadlock. Understand BTL2


20 Define mutex, critical region and semaphores. Remember BTL1
PART- B
1 Discuss in detail the importance of algorithmic complexity. Understand BTL2
2 Explain in outline how structure reflects in performance. Analyze BTL4
3 Describe briefly how data structures play a vital role in performance, Remember BTL1
with an example.
4 Describe briefly superlinear scaling. Remember BTL1
5 Give an outline of the scaling of library code. Understand BTL2

6 Write in detail and summarize the hardware constraints applicable to Evaluate BTL5
improving scaling.

7 Illustrate the operating system constraints to scaling. Apply BTL3


8 Give a brief list and details of the tools used for detecting data Understand BTL2
races.
9 Generalize in detail about data races and how to overcome them. Create BTL6

10 Describe in detail the mutex and semaphore. Remember BTL1


11 Explain briefly the spin lock and the reader-writer lock. Analyze BTL4
12 Describe in detail the condition variable in communication Remember BTL1
between threads.
13 Explain and write a short note on signals, events and message queues. Analyze BTL4
14 Illustrate and explain named pipes. Apply BTL3

UNIT III SHARED MEMORY PROGRAMMING WITH OpenMP


SYLLABUS

OpenMP Execution Model - Memory Model - OpenMP Directives - Work-sharing Constructs -
Library functions - Handling Data and Functional Parallelism - Handling Loops - Performance
Considerations.
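Note: as a revision aid for the execution model and work-sharing constructs listed above, a minimal C/OpenMP sketch (the array size and loop body are arbitrary assumptions, not prescribed material):

/* Minimal sketch: a parallel region with a work-sharing loop and a reduction. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

static double a[N];

int main(void) {
    double sum = 0.0;

    /* The iterations of the loop are divided among the team of threads;
       reduction(+:sum) gives each thread a private copy of sum that is
       combined when the construct ends. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        sum += a[i];
    }

    printf("max threads = %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}

A build command such as cc -fopenmp demo.c is typical; the number of threads is usually controlled through the OMP_NUM_THREADS environment variable or the num_threads clause.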

1 Compose the statement that is termed the initial task region. Create BTL6


2 Point out and list the effects of the cancel construct. Analyze BTL4
3 When will a cancellation point construct trigger? Remember BTL1
4 Define the term thread private memory. Remember BTL1
5 Differentiate between the shared memory model and the message passing Understand BTL2
model.

6 Give a short note on private variables. Understand BTL2


7 Point out the flush-set. Analyze BTL4
8 Express how to enable consistency between two threads' temporary Understand BTL2
views and memory.
9 Order the criteria for the sentinel of conditional compilation. Analyze BTL4

10 Examine how the ICV stored values affect a loop region. Apply BTL3
11 Identify how the ICV stored values affect program execution. Remember BTL1
12 List the restrictions on arrays. Remember BTL1
13 Give the list of restrictions on the parallel construct. Understand BTL2
14 Show the list of restrictions on work-sharing constructs. Apply BTL3
15 Show the list of restrictions on the sections construct. Apply BTL3
16 Describe simple lock routines. Remember BTL1
17 Examine nestable lock routines. Remember BTL1
18 Assess the pragma. Evaluate BTL5
19 Compose a definition of the term shared variable in an execution context. Create BTL6
20 Conclude how the run-time system knows how many threads to create. Evaluate BTL5
PART- B
1 Generalize briefly the OpenMP execution model. Create BTL6
2 Explain the types of shared memory models. Evaluate BTL5
3 i) Discuss the directive format. (8) Understand BTL2

ii) Discuss conditional compilation(8)


4 Collect all the information about internal control variables. (16) Remember BTL1

5 i) Explain the Array section(4) Analyze BTL4


ii) Explain the Canonical loop form(6)
iii) Explain the Parallel construct(6)
6 i) Describe the loop construct. (8) Remember BTL1
ii) Describe the sections constructs. (8)
7 i) Discuss single construct(8) Understand BTL2
ii) Discuss workshare construct(8)
8 i) Illustrate the runtime library definitions(8) Apply BTL3
ii) Illustrate the Execution environment routines(8)

9 i) Examine the Lock routines(8) Remember BTL1


ii) Examine the Portable timer routines(8)
10 Explain briefly about the General Data parallelism. Analyze BTL4

11 Describe briefly about the Functional parallelism. Understand BTL2


12 Describe briefly about the Handling loops. Understand BTL2
13 Illustrate briefly about the Performance considerations Apply BTL3
14 i) Explain the Nowait clause(8) Analyze BTL4
ii) Explain the Single Pragma(8)

UNIT IV DISTRIBUTED MEMORY PROGRAMMING WITH MPI


SYLLABUS
MPI program execution - MPI constructs - libraries - MPI send and receive - Point-to-point
and Collective communication - MPI derived datatypes - Performance evaluation.
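Note: as a revision aid for MPI program execution and point-to-point communication (including Part A question 4 on compiling and running mpi_hello.c), a minimal C sketch; the message text and process counts are illustrative assumptions:

/* mpi_hello.c - each non-root process sends a greeting to process 0. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    int rank, size;
    char msg[64];

    MPI_Init(&argc, &argv);                 /* start up MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (rank != 0) {
        snprintf(msg, sizeof msg, "Hello from process %d of %d", rank, size);
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    } else {
        printf("Hello from process 0 of %d\n", size);
        for (int src = 1; src < size; src++) {
            MPI_Recv(msg, (int)sizeof msg, MPI_CHAR, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("%s\n", msg);
        }
    }

    MPI_Finalize();                         /* shut down MPI */
    return 0;
}

A typical build and run (exact commands depend on the installation) would be mpicc mpi_hello.c -o mpi_hello followed by mpiexec -n 4 ./mpi_hello.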

1 Define the term MPI Remember BTL1


2 Give the meaning of the term collective communication. Understand BTL2
3 What is the purpose of a wrapper script? Create BTL6
4 Tell how to compile and execute an mpi_hello.c program in the MPI Remember BTL1
environment.
5 What are the functions in MPI to initiate and terminate a computation, Remember BTL1
identify processes, and send and receive messages?
6 List the different datatype constructors Analyze BTL4
7 Complete the details about communicators in MPI. Apply BTL3
8 Point out how to remove a group. Analyze BTL4
9 List the dimensions of the array in a distributed array. Remember BTL1
10 Complete the steps to create a Cartesian constructor. Apply BTL3
11 Complete the list of interfaces used to create a distributed graph Apply BTL3
topology.
12 List the features of blocking and non-blocking point-to-point Remember BTL1
communication.

13 List some predefined reduction operators in MPI Evaluate BTL5


14 Examine MPI_Allreduce and its representations. Apply BTL3
15 Point out the term broadcast in collective communication. Analyze BTL4
16 Assess and list the functions of group accessors. Evaluate BTL5
17 How do you plan to represent any collection of data items in MPI? Create BTL6
18 Predict how elapsed time is calculated in MPI (see the timing sketch Understand BTL2
after this list).
19 Express the term linear speedup Understand BTL2

20 Tell about strongly and weakly scalable programs. Remember BTL1
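Note: relating to question 18 above, elapsed time in MPI is normally measured with MPI_Wtime(); a minimal sketch in which the timed loop is only a placeholder:

/* Sketch: timing a section of code with MPI_Wtime. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    MPI_Barrier(MPI_COMM_WORLD);           /* line the processes up before timing */
    double start = MPI_Wtime();

    double s = 0.0;
    for (long i = 1; i <= 10000000L; i++)  /* placeholder work to be timed */
        s += 1.0 / i;

    double finish = MPI_Wtime();
    printf("local sum %f computed in %e seconds\n", s, finish - start);

    MPI_Finalize();
    return 0;
}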


PART- B
1 i) Describe MPI program execution. (8) Remember BTL1
ii) Describe MPI_Init and MPI_Finalize. (8)
2 Explain briefly about the MPI Program. Analyze BTL4
3 i) Describe the datatype constructors. (8) Remember BTL1
ii) Discuss the subarray datatype constructor. (8)
4 i) Explain the Distributed array datatype constructor(6) Evaluate BTL5
ii) Explain the Cartesian constructor(6)
iii) Explain the Distributed graph constructor(4)
5 i) Generalize the group of processes. (8) Create BTL6
ii) Explain the Virtual topology(8)
6 i) Describe the Attribute Caching(8) Understand BTL2
ii) Discuss about the Communicators(8)

7 Illustrate and explain about the MPI Send. Apply BTL3


8 Describe about the MPI Receive. Remember BTL1
9 Describe about the Point-to-Point communication Remember BTL1
10 Discuss about the Collective communication Understand BTL2
11 i) Demonstrate and compare point-to-point communication Apply BTL3
and collective communication.
ii) Discuss scatter and gather.
12 Explain about the MPI derived datatypes Analyze BTL4
13 Identify the Performance evaluation in detail. Remember BTL 1
14 Explain about the Tree-structured communication Analyze BTL4
UNIT V PARALLEL PROGRAM DEVELOPMENT 9
SYLLABUS
Case studies - n-Body solvers - Tree Search - OpenMP and MPI implementations and comparison.
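Note: as a revision aid for the basic n-body solver questions below, a C sketch of the O(n^2) all-pairs force computation that the OpenMP and MPI versions parallelize; the particle layout and two-dimensional setting are illustrative assumptions, not the textbook's exact code:

/* Sketch of the basic (all-pairs) n-body force computation in two dimensions. */
#include <math.h>

#define G 6.673e-11   /* gravitational constant */

struct particle { double mass, x, y, vx, vy, fx, fy; };

/* Accumulate, for every particle i, the gravitational force exerted by all j != i. */
void compute_forces(struct particle *p, int n) {
    for (int i = 0; i < n; i++) {
        p[i].fx = p[i].fy = 0.0;
        for (int j = 0; j < n; j++) {
            if (j == i) continue;
            double dx   = p[j].x - p[i].x;
            double dy   = p[j].y - p[i].y;
            double dist = sqrt(dx * dx + dy * dy);
            double f    = G * p[i].mass * p[j].mass / (dist * dist * dist);
            p[i].fx += f * dx;   /* x and y components of the attractive force */
            p[i].fy += f * dy;
        }
    }
}

The reduced solver exploits Newton's third law to compute each pair only once; a basic OpenMP parallelization simply places a parallel for directive on the outer loop.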

1 Tell the input and output parameters of the n-body problem. Remember BTL1
2 Tell about the reduced algorithm. Remember BTL1
3 Point out the Pthreads Loop_schedule functionality in parallel processing. Analyze BTL4
4 Discuss the different data structures that can be adopted in a process. Understand BTL2
5 Tell the two phases for the computation of forces. Remember BTL1

6 Point out the term ring pass. Analyze BTL4


7 Demonstrate a graph. Apply BTL3
8 Examine the directed graph. Apply BTL3
9 Explain why a digraph is used in the travelling salesperson problem. Evaluate BTL5
10 Show how to find a least-cost tour in TSP. Apply BTL3
11 Give an account of how the function push_copy is used in TSP. Understand BTL2

12 Tell the global variables for recursive DFS? Remember BTL1


13 Define the term Pthreads or POSIX Thread Remember BTL1
14 Formulate a statement on the different categories of Pthreads. Create BTL6
15 Point out the reasons for the parameter threads_in_cond_wait used Analyze BTL4
in tree search.
16 Tell the modes of the message passing interface for send and its Remember BTL1
functions.
17 Discuss about My_avail_tour_count functions Understand BTL2
18 Discuss about Pthread_mutex_trylock Evaluate BTL5
19 Write about fulfill_request functions. Create BTL6
20 Distinguish between MPI_Pack and MPI_Unpack. Understand BTL2
PART- B

1 Generalize the two serial programs. Create BTL6

2 Explain parallelizing the n-body solvers. Analyze BTL4


3 Explain parallelizing the basic solver using OpenMP. Evaluate BTL5
4 Describe parallelizing the reduced solver using OpenMP. Understand BTL2
5 Explain parallelizing the basic solver using MPI. Analyze BTL4
6 Explain parallelizing the reduced solver using MPI. Analyze BTL4

7 Examine the Performance of the MPI solvers. Apply BTL3


8 Express in detail recursive DFS. Understand BTL2
9 Express in detail non-recursive DFS. Remember BTL1
10 Examine the data structures for the serial implementation. Apply BTL3
11 Express in detail the static parallelizing of tree search using Pthreads. Remember BTL1
12 i) Describe the dynamic parallelizing of tree search using Understand BTL2
Pthreads(8)
ii) Describe the termination(8)
13 i) Describe evaluating the Pthreads tree. (8) Remember BTL1
ii) Describe the search program. (4)
iii) Describe parallelizing the tree-search program using
OpenMP. (4)
14 Describe and compare the OpenMP and MPI implementations. Remember BTL1
