Quiznetik

Multi-core Architectures and Programming | Set 1

1. A collection of lines that connects several devices is called ______________

Correct : A. bus

2. The PC (Program Counter) is also called ____________

Correct : A. instruction pointer

3. Which MIMD systems are best scalable with respect to the number of processors?

Correct : A. Distributed memory computers

4. Cache coherence: For which shared (virtual) memory systems is the snooping protocol suited?

Correct : D. Bus based systems

5. The idea of cache memory is based ______

Correct : A. on the property of locality of reference

6. When the number of switch ports is equal to or larger than the number of devices, this simple network is referred to as ______________

Correct : D. Both a and b

7. A remote node is a node that has a copy of a ______________

Correct : D. Cache block

8. A pipeline is like _______________

Correct : A. an automobile assembly line

9. Which cache miss does not occur in case of a fully associative cache?

Correct : A. Conflict miss

10. Bus switches are present in ____________

Correct : B. crossbar switching

11. Systems that do not have parallel processing capabilities are ______________

Correct : A. SISD

12. Parallel programs: Which speedup could be achieved according to Amdahl's law for an infinite number of processors if 5% of a program is sequential and the remaining part is ideally parallel?

Correct : B. 20
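For reference, Amdahl's law gives the limiting speedup as the reciprocal of the sequential fraction s:

```latex
S(p) = \frac{1}{s + \frac{1-s}{p}},
\qquad
\lim_{p \to \infty} S(p) = \frac{1}{s} = \frac{1}{0.05} = 20
```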

13. SIMD represents an organization that ______________

Correct : A. Includes many processing units under the supervision of a common control unit

14. Cache memory works on the principle of ____________

Correct : B. Locality of reference

15. In a shared bus architecture, the number of processors required to perform a bus cycle for fetching data or instructions is ________________

Correct : A. One Processor

16. An alternative to a snooping-based coherence protocol is called a ____________

Correct : C. Directory protocol

17. If no node has a copy of a cache block, that block's state is known as ______

Correct : B. Un-cached

18. The requesting node is sent the requested data from memory, and the requestor is made the only sharing node; this is known as a ________.

Correct : A. Read miss

19. A processor performing fetch or decoding of a different instruction during the execution of another instruction is called ______.

Correct : C. Pipe-lining

20. In the __________, all nodes in each dimension form a linear array.

Correct : D. Mesh topology

21. The concept of pipelining is most effective in improving performance if the tasks being performed in different stages:

Correct : B. require about the same amount of time

22. The expression 'delayed load' is used in the context of

Correct : C. pipelining

23. During the execution of the instructions, a copy of the instructions is placed in the ______ .

Correct : D. Cache

24. Producer consumer problem can be solved using _____________

Correct : C. monitors

25. A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which access takes place is called:

Correct : B. race condition

26. The segment of code in which the process may change common variables, update tables, or write into files is known as:

Correct : B. critical section

27. All deadlocks involve conflicting needs for __________

Correct : A. Resources

28. ___________ are used for signaling among processes and can be readily used to enforce a mutual exclusion discipline.

Correct : A. Semaphores

29. To avoid deadlock ____________

Correct : A. there must be a fixed number of resources to allocate

30. A minimum of _____ variable(s) is/are required to be shared between processes to solve the critical section problem.

Correct : B. two

31. Spinlocks are intended to provide __________ only.

Correct : Mutual Exclusion

32. To ensure difficulties do not arise in the readers–writers problem, _______ are given exclusive access to the shared object.

Correct : B. writers

33. If a process is executing in its critical section, then no other processes can be executing in their critical section. This condition is called ___________.

Correct : D. mutual exclusion

34. A semaphore is a shared integer variable ____________.

Correct : B. that cannot drop below zero

35. A critical section is a program segment ______________.

Correct : A. where shared resources are accessed

36. A counting semaphore was initialized to 10. Then 6 P (wait) operations and 4 V (signal) operations were completed on this semaphore. The resulting value of the semaphore is ___________

Correct : D. 8
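The arithmetic: each P (wait) decrements the value by 1 and each V (signal) increments it by 1:

```latex
10 - 6 + 4 = 8
```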

37. A system has 3 processes sharing 4 resources. If each process needs a maximum of 2 units, then _____________

Correct : B. deadlock can never occur
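One way to verify this uses a standard sufficient condition for deadlock freedom: with m identical resource units and n processes, deadlock cannot occur if the maximum claims sum to less than m + n, because then in the worst case at least one process can still obtain its full claim, finish, and release its units. Here:

```latex
\sum_{i=1}^{n} \text{max}_i = 3 \times 2 = 6 \;<\; m + n = 4 + 3 = 7
```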

38. _____________ refers to the ability of multiple processes (or threads) to share code, resources, or data in such a way that only one process has access to the shared object at a time.

Correct : D. Mutual Exclusion

39. ____________ is the ability of multiple processes to co-ordinate their activities by exchange of information.

Correct : B. Synchronization

40. When paths are allowed an unbounded number of nonminimal hops from packet sources, the situation is referred to as __________.

Correct : A. Livelock

41. Let S and Q be two semaphores initialized to 1, where processes P0 and P1 execute wait(S); wait(Q); ---; signal(S); signal(Q); and wait(Q); wait(S); ---; signal(Q); signal(S); respectively. The above situation depicts a _________.

Correct : C. Deadlock
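A minimal sketch of this interleaving with POSIX semaphores standing in for S and Q (thread function names are illustrative): if P0 acquires S while P1 acquires Q, each then blocks forever waiting on the semaphore the other holds.

```c
#include <pthread.h>
#include <semaphore.h>

sem_t S, Q;                      /* both initialized to 1, as in the question */

void *p0(void *arg) {
    sem_wait(&S);                /* P0 holds S ... */
    sem_wait(&Q);                /* ... then waits for Q */
    /* critical section */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

void *p1(void *arg) {
    sem_wait(&Q);                /* P1 holds Q ... */
    sem_wait(&S);                /* ... then waits for S: circular wait */
    /* critical section */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);      /* never returns under the deadlocking interleaving */
    pthread_join(t1, NULL);
    return 0;
}
```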

42. Which of the following conditions must be satisfied to solve the critical section problem?

Correct : D. All of the mentioned

43. Mutual exclusion implies that ____________.

Correct : A. if a process is executing in its critical section, then no other process must be executing in their critical sections

44. Bounded waiting implies that there exists a bound on the number of times other processes are allowed to enter their critical sections ____________.

Correct : A. after a process has made a request to enter its critical section and before the request is granted

45. What are the two atomic operations permissible on semaphores?

Correct : A. Wait and signal

46. What are Spinlocks?

Correct : D. All of the mentioned

47. What is the main disadvantage of spinlocks?

Correct : B. they require busy waiting

48. The signal operation of a semaphore basically works on the _______ system call.

Correct : B. wakeup()

49. If the semaphore value is negative ____________.

Correct : A. its magnitude is the number of processes waiting on that semaphore
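A textbook-style sketch of why a negative value counts the waiters (type and function names are illustrative, and this is not how POSIX semaphores are actually implemented): wait decrements before testing, so a value of −k means k processes are blocked; signal's wakeup path matches question 48.

```c
/* Counting semaphore sketch: value < 0 means |value| processes are waiting. */
typedef struct {
    int value;
    struct process *list;        /* queue of processes blocked on this semaphore */
} semaphore;

void sem_wait_op(semaphore *s) {
    s->value--;
    if (s->value < 0) {
        /* add the caller to s->list and block() it */
    }
}

void sem_signal_op(semaphore *s) {
    s->value++;
    if (s->value <= 0) {
        /* remove one process from s->list and wakeup() it */
    }
}
```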

50. Which directive must precede the directive: #pragma omp sections (not necessarily immediately)?

Correct : #pragma omp parallel

51. When compiling an OpenMP program with gcc, what flag must be included?

Correct : A. -fopenmp
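A minimal sketch (file name illustrative): gcc needs the -fopenmp flag both to honor the OpenMP pragmas and to link the OpenMP runtime library.

```c
/* hello_omp.c -- compile with:  gcc -fopenmp hello_omp.c -o hello_omp */
#include <omp.h>
#include <stdio.h>

int main(void) {
    #pragma omp parallel
    printf("Hello from thread %d of %d\n",
           omp_get_thread_num(), omp_get_num_threads());
    return 0;
}
```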

52. Variables declared before a parallel region are, by default, ________ inside the region.

Correct : D. Shared

53. A ______________ construct by itself creates a “single program multiple data” program, i.e., each thread executes the same code.

Correct : A. Parallel

54. _______________ specifies that the iterations of the loop must be executed as they would be in a serial program.

Correct : B. Ordered

55. ___________________ initializes each private copy with the corresponding value from the master thread.

Correct : A. Firstprivate
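A minimal firstprivate sketch (variable name illustrative): each thread's private copy of x starts from the master thread's value.

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    int x = 10;                         /* master thread's value */
    #pragma omp parallel firstprivate(x)
    {
        x += omp_get_thread_num();      /* each private copy starts at 10 */
        printf("thread %d: x = %d\n", omp_get_thread_num(), x);
    }
    /* the master's x is still 10 here: firstprivate copies in, not out */
    return 0;
}
```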

56. The __________________ of a parallel region extends the lexical extent by the code of functions that are called (directly or indirectly) from within the parallel region.

Correct : C. Dynamic extent

57. The ______________ specifies that the iterations of the for loop should be executed in parallel by multiple threads.

Correct : B. for pragma
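A sketch of the for pragma inside a parallel region (array and function names are illustrative); this also illustrates question 65, since a work-sharing construct must be enclosed in a parallel region to execute in parallel.

```c
#include <omp.h>

#define N 1000
double a[N];

void square_all(void) {
    #pragma omp parallel
    {
        #pragma omp for              /* iterations divided among the team */
        for (int i = 0; i < N; i++)
            a[i] = (double)i * i;
    }
}
```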

58. The _______________ function returns the number of threads that are currently active in the parallel region.

Correct : B. omp_get_num_threads ( )

59. The size of the initial chunk is _____________.

Correct : A. total_no_of_iterations / max_threads

60. A ____________ in OpenMP is just some text that modifies a directive.

Correct : B. clause

61. In OpenMP, the collection of threads executing the parallel block (the original thread and the new threads) is called a ____________

Correct : A. team

62. When a thread reaches a _____________ directive, it creates a team of threads and becomes the master of the team.

Correct : B. Parallel

63. Use the _________ library function to determine if nested parallel regions are enabled.

Correct : D. omp_get_nested()

64. The ____________ directive ensures that a specific memory location is updated atomically, rather than exposing it to the possibility of multiple, simultaneous writing threads.

Correct : C. atomic
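A sketch of the atomic directive protecting a single shared update (names illustrative): the increment becomes one atomic read-modify-write instead of a racy load/add/store.

```c
#include <omp.h>

int counter = 0;                     /* shared by all threads */

void bump(int times) {
    #pragma omp parallel for
    for (int i = 0; i < times; i++) {
        #pragma omp atomic           /* one atomic read-modify-write */
        counter++;
    }
}
```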

65. A ___________ construct must be enclosed within a parallel region in order for the directive to execute in parallel.

Correct : D. work-sharing

66. ____________ is a form of parallelization across multiple processors in parallel computing environments.

Correct : B. Data parallelism

67. In OpenMP, assigning iterations to threads is called ________________

Correct : A. scheduling

68. The ____________ is implemented more efficiently than a general parallel region containing possibly several loops.

Correct : B. Parallel Do/For

69. _______________ causes no synchronization overhead and can maintain data locality when data fits in cache.

Correct : D. Static
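A sketch of static scheduling (chunk size illustrative): iteration blocks are assigned to threads once, at loop entry, so there is no run-time scheduling synchronization and each thread revisits the same data on repeated passes.

```c
#include <omp.h>

#define N 1024
double x[N], y[N];

void axpy(double alpha) {
    /* chunks of 64 iterations are fixed per thread before the loop runs */
    #pragma omp parallel for schedule(static, 64)
    for (int i = 0; i < N; i++)
        y[i] += alpha * x[i];
}
```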

70. How does the difference between the logical view and the reality of parallel architectures affect parallelization?

Correct : A. Performance

71. How many assembly instructions does the following C statement take? global_count += 5;

Correct : A. 4 instructions
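One plausible breakdown consistent with the 4-instruction answer, shown as a comment in a C sketch (a hypothetical MIPS-like expansion; actual output depends on the compiler and ISA).

```c
/* Hypothetical RISC-style expansion of the statement, in 4 steps:

     la   $t0, global_count   # 1. load the address of global_count
     lw   $t1, 0($t0)         # 2. load its current value into a register
     addi $t1, $t1, 5         # 3. add 5 in the register
     sw   $t1, 0($t0)         # 4. store the result back to memory        */
int global_count;

void bump(void) {
    global_count += 5;
}
```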

72. MPI specifies the functionality of _________________ communication routines.

Correct : A. High-level

73. _________________ generates log files of MPI calls.

Correct : B. mpilog

74. A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a ________________.

Correct : C. Broadcast
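A minimal broadcast sketch with MPI_Bcast (variable names illustrative): rank 0's value is delivered to every process in the communicator.

```c
#include <mpi.h>

int main(int argc, char *argv[]) {
    int data = 0, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) data = 42;                  /* only the root has the value */
    MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);
    /* now data == 42 on every process */
    MPI_Finalize();
    return 0;
}
```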

75. __________________ is a nonnegative integer that the destination can use to selectively screen messages.

Correct : B. Type

76. The routine ________________ combines data from all processes by adding them in this case and returning the result to a single process.

Correct : A. MPI_Reduce
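A minimal MPI_Reduce sketch (the per-process contributions are illustrative): each process's local value is summed, and the result lands only on the root.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, total = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int local = rank + 1;                      /* one value per process */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %d\n", total);
    MPI_Finalize();
    return 0;
}
```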

77. The easiest way to create communicators with new groups is with _____________.

Correct : C. MPI_Comm_Split

78. _______________ is an object that holds information about the received message, including, for example, its actual count.

Correct : D. status

79. The _______________ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes.

Correct : A. Reduce-scatter

80. __________________ is the principal alternative to shared memory parallel programming.

Correct : B. Message passing

81. ________________ may complete even if fewer than count elements have been received.

Correct : A. MPI_Recv

82. A ___________ is a script whose main purpose is to run some program. In this case, the program is the C compiler.

Correct : A. wrapper script

83. ________________ returns in its second argument the number of processes in the communicator.

Correct : B. MPI_Comm_size

84. _____________ always blocks until a matching message has been received.

Correct : C. MPI_Recv

85. Communication functions that involve all the processes in a communicator are called ___________

Correct : B. collective communications

86. MPI_Send and MPI_Recv are called _____________ communications.

Correct : C. point-to-point
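A minimal point-to-point sketch (message value and tag are illustrative): rank 0 sends, and rank 1 blocks in MPI_Recv until a matching message arrives; the MPI_Status object records the source, tag, and actual count (questions 78 and 84).

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, msg;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        msg = 99;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;               /* source, tag, and actual count */
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", msg);
    }
    MPI_Finalize();
    return 0;
}
```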

87. The processes exchange partial results instead of using one-way communications. Such a communication pattern is sometimes called a ___________.

Correct : A. butterfly

88. A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a _________.

Correct : A. broadcast

89. In MPI, a ______________ can be used to represent any collection of data items in memory by storing both the types of the items and their relative locations in memory.

Correct : B. derived datatype

90. MPI provides a function, ____________ that returns the number of seconds that have elapsed since some time in the past.

Correct : A. MPI_Wtime
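A minimal timing sketch with MPI_Wtime (the work being timed is a placeholder): it returns wall-clock seconds since some moment in the past, so only differences are meaningful.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    double start = MPI_Wtime();
    /* ... work to be timed goes here ... */
    double elapsed = MPI_Wtime() - start;   /* seconds since 'start' */
    printf("elapsed = %f s\n", elapsed);
    MPI_Finalize();
    return 0;
}
```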

91. Programs that can maintain a constant efficiency without increasing the problem size are sometimes said to be _______________.

Correct : B. strongly scalable

92. The principle that parallelism can be used to increase the size of the problem is applicable in ___________________.

Correct : B. Gustafson-Barsis's Law

93. Synchronization is one of the common issues in parallel programming. The issues related to synchronization include the following, EXCEPT:

Correct : D. Correctness

94. Considering whether to use weak or strong scaling is part of ______________ in addressing the challenges of distributed memory programming.

Correct : B. Speeding up computations

95. Which of the following is the BEST description of the Message Passing Interface (MPI)?

Correct : B. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other

96. An n-body solver is a ___________ that finds the solution to an n-body problem by simulating the behaviour of the particles

Correct : A. Program

97. The set of NP-complete problems is often denoted by ____________

Correct : B. NP-C or NPC

98. Pthreads has a nonblocking version of pthread_mutex_lock called __________

Correct : B. pthread_mutex_trylock
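A sketch of the nonblocking lock (function and variable names are illustrative): pthread_mutex_trylock returns immediately with EBUSY instead of blocking when the mutex is already held.

```c
#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared = 0;

void try_update(void) {
    if (pthread_mutex_trylock(&lock) == 0) {
        shared++;                        /* got the lock without blocking */
        pthread_mutex_unlock(&lock);
    } else {
        /* trylock returned EBUSY: lock held elsewhere; do other work */
    }
}
```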

99. What are the algorithms for identifying which subtrees we assign to the processes or threads? __________

Correct : C. depth-first search and breadth-first search

100. What are the scoping clauses in OpenMP? _________

Correct : A. Shared Variables & Private Variables