Task and Data Parallelism
10 Flashcards
Parallel Computing
Parallel Computing is a type of computation in which many calculations or processes are carried out simultaneously. It leverages multiple processing elements to solve a problem more quickly than a single processor could.
Granularity in Parallel Computing
Granularity refers to the size of a task or job in parallel computing. It can be categorized as fine-grained, where work is divided into many small tasks that communicate or synchronize frequently, or coarse-grained, where tasks are larger and communicate less often. Task and data parallelism can operate at different granularities.
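A minimal C++ sketch of the coarse-grained end of this trade-off: the loop below hands each thread one large chunk. The chunking scheme and `process_chunk` helper are hypothetical illustrations, not part of the card; a fine-grained version would instead dispatch many small chunks (e.g. 64 elements each), improving load balance at the cost of more scheduling overhead.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Hypothetical per-chunk work: double every element in [begin, end).
void process_chunk(std::vector<int>& data, size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i) data[i] *= 2;
}

int main() {
    std::vector<int> data(1'000'000, 1);
    unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());

    // Coarse-grained split: one large contiguous chunk per thread.
    size_t chunk = data.size() / n_threads;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n_threads; ++t) {
        size_t begin = t * chunk;
        size_t end = (t + 1 == n_threads) ? data.size() : begin + chunk;
        workers.emplace_back(process_chunk, std::ref(data), begin, end);
    }
    for (auto& w : workers) w.join();
}
```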
Shared Memory Architecture
A Shared Memory Architecture is one where all the processors share the main memory space. It allows multiple processors to operate directly on shared data. Task parallelism is often employed in shared memory systems.
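A small sketch of what "operating directly on shared data" looks like with C++ threads: both threads write into the same vector in one address space, with no copying between them. Writing to disjoint ranges is what makes the example safe without locks.

```cpp
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::vector<int> shared(8, 0);  // one buffer, visible to all threads

    // Each thread writes its own half; disjoint ranges need no locking.
    std::thread t1([&] { for (int i = 0; i < 4; ++i) shared[i] = 1; });
    std::thread t2([&] { for (int i = 4; i < 8; ++i) shared[i] = 2; });
    t1.join();
    t2.join();

    for (int v : shared) std::cout << v << ' ';  // prints: 1 1 1 1 2 2 2 2
    std::cout << '\n';
}
```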
MIMD (Multiple Instruction, Multiple Data)
MIMD is a type of parallel computing where different processing elements can execute different instructions on different data points at the same time. It can support both task and data parallelism.
Synchronization
Synchronization in parallel computing is the coordination of concurrent processes or tasks to ensure correct ordering of operations and data integrity. It is crucial for tasks that have dependencies or need to share resources.
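One common synchronization primitive is a mutex. A minimal sketch: the lock serializes access to a shared counter so that concurrent increments cannot interleave mid-update.

```cpp
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    int counter = 0;
    std::mutex m;

    auto work = [&] {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(m);  // acquired here, released at scope exit
            ++counter;
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << counter << '\n';  // always 200000 when the lock is held
}
```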
Task Parallelism
Task Parallelism, also known as function parallelism, involves the execution of different tasks or functions concurrently on multiple computing cores. Each core may execute a different task on the same or different data set.
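A sketch of task parallelism with `std::async`: two unrelated functions run concurrently, each potentially on its own core. `parse_log` and `build_index` are hypothetical stand-ins for distinct tasks. Because each thread executes its own instruction stream on its own data, this also illustrates the MIMD model from the earlier card.

```cpp
#include <future>
#include <iostream>
#include <string>

int parse_log(const std::string& path)   { return 42; }  // placeholder work
int build_index(const std::string& path) { return 7; }   // placeholder work

int main() {
    // Different tasks, launched concurrently on different threads.
    auto f1 = std::async(std::launch::async, parse_log, "access.log");
    auto f2 = std::async(std::launch::async, build_index, "docs/");
    std::cout << f1.get() + f2.get() << '\n';  // wait for and combine both results
}
```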
Distributed Memory Architecture
In Distributed Memory Architecture, each processor has its own private memory. Communication between processors is achieved through a communication network. Data parallelism is often more suitable for distributed memory systems.
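A sketch of explicit message passing with MPI, the usual model for distributed memory (this assumes an MPI implementation such as Open MPI; compile with mpic++ and run with mpirun -np 2). Each rank has private memory, so the value moves only through an explicit send/receive pair.

```cpp
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 123;  // exists only in rank 0's private memory
        MPI_Send(&value, 1, MPI_INT, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value = 0;
        MPI_Recv(&value, 1, MPI_INT, /*source=*/0, /*tag=*/0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::cout << "rank 1 received " << value << '\n';
    }
    MPI_Finalize();
}
```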
Data Parallelism
Data Parallelism involves the distribution of data across multiple processing units and performing the same operation on each unit of data concurrently. This is commonly used in operations that involve arrays or matrices.
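A sketch of data parallelism using the C++17 parallel algorithms: one logical operation (squaring) is applied across every element, and the runtime distributes the elements over the available cores. Support for execution policies varies by toolchain (e.g. GCC requires linking against TBB), so treat this as illustrative.

```cpp
#include <algorithm>
#include <execution>
#include <vector>

int main() {
    std::vector<double> v(1'000'000, 3.0);
    // Same operation on every element, partitioned across processing units.
    std::for_each(std::execution::par, v.begin(), v.end(),
                  [](double& x) { x = x * x; });
}
```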
SIMD (Single Instruction, Multiple Data)
SIMD is a type of parallel computing where multiple processing elements perform the same operation on multiple data points simultaneously. It is a form of data parallelism.
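A concrete SIMD sketch for x86 using SSE intrinsics, assuming an SSE-capable CPU: the single instruction `_mm_add_ps` performs four float additions at once. Compilers also auto-vectorize plain loops like this at -O2/-O3, so intrinsics are shown here only to make the "single instruction, multiple data" idea visible.

```cpp
#include <immintrin.h>
#include <iostream>

int main() {
    alignas(16) float a[4] = {1, 2, 3, 4};
    alignas(16) float b[4] = {10, 20, 30, 40};
    alignas(16) float c[4];

    __m128 va = _mm_load_ps(a);      // load 4 floats into one 128-bit register
    __m128 vb = _mm_load_ps(b);
    __m128 vc = _mm_add_ps(va, vb);  // one instruction, four additions
    _mm_store_ps(c, vc);

    for (float x : c) std::cout << x << ' ';  // prints: 11 22 33 44
    std::cout << '\n';
}
```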
Race Condition
A race condition occurs in parallel computing when multiple threads or processes access and manipulate shared data concurrently, so that the result depends on the unpredictable timing of their execution. It is a type of synchronization issue that needs careful handling.
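A sketch of the classic lost-update race: two threads increment a shared counter with no synchronization. Formally this is undefined behavior in C++; in practice the final count usually comes out below 200000 because the read-modify-write steps interleave. Contrast this with the mutex example under Synchronization, which fixes it.

```cpp
#include <iostream>
#include <thread>

int main() {
    int counter = 0;  // shared and unprotected

    auto work = [&] {
        for (int i = 0; i < 100000; ++i)
            ++counter;  // read-modify-write: not atomic, increments can be lost
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    // Nondeterministic output; use std::atomic<int> or a mutex to fix.
    std::cout << counter << '\n';
}
```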