Task and Data Parallelism

10 flashcards

Parallel Computing

Parallel Computing is a type of computation in which many calculations or processes are carried out simultaneously. It uses multiple processing elements to solve a problem more quickly than a single processor could.
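
As a minimal sketch (the function name heavy is illustrative; any CPU-bound function would do), Python's standard-library concurrent.futures module can spread independent calculations across processor cores:

```python
from concurrent.futures import ProcessPoolExecutor

def heavy(n):
    # Stand-in for a CPU-bound calculation.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [200_000, 300_000, 400_000, 500_000]
    # Each input goes to a separate worker process, so the
    # calculations run simultaneously on multiple cores.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(heavy, inputs))
    print(results)
```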

Granularity in Parallel Computing

Granularity refers to the size of the tasks into which a computation is divided. Fine-grained decomposition uses many small tasks that communicate or synchronize frequently; coarse-grained decomposition uses fewer, larger tasks that communicate less often. Task and data parallelism can operate at different granularities.
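
The trade-off can be sketched with the chunksize parameter of executor.map, which batches items into larger tasks (function name illustrative; timing measurements omitted):

```python
from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x

if __name__ == "__main__":
    data = range(10_000)
    with ProcessPoolExecutor() as pool:
        # Fine-grained: items shipped to workers one at a time
        # (maximal load balancing, high scheduling overhead).
        fine = list(pool.map(square, data, chunksize=1))
        # Coarse-grained: items batched into large chunks
        # (low overhead, coarser load balancing).
        coarse = list(pool.map(square, data, chunksize=1_000))
    assert fine == coarse  # same work, different granularity
```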

Shared Memory Architecture

A Shared Memory Architecture is one where all the processors share the main memory space. It allows multiple processors to operate directly on shared data. Task parallelism is often employed in shared memory systems.
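
Threads inside a single process are a simple shared-memory setting: they read and write the same objects directly. A minimal sketch (names illustrative):

```python
import threading

# One list in the process's address space, visible to every thread.
shared = [0] * 8

def fill(start, step):
    # Each thread writes directly into the shared list
    # (distinct indices here, so no locking is required).
    for i in range(start, len(shared), step):
        shared[i] = i * i

t1 = threading.Thread(target=fill, args=(0, 2))  # even indices
t2 = threading.Thread(target=fill, args=(1, 2))  # odd indices
t1.start(); t2.start()
t1.join(); t2.join()
print(shared)  # [0, 1, 4, 9, 16, 25, 36, 49]
```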

MIMD (Multiple Instruction, Multiple Data)

MIMD is a type of parallel computing where different processing elements can execute different instructions on different data points at the same time. It can support both task and data parallelism.
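
Conceptually, separate processes on a multicore machine form a small MIMD system: each executes its own instruction stream on its own data. A hedged sketch (function names made up):

```python
from multiprocessing import Process, Queue

def summer(data, out):
    out.put(("sum", sum(data)))        # one instruction stream ...

def sorter(data, out):
    out.put(("sorted", sorted(data)))  # ... a different one, on different data

if __name__ == "__main__":
    out = Queue()
    # Different instructions, different data, at the same time: MIMD.
    p1 = Process(target=summer, args=([5, 3, 9], out))
    p2 = Process(target=sorter, args=([42, 7, 19, 1], out))
    p1.start(); p2.start()
    p1.join(); p2.join()
    print(out.get(), out.get())
```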

Synchronization

Synchronization in parallel computing is the coordination of concurrent processes or tasks to ensure correct ordering and data integrity. It is crucial for tasks that have dependencies or need to share resources.
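
In Python's threading module, a Lock is one such coordination primitive. A minimal sketch in which the lock preserves the integrity of a shared counter:

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:  # only one thread may update the counter at a time
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # always 400000, thanks to the lock
```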

Task Parallelism

Task Parallelism, also known as function parallelism, involves the execution of different tasks or functions concurrently on multiple computing cores. Each core may execute a different task on the same or a different data set.
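
A sketch of task parallelism with made-up function names: two different functions submitted to a thread pool and run concurrently, here over the same data:

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(text):
    return len(text.split())

def char_histogram(text):
    hist = {}
    for c in text:
        hist[c] = hist.get(c, 0) + 1
    return hist

text = "task parallelism runs different functions at the same time"
with ThreadPoolExecutor() as pool:
    # Two *different* tasks executing concurrently.
    f1 = pool.submit(word_count, text)
    f2 = pool.submit(char_histogram, text)
    print(f1.result(), len(f2.result()))
```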

Distributed Memory Architecture

In Distributed Memory Architecture, each processor has its own private memory. Communication between processors is achieved through a communication network. Data parallelism is often more suitable for distributed memory systems.
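
Python processes approximate this model on one machine: each has private memory, and data moves only through explicit messages. A sketch using a Queue as the communication channel (names illustrative):

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # This process cannot see the parent's variables; it only
    # receives data through the message channel.
    data = inbox.get()
    outbox.put(sum(data))

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put([1, 2, 3, 4])  # send a message across the "network"
    print(outbox.get())      # 10, received back as a message
    p.join()
```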

Data Parallelism

Data Parallelism involves the distribution of data across multiple processing units and performing the same operation on each unit of data concurrently. This is commonly used in operations that involve arrays or matrices.
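
A sketch of data parallelism over a list: the same operation applied to every chunk by a pool of worker processes (chunk size chosen arbitrarily):

```python
from concurrent.futures import ProcessPoolExecutor

def scale_chunk(chunk):
    # The *same* operation, applied to each piece of the data.
    return [2 * x for x in chunk]

if __name__ == "__main__":
    data = list(range(1_000))
    chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
    with ProcessPoolExecutor() as pool:
        scaled = [x for part in pool.map(scale_chunk, chunks) for x in part]
    assert scaled == [2 * x for x in data]
```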

SIMD (Single Instruction, Multiple Data)

SIMD is a type of parallel computing where multiple processing elements perform the same operation on multiple data points simultaneously. It is a form of data parallelism.
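
Pure Python does not expose SIMD instructions directly; assuming NumPy is installed, its vectorized operations run compiled loops that typically use SIMD units on supported hardware, so one expression replaces an element-by-element loop:

```python
import numpy as np

a = np.arange(1_000_000, dtype=np.float32)
b = np.ones(1_000_000, dtype=np.float32)

# One logical operation over all elements; NumPy's compiled loop
# can process several floats per machine instruction via SIMD.
c = a * 2.0 + b
print(c[:5])  # [1. 3. 5. 7. 9.]
```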

Race Condition

A race condition occurs in parallel computing when multiple threads or processes access and manipulate shared data concurrently and the result depends on the unpredictable timing of their operations. It is a type of synchronization issue that needs careful handling.
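
The classic demonstration is an unsynchronized shared counter: counter += 1 is a read-modify-write sequence, so concurrent threads can interleave and lose updates (the lock shown under Synchronization is the fix). A sketch:

```python
import sys
import threading

sys.setswitchinterval(1e-6)  # switch threads often to expose the race

counter = 0

def add_many(n):
    global counter
    for _ in range(n):
        counter += 1  # read-modify-write with NO lock: updates can be lost

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
# Expected 400000, but the race typically yields a smaller number.
print(counter)
```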
