Parallel Computing Patterns
15 Flashcards
Bulk Synchronous Parallel
A parallel computing pattern that organizes computation into a sequence of supersteps, each ending in a barrier synchronization across all processors. Use when you can structure your computation into phases separated by barrier synchronizations.
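A minimal sketch in Python, assuming a hypothetical workload where each worker accumulates a local total over a few supersteps; the barrier keeps every worker in lockstep at phase boundaries.

```python
import threading

# BSP sketch (hypothetical workload): each worker runs N_SUPERSTEPS of
# local computation, and the barrier ends every superstep.
N_WORKERS, N_SUPERSTEPS = 4, 3
barrier = threading.Barrier(N_WORKERS)
totals = [0] * N_WORKERS          # per-worker partial results

def worker(wid: int) -> None:
    for step in range(N_SUPERSTEPS):
        totals[wid] += wid * step  # local computation phase
        barrier.wait()             # synchronization phase

threads = [threading.Thread(target=worker, args=(w,)) for w in range(N_WORKERS)]
for t in threads: t.start()
for t in threads: t.join()
print(totals)                      # [0, 3, 6, 9]
```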
Distributed Memory
A computing pattern where each processor has its own private memory and processors communicate by passing messages. Use when the system consists of a network of interconnected processors.
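A small sketch using Python processes to stand in for distributed nodes: each process keeps its own private state and the only way to share it is an explicit message. The node logic here is hypothetical.

```python
from multiprocessing import Process, Queue

# Distributed-memory sketch: each process has private memory; queues act
# as the message-passing network between the two nodes.
def node(my_id: int, inbox: Queue, outbox: Queue) -> None:
    private = my_id * 10      # lives only in this process's memory
    outbox.put(private)       # "send" to the other node
    received = inbox.get()    # "receive" from the other node
    print(f"node {my_id}: private={private}, received={received}")

if __name__ == "__main__":
    q01, q10 = Queue(), Queue()
    p0 = Process(target=node, args=(0, q10, q01))
    p1 = Process(target=node, args=(1, q01, q10))
    p0.start(); p1.start(); p0.join(); p1.join()
```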
Dataflow
A pattern where execution is driven by the availability of data operands rather than by a statically determined instruction sequence. Use when the computation can be represented as a directed graph of operations.
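A toy sketch of the idea, with a hypothetical four-node graph: a node fires whenever all of its operands have been produced, regardless of the order the nodes were declared in.

```python
# Dataflow sketch: operations fire as soon as all their input operands
# are available, not in a fixed program order.
graph = {  # node -> (function, names of input nodes)
    "a":   (lambda: 2, ()),
    "b":   (lambda: 3, ()),
    "sum": (lambda x, y: x + y, ("a", "b")),
    "sq":  (lambda s: s * s, ("sum",)),
}
values = {}
pending = dict(graph)
while pending:
    for name, (fn, deps) in list(pending.items()):
        if all(d in values for d in deps):   # operands ready?
            values[name] = fn(*(values[d] for d in deps))
            del pending[name]
print(values["sq"])  # 25
```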
Task Parallelism
A pattern where different computational tasks run in parallel. Use when the work can be divided into tasks that operate largely independently and rarely need to synchronize.
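A minimal sketch: two unrelated tasks (both hypothetical) run concurrently because neither depends on the other's data.

```python
from concurrent.futures import ThreadPoolExecutor

# Task-parallelism sketch: two different operations run side by side.
def word_count(text: str) -> int: return len(text.split())
def checksum(data: bytes) -> int: return sum(data) % 256

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(word_count, "a stream of words to count")
    f2 = pool.submit(checksum, bytes(range(100)))
    print(f1.result(), f2.result())
```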
Master-Worker
A pattern where a master process distributes work to multiple worker processes and collects results. Use when tasks can be easily partitioned and distributed to workers.
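A sketch with a hypothetical squaring task: the master enqueues work items and sentinels, workers pull tasks from a shared queue, and the master collects the results.

```python
from multiprocessing import Process, Queue

# Master-worker sketch: workers loop pulling tasks until they see a
# sentinel; the master distributes work and gathers results.
def worker(tasks: Queue, results: Queue) -> None:
    while True:
        n = tasks.get()
        if n is None:                 # sentinel: no more work
            break
        results.put((n, n * n))

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    workers = [Process(target=worker, args=(tasks, results)) for _ in range(3)]
    for w in workers: w.start()
    for n in range(10): tasks.put(n)   # master distributes work
    for _ in workers: tasks.put(None)  # one sentinel per worker
    print(sorted(results.get() for _ in range(10)))  # master collects
    for w in workers: w.join()
```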
Pipeline Parallelism
A pattern where a stream of input data flows through a sequence of stages in assembly-line fashion. Use when a task can be broken into sequential stages with data flowing between them.
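A three-stage sketch (produce, square, consume; all hypothetical) where queues connect the stages, so a later item can be produced while earlier items are still moving downstream.

```python
import queue, threading

# Pipeline sketch: three stages linked by queues; DONE marks end of stream.
q1, q2 = queue.Queue(), queue.Queue()
DONE = object()

def produce():
    for i in range(5): q1.put(i)
    q1.put(DONE)

def square():
    while (item := q1.get()) is not DONE:
        q2.put(item * item)
    q2.put(DONE)

def consume():
    while (item := q2.get()) is not DONE:
        print(item)

stages = [threading.Thread(target=f) for f in (produce, square, consume)]
for t in stages: t.start()
for t in stages: t.join()
```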
Recursive Parallelism
A pattern where a problem is solved by recursively dividing it into smaller sub-problems that are solved in parallel. Use when problems exhibit a natural hierarchical structure and can be divided recursively.
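A sketch of a parallel sum, assuming a hypothetical cutoff below which recursion stops: each call splits its data in half, solves one half in a new thread and the other in place, then combines.

```python
import threading

# Recursive-parallelism sketch: divide, solve halves in parallel, combine.
def psum(data, out, cutoff=1000):
    if len(data) <= cutoff:
        out.append(sum(data))       # small enough: solve directly
        return
    mid = len(data) // 2
    left, right = [], []
    t = threading.Thread(target=psum, args=(data[:mid], left, cutoff))
    t.start()
    psum(data[mid:], right, cutoff) # recurse in the current thread
    t.join()
    out.append(left[0] + right[0])  # combine sub-results

result = []
psum(list(range(10_000)), result)
print(result[0])  # 49995000
```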
Event-Driven Parallelism
A pattern where computation takes place in response to external events. Use when the system needs to react to asynchronous events or actions.
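A small asyncio sketch: the handlers (hypothetical) do nothing on a fixed schedule and run only when their events fire, in whatever order the events arrive.

```python
import asyncio

# Event-driven sketch: handlers block until their event fires.
async def handler(name: str, event: asyncio.Event) -> None:
    await event.wait()              # wait for the external event
    print(f"{name} reacting")

async def main() -> None:
    ev1, ev2 = asyncio.Event(), asyncio.Event()
    tasks = [asyncio.create_task(handler("h1", ev1)),
             asyncio.create_task(handler("h2", ev2))]
    await asyncio.sleep(0.1); ev2.set()   # events arrive...
    await asyncio.sleep(0.1); ev1.set()   # ...in arbitrary order
    await asyncio.gather(*tasks)

asyncio.run(main())
```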
Fork-Join
A pattern where a task is divided into subtasks that are executed in parallel, and then joined upon completion. Use when you can decompose a task into independent subtasks that can run concurrently.
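A sketch summing a list: the work is forked into chunk-sized subtasks (chunk size chosen arbitrarily here), and the parent joins on every future before combining.

```python
from concurrent.futures import ThreadPoolExecutor

# Fork-join sketch: fork independent subtasks, then join their results.
def partial_sum(chunk): return sum(chunk)

data = list(range(1_000))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(partial_sum, c) for c in chunks]  # fork
    total = sum(f.result() for f in futures)                 # join
print(total)  # 499500
```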
Shared Memory
A parallel computing pattern where multiple processors operate on a common address space. Use when high-speed data exchange between processors is required.
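A sketch with a shared counter: every thread reads and writes the same variable, and a lock guards against lost updates.

```python
import threading

# Shared-memory sketch: all threads share one address space; the lock
# serializes updates to the shared counter.
counter = 0
lock = threading.Lock()

def bump(times: int) -> None:
    global counter
    for _ in range(times):
        with lock:              # guard the shared state
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000
```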
Geometric Decomposition
A pattern that involves dividing a large geometric problem space into smaller pieces that can be solved in parallel. Use when the problem can be represented in a geometric space and can be subdivided.
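A sketch over a hypothetical 8x8 grid: the grid is cut into row blocks and each block is handed to a worker, with a trivial per-block kernel standing in for real work.

```python
from concurrent.futures import ThreadPoolExecutor

# Geometric-decomposition sketch: split a 2D grid into blocks and
# process each block independently in parallel.
GRID = [[x + y for x in range(8)] for y in range(8)]

def process_block(rows):        # hypothetical per-block kernel
    return sum(sum(row) for row in rows)

blocks = [GRID[i:i + 2] for i in range(0, 8, 2)]  # 4 blocks of 2 rows
with ThreadPoolExecutor() as pool:
    block_sums = list(pool.map(process_block, blocks))
print(block_sums, sum(block_sums))
```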
MapReduce
A programming model for processing large data sets with a distributed algorithm on a cluster. Use when a map operation can be applied independently to each input record and the intermediate results can be reduced to a final answer.
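A single-machine sketch of the idea, assuming a hypothetical word-count job: documents are mapped to partial counts in parallel processes, then the partials are reduced into one result.

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

# MapReduce sketch: parallel map phase, then a reduce over partials.
def map_count(doc: str) -> Counter:
    return Counter(doc.split())

if __name__ == "__main__":
    docs = ["a b a", "b c", "a c c"]
    with Pool(3) as pool:
        partials = pool.map(map_count, docs)       # map phase
    total = reduce(lambda x, y: x + y, partials)   # reduce phase
    print(total)  # Counter({'a': 3, 'c': 3, 'b': 2})
```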
Speculative Parallelism
A pattern where multiple speculative branches of a computation are executed in parallel. Use when there's uncertainty about which direction a computation will take.
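A sketch with hypothetical branch functions: both arms of a conditional start before the (slow) condition is known, and the losing result is simply discarded.

```python
from concurrent.futures import ThreadPoolExecutor

# Speculation sketch: run both branches while the condition computes.
def slow_condition(): return sum(range(10**6)) % 2 == 0
def branch_true():    return "took true branch"
def branch_false():   return "took false branch"

with ThreadPoolExecutor() as pool:
    cond = pool.submit(slow_condition)
    t, f = pool.submit(branch_true), pool.submit(branch_false)  # speculate
    print(t.result() if cond.result() else f.result())          # keep winner
```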
Load Balancing
A pattern that aims to distribute work evenly across parallel compute resources to avoid idle time and maximize throughput. Use when the workload is dynamic or tasks vary greatly in size.
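A sketch of dynamic load balancing via a shared work queue, with task sizes invented for illustration: whichever worker goes idle pulls the next task, so no worker waits while work remains.

```python
import queue, threading, time

# Load-balancing sketch: uneven tasks in one shared queue; idle workers
# pull the next task instead of being assigned a fixed share up front.
tasks = queue.Queue()
for size in [5, 1, 1, 4, 1, 2, 1, 1]:   # hypothetical uneven workloads
    tasks.put(size)

def worker(wid: int) -> None:
    while True:
        try:
            size = tasks.get_nowait()
        except queue.Empty:
            return
        time.sleep(size * 0.01)          # simulate size-proportional work
        print(f"worker {wid} finished task of size {size}")

threads = [threading.Thread(target=worker, args=(w,)) for w in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```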
Data Parallelism
A parallel computing pattern where similar operations are performed concurrently on elements of distributed data structures. Use when large data sets can be processed independently in parallel.
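A minimal sketch: the same (hypothetical) operation is applied to every element of a dataset, with the elements split across processes.

```python
from multiprocessing import Pool

# Data-parallelism sketch: one operation applied concurrently to all
# elements of the data, partitioned across worker processes.
def normalize(x: float) -> float:
    return x / 100.0          # identical operation on each element

if __name__ == "__main__":
    data = list(range(1_000))
    with Pool(4) as pool:
        result = pool.map(normalize, data)   # elementwise, in parallel
    print(result[:5])
```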