Basic Parallel Computing Terminology
30 Flashcards
Concurrency
A property of systems in which several computations are in progress during overlapping time periods and can potentially interact with each other; unlike strict parallelism, the computations need not all execute at the same instant.
Distributed Memory
A system where each processor has its own private memory and computation is performed asynchronously across different nodes.
Supercomputer
A computer with a high level of performance as compared to a general-purpose computer. The performance is typically measured in FLOPS (Floating Point Operations Per Second).
Task Parallelism
A form of parallelization where the focus is on distributing tasks—conceptually separate pieces of work—across processors.
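As an illustration (not part of the original card), a minimal C sketch of task parallelism with POSIX threads: two threads carry out different kinds of work at the same time. The task functions and loop bound are made up for the example; compile with a flag such as -pthread.

#include <pthread.h>
#include <stdio.h>

/* Two conceptually different tasks, run at the same time by separate threads. */
static void *sum_task(void *arg) {
    long long total = 0;
    for (long long i = 1; i <= 1000000; i++)
        total += i;
    printf("sum task done: %lld\n", total);
    return NULL;
}

static void *log_task(void *arg) {
    printf("logging task done\n");   /* stands in for unrelated work such as I/O */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, sum_task, NULL);   /* tasks, not data, are distributed */
    pthread_create(&t2, NULL, log_task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}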
Semaphore
A synchronization primitive that can be used to control access to a common resource in a concurrent system.
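A minimal POSIX sketch, assuming pthreads and semaphore.h are available (the thread count and slot limit are arbitrary): the counting semaphore lets at most two of the four threads use the shared resource at once.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t slots;                    /* counting semaphore guarding the resource */

static void *worker(void *arg) {
    sem_wait(&slots);                  /* acquire a slot; blocks if none is free */
    printf("thread %ld is using the shared resource\n", (long)arg);
    sem_post(&slots);                  /* release the slot */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&slots, 0, 2);            /* at most 2 threads inside at any moment */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (long i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}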
Hyper-threading
Intel's proprietary simultaneous multithreading implementation used to improve parallelization of computations performed on x86 microprocessors.
Shared Memory
A memory accessible by all the processors in a multi-processor system.
Load Balancing
The process of distributing a set of tasks over a set of resources, with the goal of making their overall processing more efficient.
Gustafson's Law
A counterpart to Amdahl's Law that treats computing workloads as scalable: as the problem size grows with the number of processors, the parallel portion of the work grows too, so the achievable (scaled) speedup keeps increasing.
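The law is commonly written as the scaled speedup

S(N) = s + (1 - s) * N

where N is the number of processors and s is the serial fraction of the execution time measured on the N-processor run; because s stays small when the problem grows with the machine, S keeps growing with N.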
Synchronization
The coordination of concurrent computational processes to complete tasks in a proper sequence and without conflict.
Cluster
A set of loosely or tightly connected computers that work together so that they can be viewed as a single system.
Node
In distributed or parallel computing, a node is a single computing device within a larger system.
CUDA (Compute Unified Device Architecture)
A parallel computing platform and application programming interface (API) model created by Nvidia.
MapReduce
A programming model for processing and generating large data sets with a parallel, distributed algorithm on a cluster.
Speedup
The ratio of a task's execution time on a reference system (for example, one processor) to its execution time on an improved system (for example, N processors); a measure of how much faster the task runs with the added resources.
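Commonly expressed as the ratio

S = T_1 / T_N

where T_1 is the execution time on the reference system and T_N the execution time on the improved system; S = 4 means the task runs four times faster.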
Amdahl's Law
A formula that shows the potential speedup of a task using multiple processors relative to a single processor, considering the portion of the task that can be parallelized.
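The usual form of the formula is

S(N) = 1 / ((1 - p) + p / N)

where p is the parallelizable fraction of the work and N the number of processors. As N grows, S(N) approaches 1 / (1 - p); for example, if 90% of a task can be parallelized (p = 0.9), no number of processors can deliver more than a 10x speedup.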
Grid Computing
A form of distributed computing wherein a 'virtual supercomputer' is composed of a cluster of networked, loosely-coupled computers acting in concert to perform very large tasks.
Processor
The hardware unit, such as a CPU or a single core of one, that fetches and executes a program's instructions; parallel systems combine many processors so that instructions can be executed at the same time.
Race Condition
A flaw that occurs when the timing or order of events affects a program's correctness.
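A minimal C sketch of the flaw, with made-up loop counts: two threads increment a shared counter without synchronization, so the unprotected read-modify-write steps interleave and updates are usually lost.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;             /* shared and unprotected: this is the bug */

static void *increment(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;                   /* read-modify-write; threads can interleave here */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);   /* often prints less */
    return 0;
}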
Deadlock
A situation in concurrent computing where two or more processes are unable to proceed because each is waiting for the other to release resources.
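A deliberately broken C sketch of the situation (it is expected to hang and is not a pattern to copy): each thread takes the same pair of mutexes in the opposite order, so each ends up waiting for the lock the other already holds.

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *first(void *arg) {
    pthread_mutex_lock(&lock_a);
    usleep(1000);                     /* widen the window so the deadlock is likely */
    pthread_mutex_lock(&lock_b);      /* waits for the other thread... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *second(void *arg) {
    pthread_mutex_lock(&lock_b);
    usleep(1000);
    pthread_mutex_lock(&lock_a);      /* ...which is waiting for this thread */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, first, NULL);
    pthread_create(&t2, NULL, second, NULL);
    pthread_join(t1, NULL);           /* neither join ever returns */
    pthread_join(t2, NULL);
    return 0;
}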
Data Parallelism
A form of parallel computing where the focus is on distributing data across different nodes, which operate on the data in parallel.
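A minimal OpenMP sketch in C, assuming a compiler with OpenMP enabled (e.g., built with -fopenmp); the array size is arbitrary. Each thread applies the same operation to a different slice of the data.

#include <stdio.h>

int main(void) {
    static double x[1000000];

    /* The data, not the tasks, are distributed: every thread runs the same
       loop body on its own range of indices. */
    #pragma omp parallel for
    for (int i = 0; i < 1000000; i++)
        x[i] = 2.0 * i;

    printf("x[999999] = %f\n", x[999999]);
    return 0;
}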
Distributed Computing
A field of computer science that studies distributed systems. A distributed system is a model in which components located on networked computers communicate and coordinate their actions by passing messages.
OpenMP (Open Multi-Processing)
An application programming interface (API) that supports multi-platform shared-memory multiprocessing programming.
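A short C example of the API, again assuming OpenMP support at compile time: one pragma parallelizes the loop over the available threads, and the reduction clause combines each thread's partial sum safely.

#include <omp.h>
#include <stdio.h>

int main(void) {
    double sum = 0.0;

    /* Each thread accumulates a private partial sum; OpenMP adds them together. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 1000000; i++)
        sum += 1.0 / i;

    printf("harmonic sum = %f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}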
MPI (Message Passing Interface)
A standardized and portable message-passing system designed to function on parallel computing architectures.
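A minimal MPI program in C, assuming an implementation such as Open MPI or MPICH (compiled with mpicc and launched with mpirun): each process discovers its rank and the total number of processes in the communicator.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}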
Multithreading
A technique in which a single program is divided into multiple threads of execution that share the same address space and can run concurrently, for example on different cores of the same processor.
GPU Computing
The use of a Graphics Processing Unit (GPU) to perform general-purpose scientific and engineering computing.
Scalability
The capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth.
Lock
A synchronization mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution.
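Returning to the illustrative counter from the Race Condition card: guarding the shared variable with a pthread mutex (one common kind of lock) serializes the updates and makes the result deterministic.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);    /* only one thread may enter at a time */
        counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (now always 200000)\n", counter);
    return 0;
}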
Parallel Computing
A type of computation in which many calculations or processes are carried out simultaneously.
FLOPS
An acronym that stands for floating-point operations per second, which is a measure of computer performance.
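A purely illustrative estimate with made-up hardware numbers: peak FLOPS is roughly cores × clock frequency × floating-point operations per core per cycle, so a hypothetical 8-core CPU at 3 GHz retiring 16 double-precision operations per core per cycle peaks at 8 × 3×10^9 × 16 = 3.84×10^11 FLOPS, i.e. about 384 GFLOPS.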