Basic Parallel Computing Terminology

30 Flashcards

Concurrency

A property of systems in which several computations are executing simultaneously and potentially interacting with each other.

Distributed Memory

A system where each processor has its own private memory and computation is performed asynchronously across different nodes.

Supercomputer

A computer with a high level of performance as compared to a general-purpose computer. The performance is typically measured in FLOPS (Floating Point Operations Per Second).

Task Parallelism

A form of parallelization where the focus is on distributing tasks—conceptually separate pieces of work—across processors.

Semaphore

A synchronization primitive that can be used to control access to a common resource in a concurrent system.
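As a minimal sketch (thread counts and names are illustrative), a counting semaphore initialized to 2 lets at most two threads use a shared resource at once:

```python
import threading
import time

sem = threading.Semaphore(2)   # at most 2 threads may hold it
active = []                    # threads currently "using" the resource
max_seen = 0                   # highest simultaneous usage observed
lock = threading.Lock()        # protects the bookkeeping above

def worker(name):
    global max_seen
    with sem:                  # blocks while 2 threads already hold it
        with lock:
            active.append(name)
            max_seen = max(max_seen, len(active))
        time.sleep(0.05)       # simulate work on the shared resource
        with lock:
            active.remove(name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(max_seen)  # never exceeds 2
```

Five threads contend, but the semaphore admits only two at a time; a semaphore with count 1 behaves like a lock.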

Hyper-threading

Intel's proprietary simultaneous multithreading implementation used to improve parallelization of computations performed on x86 microprocessors.

Shared Memory

A memory accessible by all the processors in a multi-processor system.

Load Balancing

The process of distributing a set of tasks over a set of resources, with the goal of making their overall processing more efficient.
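One common strategy can be sketched as greedy "least-loaded" assignment: always give the next (largest) task to the worker with the smallest total work so far. This is a simplified illustration, not a production scheduler:

```python
import heapq

def balance(task_costs, n_workers):
    """Assign each task to the currently least-loaded worker."""
    heap = [(0, w) for w in range(n_workers)]      # (load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for cost in sorted(task_costs, reverse=True):  # largest tasks first
        load, w = heapq.heappop(heap)              # least-loaded worker
        assignment[w].append(cost)
        heapq.heappush(heap, (load + cost, w))
    return assignment

loads = balance([5, 3, 8, 2, 7, 4], n_workers=3)
print({w: sum(ts) for w, ts in loads.items()})     # near-equal totals
```

Sorting tasks largest-first is the classic LPT (longest processing time) heuristic; it keeps the worker totals close to balanced.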

Gustafson's Law

A counterpoint to Amdahl's Law that treats problem size as scalable: as the problem grows, the parallelizable portion of the work grows with it, so speedup can continue to scale with the number of processors.
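The law is commonly written as S(N) = N − α(N − 1), where α is the serial fraction of the workload; a small helper makes the scaling visible:

```python
def gustafson_speedup(n_procs, serial_fraction):
    """Scaled speedup S(N) = N - alpha * (N - 1),
    where alpha is the serial fraction of the workload."""
    return n_procs - serial_fraction * (n_procs - 1)

# With a 5% serial fraction, 64 processors still give ~61x scaled speedup.
print(gustafson_speedup(64, 0.05))
```

Unlike Amdahl's fixed-size analysis, the scaled speedup keeps growing nearly linearly in N when α is small.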

Synchronization

The coordination of concurrent computational processes to complete tasks in a proper sequence and without conflict.

Cluster

A set of loosely or tightly connected computers that work together so that they can be viewed as a single system.

Node

In distributed or parallel computing, a node is a single computing device within a larger system.

CUDA (Compute Unified Device Architecture)

A parallel computing platform and application programming interface (API) model created by Nvidia.

MapReduce

A programming model for processing and generating large data sets with a parallel, distributed algorithm on a cluster.
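The classic word-count example sketches the model's data flow; here the map and reduce phases run sequentially for clarity, whereas a real cluster (e.g. Hadoop) runs them in parallel across many nodes:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in a line."""
    return [(word, 1) for word in line.split()]

def reduce_phase(pairs):
    """Reduce: sum the counts for each distinct word."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

lines = ["to be or not to be", "to see or not to see"]
counts = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
print(counts["to"])  # 4
```

Because each map call depends only on its own input line, the map phase is trivially parallel; the shuffle-and-reduce step is where keys from different nodes are combined.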

Speedup

The ratio of a task's sequential execution time to its parallel execution time; a measure of how much a parallel implementation improves performance over running on a single processor.

Amdahl's Law

A formula that shows the potential speedup of a task using multiple processors relative to a single processor, considering the portion of the task that can be parallelized.
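The formula is S(N) = 1 / ((1 − p) + p/N), where p is the parallelizable fraction and N the number of processors; a short sketch shows how the serial portion caps the achievable speedup:

```python
def amdahl_speedup(n_procs, parallel_fraction):
    """S(N) = 1 / ((1 - p) + p / N),
    where p is the fraction of the task that can be parallelized."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_procs)

# With p = 0.95, 16 processors give about 9.1x ...
print(amdahl_speedup(16, 0.95))
# ... and even unlimited processors cannot exceed 1 / 0.05 = 20x.
print(amdahl_speedup(10**9, 0.95))
```

The 5% serial remainder dominates as N grows, which is exactly the limitation Gustafson's Law responds to.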

Grid Computing

A form of distributed computing in which a 'virtual supercomputer' is composed of many networked, loosely coupled computers acting in concert to perform very large tasks.

Processor

The hardware component that fetches and executes a program's instructions. In parallel computing, a system typically contains many processors (or processor cores) working simultaneously.

Race Condition

A flaw that occurs when the timing or order of events affects a program's correctness.
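A deterministic toy example of a lost update; the barrier is contrived, used only to force the unlucky interleaving that a real race produces nondeterministically:

```python
import threading

counter = 0
barrier = threading.Barrier(2)

def unsafe_increment():
    """A read-modify-write with no lock: increments can be lost."""
    global counter
    value = counter        # read the shared value
    barrier.wait()         # both threads have now read 0
    counter = value + 1    # write: the second write clobbers the first

threads = [threading.Thread(target=unsafe_increment) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 1, not the expected 2: one increment was lost
```

Two increments were performed but only one survives, because correctness here depends on the order of reads and writes.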

Deadlock

A situation in concurrent computing where two or more processes are unable to proceed because each is waiting for the other to release resources.
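Deadlock requires a circular wait, and the standard remedy is to acquire locks in a globally consistent order. A small sketch (ordering by `id` is one illustrative convention) shows several threads taking two locks without ever deadlocking:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def transfer(name):
    """Take both locks, always in the same global order."""
    first, second = sorted([lock_a, lock_b], key=id)
    with first:
        with second:
            results.append(name)   # critical section needing both locks

threads = [threading.Thread(target=transfer, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(len(results))  # 4: all threads completed, no deadlock
```

If one thread instead took lock_a then lock_b while another took lock_b then lock_a, each could end up holding one lock and waiting forever for the other.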

Data Parallelism

A form of parallel computing where the focus is on distributing data across different nodes, which operate on the data in parallel.
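A minimal sketch, using threads as stand-in workers: the same operation runs on disjoint chunks of the data, and the partial results are then combined:

```python
from concurrent.futures import ThreadPoolExecutor

def sum_squares(chunk):
    """The single operation applied identically to every partition."""
    return sum(x * x for x in chunk)

data = list(range(1, 101))
chunks = [data[i::4] for i in range(4)]   # partition across 4 workers

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(sum_squares, chunks))

total = sum(partials)
print(total)  # sum of squares 1..100 = 338350
```

The same split-apply-combine shape underlies GPU kernels, MPI domain decomposition, and MapReduce's map phase.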

Distributed Computing

A field of computer science that studies distributed systems. A distributed system is a model in which components located on networked computers communicate and coordinate their actions by passing messages.

OpenMP (Open Multi-Processing)

An application programming interface (API) that supports multi-platform shared-memory multiprocessing programming.

MPI (Message Passing Interface)

A standardized and portable message-passing system designed to function on parallel computing architectures.

Multithreading

A technique in which a single process contains multiple threads of execution that share the same address space and can run concurrently, each at a different point in the program.

GPU Computing

The use of a Graphics Processing Unit (GPU) to do general purpose scientific and engineering computing.

Scalability

The capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth.

Lock

A synchronization mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution.
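A minimal sketch: a lock serializing read-modify-write updates to a shared counter so that no increments are lost:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # only one thread in this section at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000: every increment counted
```

Without the lock this is exactly the race-condition pattern: the `counter += 1` read-modify-write could interleave and drop updates.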

Parallel Computing

A type of computation in which many calculations or processes are carried out simultaneously.

FLOPS

An acronym that stands for floating-point operations per second, which is a measure of computer performance.


© Hypatia.Tech. 2024 All rights reserved.