Basics of Neural Networks for Vision

20 flashcards

ReLU Activation Function

Stands for Rectified Linear Unit, a non-linear function applied to the output of neurons, defined as f(x) = max(0, x). It introduces non-linearity into the network, allowing for more complex functions to be modeled.
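For illustration, a minimal NumPy sketch of ReLU applied elementwise to a batch of activations:

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x), applied elementwise
    return np.maximum(0, x)

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(z))  # negatives become 0; positives pass through unchanged
```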

Weight Decay

A regularization technique that discourages large weights during training by adding a penalty term, typically proportional to the squared magnitude of the weights, to the loss function.
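A minimal sketch of L2 weight decay in NumPy; `data_loss` and the coefficient `lam` are hypothetical stand-ins for a real task loss and a tuned hyperparameter:

```python
import numpy as np

def l2_penalty(weights, lam):
    # Regularization term: (lam / 2) * sum of squared weights
    return 0.5 * lam * sum(np.sum(w ** 2) for w in weights)

weights = [np.array([[0.5, -1.2], [2.0, 0.1]])]
data_loss = 0.42                                  # hypothetical task loss
total_loss = data_loss + l2_penalty(weights, lam=1e-4)
print(total_loss)                                 # penalty grows with the squared weights
```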

Fully Connected (Dense) Layer

A layer in a neural network where each neuron is connected to every neuron in the previous layer, used to combine the features learned by earlier layers for tasks such as classification.
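A dense layer is just a matrix multiplication plus a bias; a minimal sketch with made-up sizes:

```python
import numpy as np

def dense(x, W, b):
    # Each of the 3 output neurons sees all 4 input features: y = xW + b
    return x @ W + b

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))   # one example with 4 features
W = rng.standard_normal((4, 3))   # 4 inputs fully connected to 3 neurons
b = np.zeros(3)
print(dense(x, W, b).shape)       # (1, 3)
```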

Convolutional Layer

A layer designed to process data with a known grid-like topology, such as images. It applies a convolution operation to the input, extracting features such as edges and textures that are important in computer vision.
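A minimal sketch of a single-channel "valid" convolution in NumPy (strictly a cross-correlation, as implemented in most deep learning frameworks); its output is a feature map, defined later in this set:

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image, taking a dot product at each position
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, -1.0]])        # responds to horizontal changes
print(conv2d(image, edge_kernel).shape)      # (5, 4)
```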

Transfer Learning

A research problem in machine learning that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem.

Pooling

A process that reduces the spatial dimensions of the input, which lowers the number of parameters and helps control overfitting. It summarizes features in local regions ("pools"), for example by taking the maximum or average value.
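A minimal sketch of 2x2 max pooling in NumPy, assuming non-overlapping windows:

```python
import numpy as np

def max_pool2d(x, size=2):
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]   # crop to a multiple of the pool size
    # Group into size x size pools and keep the maximum of each
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(x))   # 2x2 output: only a quarter of the values remain
```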

Cross-Entropy Loss

A loss function that measures the performance of a classification model whose output is a probability value between 0 and 1. It increases as the predicted probability diverges from the actual label.
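For a single example, the loss is simply the negative log of the probability the model assigned to the true class; a minimal sketch:

```python
import numpy as np

def cross_entropy(p_pred, y_true, eps=1e-12):
    # -log of the probability assigned to the correct class
    return -np.log(p_pred[y_true] + eps)

probs = np.array([0.7, 0.2, 0.1])       # model output for one example
print(cross_entropy(probs, y_true=0))   # small: confident and correct
print(cross_entropy(probs, y_true=2))   # large: prediction diverges from the label
```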

Weight Initialization

The process of setting the weights of a neural network to initial values before training starts. Good initialization can speed up learning and lead to a higher overall accuracy of the network.
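One common scheme (an example, not the only choice) is He initialization, which scales random weights by the fan-in so activations neither explode nor vanish in ReLU networks:

```python
import numpy as np

def he_init(fan_in, fan_out, rng):
    # He initialization: zero-mean Gaussian with variance 2 / fan_in
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)

rng = np.random.default_rng(0)
W = he_init(256, 128, rng)
print(W.std())   # close to sqrt(2 / 256), roughly 0.088
```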

Softmax Layer

A mathematical function that turns a vector of numbers into a vector of probabilities, with the sum of all the probabilities being 1. It’s used in the final layer of a neural network-based classifier.
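A minimal sketch; subtracting the maximum logit before exponentiating is a standard numerical-stability trick that does not change the result:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # shift for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
p = softmax(logits)
print(p, p.sum())   # a probability vector summing to 1
```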

Generative Adversarial Networks (GANs)

A class of machine learning frameworks where two neural networks contest with each other in a game. A generator network creates outputs, while a discriminator network evaluates them.
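As a rough sketch of the "game", here are the standard discriminator loss and the common non-saturating generator loss, computed from hypothetical discriminator scores:

```python
import numpy as np

# Hypothetical discriminator outputs in (0, 1) for one batch
d_real = np.array([0.9, 0.8, 0.7])   # D(x) on real samples
d_fake = np.array([0.2, 0.3, 0.1])   # D(G(z)) on generated samples
eps = 1e-12

# Discriminator: score real samples high and generated samples low
d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
# Generator (non-saturating form): make the discriminator score fakes high
g_loss = -np.mean(np.log(d_fake + eps))
print(d_loss, g_loss)
```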

Attention Mechanisms

A component of a neural model that allows the network to focus on different parts of the input by weighting them differently, akin to how human attention concentrates on certain parts of an input.
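A minimal sketch of one widely used form, scaled dot-product attention, where the weights come from a softmax over query-key similarities:

```python
import numpy as np

def attention(Q, K, V):
    # weights = softmax(Q K^T / sqrt(d)); each query attends over all keys
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)   # each query's weights sum to 1
    return w @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)   # (4, 8)
```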

Optimizer

The algorithm or method used to change the attributes of the neural network, such as the weights and the learning rate, in order to reduce the loss. The optimizer determines how the network's parameters are updated during training.
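A minimal sketch of one classic optimizer update, SGD with momentum; the gradient here is a made-up placeholder:

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    # Accumulate a running direction, then move the weights along it
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w = np.array([1.0, -2.0])
v = np.zeros_like(w)
grad = np.array([0.5, -0.5])        # placeholder gradient of the loss
w, v = sgd_momentum_step(w, grad, v)
print(w)
```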

Dropout

A regularization technique where randomly selected neurons are ignored during training. This prevents units from co-adapting too much and forces the network to learn more robust features.
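A minimal sketch of "inverted" dropout, which rescales the surviving activations so their expected value is unchanged and no adjustment is needed at inference:

```python
import numpy as np

def dropout(x, p_drop, rng, training=True):
    if not training:
        return x                          # nothing is dropped at inference
    mask = rng.random(x.shape) >= p_drop  # randomly keep each neuron
    return x * mask / (1.0 - p_drop)      # rescale the survivors

rng = np.random.default_rng(0)
x = np.ones(10)
print(dropout(x, p_drop=0.5, rng=rng))    # roughly half zeros, the rest scaled to 2
```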

Batch Normalization

A technique to improve the stability and performance of artificial neural networks, applied by normalizing each layer's inputs over the current mini-batch to have zero mean and unit variance.
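A minimal sketch of the normalization step at training time (the learnable scale gamma and shift beta are left at their defaults here):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the batch, then scale and shift
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 50.0], [2.0, 60.0], [3.0, 70.0]])  # batch of 3, 2 features
out = batch_norm(x)
print(out.mean(axis=0), out.std(axis=0))  # ~0 mean and ~1 std per feature
```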

Data Augmentation

A strategy to increase the diversity of the data available for training models without actually collecting new data. It involves transformations such as rotations, translations, and flips.
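A minimal sketch using a random array as a stand-in for a grayscale image:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((4, 4))                    # stand-in for a grayscale image

flipped = np.fliplr(image)                    # horizontal flip
rotated = np.rot90(image)                     # 90-degree rotation
shifted = np.roll(image, shift=1, axis=1)     # crude one-pixel translation
augmented = [image, flipped, rotated, shifted]
print(len(augmented), "training variants from one original image")
```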

Feature Maps

The output of one filter applied to the previous layer. A given filter is convolved across the width and height of the input volume, computing dot products between the entries of the filter and the input at each position.

Overfitting

Occurs when a model learns the detail and noise in the training data to the extent that it negatively impacts its performance on new data, reflecting over-optimization on the training set.

Backpropagation

An algorithm for training neural networks in which gradients are calculated via the chain rule and used to update the weights, allowing the network to learn complex tasks by minimizing the loss function.
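A minimal sketch for a two-layer network with a squared-error loss; each gradient below is one application of the chain rule:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 3))        # one input example
y = np.array([[1.0]])                  # its target
W1 = rng.standard_normal((3, 4))
W2 = rng.standard_normal((4, 1))

# Forward pass
h = np.maximum(0, x @ W1)              # hidden layer with ReLU
y_hat = h @ W2
loss = 0.5 * np.sum((y_hat - y) ** 2)

# Backward pass: chain rule, one layer at a time
d_yhat = y_hat - y                     # dL/dy_hat
dW2 = h.T @ d_yhat                     # dL/dW2
d_h = (d_yhat @ W2.T) * (h > 0)        # gradient through the ReLU
dW1 = x.T @ d_h                        # dL/dW1

# Gradient step to reduce the loss
W1 -= 0.1 * dW1
W2 -= 0.1 * dW2
```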

Stochastic Gradient Descent

A version of gradient descent in which updates are made for each training example, which allows faster convergence but introduces more noise into the updates.
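A minimal sketch fitting a linear model with one (noisy) update per training example:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
y = X @ np.array([2.0, -3.0]) + 0.1 * rng.standard_normal(100)

w = np.zeros(2)
lr = 0.05
for epoch in range(5):
    for i in rng.permutation(len(X)):      # shuffled, one example at a time
        grad = (X[i] @ w - y[i]) * X[i]    # gradient of the squared error
        w -= lr * grad                     # small, noisy step
print(w)                                   # approaches [2, -3]
```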

Early Stopping

A form of regularization used to avoid overfitting when training a learner with an iterative method, such as gradient descent. Training is stopped as soon as the performance on a validation set gets worse.
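A schematic patience loop; the validation_loss function below is a synthetic stand-in that falls and then rises, mimicking the onset of overfitting:

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_loss(epoch):
    # Synthetic stand-in: improves until epoch 10, then worsens
    return (epoch - 10) ** 2 / 100 + 0.01 * rng.random()

best_loss, patience, bad_epochs = float("inf"), 3, 0
for epoch in range(100):
    val_loss = validation_loss(epoch)
    if val_loss < best_loss:
        best_loss, bad_epochs = val_loss, 0   # improvement: reset the counter
    else:
        bad_epochs += 1
        if bad_epochs >= patience:            # validation kept getting worse
            print("stopping early at epoch", epoch)
            break
```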
