Deep Learning Layers
10 Flashcards
Normalization Layer
Normalizes the input to have zero mean and unit variance for stable training.
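A minimal sketch of this idea in plain Python, normalizing one feature vector to zero mean and unit variance (illustrative only; real normalization layers also learn a scale and shift, and the function name and `eps` value here are assumptions):

```python
def normalize(x, eps=1e-5):
    # Center to zero mean, then scale to unit variance.
    # eps guards against division by zero (an assumed small constant).
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / (var + eps) ** 0.5 for v in x]
```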
Residual Layer
Adds the original input to the layer's output (a skip connection), easing gradient flow and enabling much deeper networks.
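A one-line sketch of the connection, assuming `f` stands in for any inner layer operating on a list of activations (the function name is illustrative):

```python
def residual(f, x):
    # Output is the inner layer's result plus the untouched input,
    # so the identity path is always available to gradients.
    return [xi + fi for xi, fi in zip(x, f(x))]
```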
Input Layer
Receives the initial data for processing.
Pooling Layer
Reduces spatial dimensions and computational complexity.
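A sketch of 1-D max pooling in plain Python (window size and stride are assumed equal here for simplicity):

```python
def max_pool(x, size=2):
    # Slide a non-overlapping window over the input and keep only
    # the maximum in each window, halving the length for size=2.
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]
```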
Convolutional Layer
Applies various filters to create feature maps highlighting important spatial features.
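A sketch of a single 1-D convolution with "valid" padding (the kernel values below are illustrative, not learned):

```python
def conv1d(x, kernel):
    # Slide the filter over the input; each output is the dot product
    # of the filter with one window, producing a feature map.
    k = len(kernel)
    return [sum(kernel[j] * x[i + j] for j in range(k))
            for i in range(len(x) - k + 1)]
```

With a difference kernel like `[1, -1]`, the feature map highlights changes between neighboring values, which is the 1-D analogue of an edge detector.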
Dropout Layer
Randomly sets a fraction of input units to 0 at each update during training to prevent overfitting.
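A sketch of inverted dropout in plain Python (the `rng` hook is an assumption added so the behavior is testable; real layers draw fresh randomness each update):

```python
import random

def dropout(x, p=0.5, training=True, rng=random.random):
    # At inference time the layer is a no-op.
    if not training:
        return list(x)
    # During training, zero each unit with probability p and scale
    # survivors by 1/(1-p) so the expected activation is unchanged.
    return [0.0 if rng() < p else v / (1 - p) for v in x]
```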
Attention Layer
Weights input based on context and relevance, allowing the network to focus on important parts of the input.
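A sketch of scaled dot-product attention for a single query (the simplest form of this mechanism; learned query/key/value projections are omitted):

```python
import math

def attention(q, keys, values):
    # Score each key by its dot product with the query, scaled by
    # sqrt(dimension) to keep the softmax well-behaved.
    scale = math.sqrt(len(q))
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in keys]
    # Softmax the scores into attention weights (max-subtracted for
    # numerical stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # The output is the weight-averaged mix of the values.
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]
```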
Embedding Layer
Transforms discrete input categories into dense vectors of fixed size.
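A sketch of an embedding layer as a plain lookup table (the random initialization and helper names here are assumptions; in practice the table entries are learned):

```python
import random

def make_embedding(vocab_size, dim, seed=0):
    # One fixed-size dense vector per category id; randomly
    # initialized here in place of learned weights.
    rng = random.Random(seed)
    table = [[rng.uniform(-1, 1) for _ in range(dim)]
             for _ in range(vocab_size)]
    def embed(ids):
        # Embedding is just row lookup: same id, same vector.
        return [table[i] for i in ids]
    return embed
```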
Dense (Fully Connected) Layer
Applies an affine transformation, y = Wx + b, with learned weights and biases connecting every input to every output.
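The same transformation written out in plain Python, with the weight matrix `W` and bias `b` passed in rather than learned:

```python
def dense(W, b, x):
    # y_i = sum_j W[i][j] * x[j] + b[i]: every output unit sees
    # every input unit through its own row of weights.
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]
```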
Recurrent Layer
Processes sequences of data, retaining information across time steps through internal state.
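A sketch of a minimal (Elman-style) recurrent unit on a scalar sequence, showing how one hidden value carries information across time steps (the weight values are assumptions, not learned):

```python
import math

def rnn(xs, w_x=0.5, w_h=0.5, b=0.0):
    # The hidden state h is updated from the current input and the
    # previous state, so each output depends on the whole history.
    h = 0.0
    states = []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)
        states.append(h)
    return states
```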
© 2024 Hypatia.Tech. All rights reserved.