Deep Learning Layers
Input Layer
Receives the raw input data and defines the shape the rest of the network expects.
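For illustration, a minimal PyTorch sketch (the batch size of 32 and the 20 input features are arbitrary assumptions); PyTorch has no separate input-layer object, so the input is simply a tensor whose shape the first layer must match:

```python
import torch
import torch.nn as nn

# Hypothetical input: a batch of 32 samples with 20 features each.
x = torch.randn(32, 20)

# The first layer must agree with the input shape: 20 features in.
first_layer = nn.Linear(20, 64)
h = first_layer(x)
print(h.shape)  # torch.Size([32, 64])
```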
Dense (Fully Connected) Layer
Applies an affine transformation (learned weights plus a bias) in which every output unit is connected to every input, typically followed by a nonlinear activation.
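A minimal PyTorch sketch of a dense layer; the 20-in/64-out sizes are arbitrary assumptions:

```python
import torch
import torch.nn as nn

x = torch.randn(32, 20)       # batch of 32 samples, 20 input features (assumed sizes)
dense = nn.Linear(20, 64)     # y = x @ W.T + b, with learned W (64x20) and b (64)
y = torch.relu(dense(x))      # a nonlinearity usually follows the affine map
print(y.shape)                # torch.Size([32, 64])
```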
Convolutional Layer
Slides learned filters (kernels) across the input to produce feature maps that highlight local spatial patterns such as edges and textures.
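A small PyTorch sketch of a 2D convolution; the image size, channel counts, and kernel size below are assumptions for illustration:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 3, 32, 32)                      # 8 RGB images of size 32x32 (assumed)
conv = nn.Conv2d(in_channels=3, out_channels=16,
                 kernel_size=3, padding=1)         # 16 learned 3x3 filters
feature_maps = conv(x)
print(feature_maps.shape)                          # torch.Size([8, 16, 32, 32])
```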
Pooling Layer
Downsamples feature maps by summarizing local regions (e.g., taking the maximum or average of each window), reducing spatial dimensions and computational cost.
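A PyTorch sketch of max pooling, continuing the assumed convolution output above:

```python
import torch
import torch.nn as nn

feature_maps = torch.randn(8, 16, 32, 32)    # assumed conv output
pool = nn.MaxPool2d(kernel_size=2)           # keep the max of each 2x2 window
pooled = pool(feature_maps)
print(pooled.shape)                          # torch.Size([8, 16, 16, 16])
```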
Recurrent Layer
Processes sequences of data, retaining information across time steps through internal state.
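A PyTorch sketch using an LSTM (one common recurrent layer); the sequence length and feature sizes are assumptions:

```python
import torch
import torch.nn as nn

x = torch.randn(4, 10, 20)                   # 4 sequences, 10 time steps, 20 features (assumed)
rnn = nn.LSTM(input_size=20, hidden_size=32, batch_first=True)
outputs, (h_n, c_n) = rnn(x)                 # hidden state is carried across time steps
print(outputs.shape)                         # torch.Size([4, 10, 32]) - one output per step
print(h_n.shape)                             # torch.Size([1, 4, 32])  - final hidden state
```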
Normalization Layer
Rescales activations to approximately zero mean and unit variance (e.g., batch or layer normalization), usually followed by a learned scale and shift, which stabilizes and speeds up training.
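A PyTorch sketch using layer normalization as one example of a normalization layer; the 64-dimensional activations are an assumption:

```python
import torch
import torch.nn as nn

x = torch.randn(32, 64)                 # a batch of 64-dim activations (assumed sizes)
norm = nn.LayerNorm(64)                 # normalize each sample, then apply learned scale/shift
y = norm(x)
print(y.mean().item(), y.std().item())  # roughly 0 and 1 with the default learned parameters
```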
Dropout Layer
Randomly sets a fraction of input units to 0 at each update during training to prevent overfitting.
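A PyTorch sketch of dropout; the rate of 0.5 is an arbitrary assumption. Note that the surviving units are rescaled during training so that dropout can be disabled at evaluation time:

```python
import torch
import torch.nn as nn

x = torch.ones(1, 10)
drop = nn.Dropout(p=0.5)     # each unit is zeroed with probability 0.5 during training

drop.train()
print(drop(x))               # roughly half the units are 0; survivors are scaled by 1/(1-p)

drop.eval()
print(drop(x))               # at evaluation time dropout is a no-op
```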
Attention Layer
Weights input based on context and relevance, allowing the network to focus on important parts of the input.
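A PyTorch sketch of self-attention using the built-in multi-head attention layer; the embedding size, head count, and sequence length are assumptions:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 10, 64)   # 2 sequences, 10 tokens, 64-dim embeddings (assumed sizes)
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

# Self-attention: each position is a weighted combination of every position.
out, weights = attn(query=x, key=x, value=x)
print(out.shape)             # torch.Size([2, 10, 64])
print(weights.shape)         # torch.Size([2, 10, 10]) - attention weights averaged over heads
```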
Residual Layer
Adds the layer's original input to its output (a skip connection), easing gradient flow and enabling much deeper networks to be trained.
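A minimal PyTorch sketch of a residual block; the block body and the 64-dimensional width are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Adds the block's input back onto its output (a skip connection)."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.body(x)   # gradients can also flow through the identity path

x = torch.randn(32, 64)            # assumed sizes
print(ResidualBlock(64)(x).shape)  # torch.Size([32, 64])
```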
Embedding Layer
Maps discrete input categories (such as token IDs) to learned dense vectors of fixed size.
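A PyTorch sketch of an embedding layer; the vocabulary size, embedding dimension, and token IDs are made-up values for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary of 10,000 tokens mapped to 64-dimensional vectors.
embed = nn.Embedding(num_embeddings=10_000, embedding_dim=64)

token_ids = torch.tensor([[3, 17, 942, 5]])   # one sequence of 4 token IDs (made-up values)
vectors = embed(token_ids)
print(vectors.shape)                          # torch.Size([1, 4, 64])
```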