Generative Models
Normalizing Flows
Normalizing flows apply a sequence of invertible transformations to a simple base distribution, turning it into a complex one while keeping the density tractable through the change-of-variables formula. These models are useful for density estimation and data generation, with applications in anomaly detection and generative tasks that require detailed probability distributions.
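A minimal sketch of the idea, assuming a single fixed affine transformation with hand-picked scale and shift parameters rather than a trained flow:

```python
# Minimal sketch of one affine flow layer (hypothetical parameters s, t),
# illustrating the change-of-variables density, not a full trainable flow.
import numpy as np
from scipy.stats import norm

s, t = np.array([0.5, -0.3]), np.array([1.0, 2.0])   # assumed log-scale and shift

def forward(z):
    """Map a base sample z to data space x; invertible by construction."""
    return z * np.exp(s) + t

def log_prob(x):
    """Density of x under the flow via the change-of-variables formula."""
    z = (x - t) * np.exp(-s)                      # inverse transform
    log_det = -np.sum(s)                          # log |det dz/dx|
    return norm.logpdf(z).sum() + log_det         # base log-density + log-det

x = forward(np.random.randn(2))
print(log_prob(x))
```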
Explicit Generative Models
Explicit generative models directly learn the probability distribution of the data, often through maximum likelihood estimation. Examples include Naive Bayes and Gaussian Mixture Models (GMMs); they are applied in clustering, density estimation, and outlier detection.
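As a brief sketch, a Gaussian Mixture Model fit with scikit-learn provides an explicit log-density that can be evaluated, sampled from, and used for outlier detection; the two-cluster data here is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.vstack([np.random.randn(200, 2), np.random.randn(200, 2) + 4])  # two clusters

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
log_density = gmm.score_samples(X)        # explicit log p(x) for each point
samples, _ = gmm.sample(50)               # generate new data from the learned density
outliers = X[log_density < np.percentile(log_density, 1)]  # simple outlier rule
```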
Generative Adversarial Networks (GANs)
GANs consist of two neural networks, the generator and the discriminator, which are trained simultaneously. The generator produces samples from noise, while the discriminator evaluates them against real data, effectively teaching the generator to produce realistic data. Applications include image generation, style transfer, and data augmentation.
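A compact PyTorch sketch of the adversarial training loop on toy one-dimensional data; the tiny architectures, noise dimension, and "real" data distribution are illustrative assumptions:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator: sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # "real" data: N(2, 0.5), illustrative
    fake = G(torch.randn(64, 8))

    # Discriminator: push real samples toward 1 and generated samples toward 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator into predicting 1 for fakes
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```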
Autoregressive Models
Autoregressive models, such as PixelRNN and PixelCNN, generate high-dimensional data by modeling the conditional probability of each element (for example, a pixel) given the elements generated before it. They are used for tasks such as text generation and image completion.
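A schematic of autoregressive sampling, where each element is drawn from p(x_i | x_{<i}) in sequence; the `conditional` function below is a hypothetical stand-in for a learned network such as PixelCNN:

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional(prefix):
    """Hypothetical learned conditional: P(next pixel = 1 | previous pixels)."""
    return 0.9 if (len(prefix) > 0 and prefix[-1] == 1) else 0.2

pixels = []
for i in range(16):                       # generate a 16-pixel row, one element at a time
    p = conditional(pixels)
    pixels.append(int(rng.random() < p))  # sample x_i ~ Bernoulli(p)

print(pixels)
```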
Sparse Coding
Sparse coding algorithms seek to learn a set of overcomplete bases to represent data vectors as sparse linear combinations of these bases. This approach is beneficial for feature extraction, signal processing, and compression.
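A short scikit-learn sketch that learns an overcomplete dictionary and encodes signals as sparse combinations of its atoms; the random data and chosen sizes are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

X = np.random.randn(100, 20)                    # 100 signals of dimension 20

dico = DictionaryLearning(
    n_components=40,                            # overcomplete: more atoms than dimensions
    transform_algorithm="lasso_lars",           # L1-penalized sparse codes
    transform_alpha=0.5,
    random_state=0,
)
codes = dico.fit(X).transform(X)                # sparse coefficients, shape (100, 40)
reconstruction = codes @ dico.components_       # approximate the original signals
```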
Energy-Based Models (EBMs)
EBMs learn an energy function that assigns low energy to high-probability (real or correct) data points and high energy to the rest. The goal is to sample or infer configurations that minimize energy. Applications include structured prediction, anomaly detection, and generative tasks.
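One common way to sample low-energy configurations is Langevin dynamics; in this sketch a simple quadratic energy stands in for a learned energy network, and the step size and iteration count are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    return 0.5 * np.sum((x - 3.0) ** 2)          # low energy near x = 3

def grad_energy(x):
    return x - 3.0

x = rng.standard_normal(2)                       # start from noise
step = 0.1
for _ in range(500):
    # Langevin update: move downhill on the energy plus Gaussian noise
    x = x - step * grad_energy(x) + np.sqrt(2 * step) * rng.standard_normal(2)

print(x)  # samples concentrate around the low-energy region near (3, 3)
```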
Diffusion Models
Diffusion models, such as Denoising Diffusion Probabilistic Models (DDPMs), learn to reverse a forward diffusion process that gradually corrupts data into noise; the learned reverse process then converts noise back into a sample from the target distribution. They are used for generating high-fidelity images and for audio synthesis.
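A sketch of the DDPM forward (noising) process and the usual noise-prediction objective; the linear schedule, the toy image batch, and the commented-out `model` call are illustrative assumptions:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                 # assumed linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)         # cumulative signal retention

def noisy_sample(x0, t, eps):
    """Closed-form forward process q(x_t | x_0)."""
    a = alpha_bar[t]
    return a.sqrt() * x0 + (1 - a).sqrt() * eps

x0 = torch.randn(16, 3, 32, 32)                       # a toy batch of "images"
t = torch.randint(0, T, (1,)).item()                  # random diffusion step
eps = torch.randn_like(x0)
x_t = noisy_sample(x0, t, eps)

# Training target: a network should recover the injected noise, e.g.
# loss = ((model(x_t, t) - eps) ** 2).mean()          # hypothetical model
```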
Restricted Boltzmann Machines (RBMs)
RBMs are two-layer stochastic neural networks consisting of a visible and a hidden layer with no connections within a layer. They learn a probability distribution over the input space and can sample from that distribution after being trained. RBMs can be stacked to form deeper models called Deep Belief Networks. Applications include dimensionality reduction, classification, and feature learning.
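A brief feature-learning sketch with scikit-learn's BernoulliRBM; the random binary data and the hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

X = (np.random.rand(200, 64) > 0.5).astype(float)     # binary visible units

rbm = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(X)

hidden = rbm.transform(X)                              # P(hidden unit on | visible data)
resampled = rbm.gibbs(X[:5])                           # one Gibbs step: visible -> hidden -> visible
```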
Variational Autoencoders (VAEs)
VAEs are neural networks that encode input data into a probabilistic latent (hidden) representation from which the data can be reconstructed. A VAE assumes the data is generated by a random process involving latent variables and learns to approximate that process. Applications include image denoising, data compression, and generative design.
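A minimal PyTorch sketch showing the encoder, the reparameterization trick, and the ELBO objective; the layer sizes and the flattened 784-dimensional input are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 128)
        self.mu, self.logvar = nn.Linear(128, z_dim), nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(z), mu, logvar

def elbo_loss(x, recon, mu, logvar):
    recon_term = F.binary_cross_entropy_with_logits(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
    return recon_term + kl

model = VAE()
x = torch.rand(32, 784)                                # toy batch of inputs in [0, 1]
recon, mu, logvar = model(x)
loss = elbo_loss(x, recon, mu, logvar)
loss.backward()
```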
Latent Dirichlet Allocation (LDA)
LDA is a generative statistical model that explains sets of observations through unobserved groups that explain why some parts of the data are similar. For text data, LDA can uncover the topic structure. It's widely used in natural language processing for topic discovery and document classification.
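A short topic-discovery sketch with scikit-learn's LatentDirichletAllocation; the four-document corpus is an illustrative assumption:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock markets rose sharply today",
    "investors traded shares on the market",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)                # bag-of-words counts

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Show the top words for each discovered topic
words = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-3:]]
    print(f"topic {k}:", top)
```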