Generative Models

10 flashcards

Normalizing Flows

Normalizing flows are a series of invertible transformations applied to simple probability distributions to transform them into complex ones. These models are useful for density estimation and data generation, with applications in anomaly detection and generative tasks that require detailed probability distributions.
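
A minimal Python sketch of the underlying change-of-variables idea, using a single hand-picked invertible affine transform in place of a learned stack of layers:

```python
import numpy as np

# Base distribution: standard normal. Transform: invertible affine map z -> x = a*z + b.
a, b = 2.0, 1.0          # illustrative fixed parameters; in a real flow these are learned
rng = np.random.default_rng(0)

# Sampling: draw z from the simple base distribution, push it through the transform.
z = rng.standard_normal(10_000)
x = a * z + b

# Density estimation via the change-of-variables formula:
# log p_x(x) = log p_z(f^{-1}(x)) - log |df/dz|, and here |df/dz| = |a|.
def log_prob(x):
    z = (x - b) / a                                  # invert the transform
    log_pz = -0.5 * (z**2 + np.log(2 * np.pi))       # standard normal log-density
    return log_pz - np.log(abs(a))

print("mean, std of samples:", x.mean(), x.std())    # roughly 1.0 and 2.0
print("log p(1.0) =", log_prob(1.0))
```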

Explicit Generative Models

Explicit generative models directly learn the probability distribution of data, often through maximum likelihood estimation. They include models like Naive Bayes and Gaussian Mixture Models (GMMs). They are applied in clustering, density estimation, and outlier detection.
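
A minimal sketch using scikit-learn's GaussianMixture (an assumed dependency) to fit an explicit density to toy 1-D data, query log-likelihoods, and sample new points:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy 1-D data drawn from two clusters (illustrative).
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 300)]).reshape(-1, 1)

# Fit an explicit density model by maximum likelihood (EM under the hood).
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

print("component means:", gmm.means_.ravel())                 # roughly [-2, 3]
print("log-likelihood of 0.0:", gmm.score_samples([[0.0]]))   # explicit density query
samples, _ = gmm.sample(5)                                     # generate new data points
print("samples:", samples.ravel())
```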

Generative Adversarial Networks (GANs)

GANs consist of two neural networks, the generator and the discriminator, which are trained simultaneously. The generator produces samples from noise, while the discriminator evaluates them against real data, effectively teaching the generator to produce realistic data. Applications include image generation, style transfer, and data augmentation.
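
A minimal PyTorch sketch of the two-network training loop, fitting a toy 1-D Gaussian rather than images; the layer sizes and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = lambda n: torch.randn(n, 1) * 1.5 + 4.0   # "real" distribution: N(4, 1.5^2)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real, noise = real_data(64), torch.randn(64, 8)
    fake = G(noise).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push D(G(noise)) toward 1, i.e. fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print("generated mean/std:", samples.mean().item(), samples.std().item())  # near 4.0 and 1.5
```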

Autoregressive Models

Autoregressive models, such as PixelRNN and PixelCNN, generate high-dimensional data by modeling the conditional probability of each element, such as a pixel, given the elements that came before it. They are used for tasks such as text generation and image completion.
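
A minimal sketch of the autoregressive factorization, using a first-order character model estimated by counting rather than a deep network like PixelCNN:

```python
import numpy as np

# First-order autoregressive model over characters: p(x_t | x_{t-1}) estimated by counting.
text = "abababababbaabababab"
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

counts = np.ones((len(chars), len(chars)))            # add-one (Laplace) smoothing
for prev, nxt in zip(text, text[1:]):
    counts[idx[prev], idx[nxt]] += 1
cond = counts / counts.sum(axis=1, keepdims=True)     # p(next | previous)

# Generation: produce one element at a time, each conditioned on what came before.
rng = np.random.default_rng(0)
out = ["a"]
for _ in range(15):
    probs = cond[idx[out[-1]]]
    out.append(chars[rng.choice(len(chars), p=probs)])
print("".join(out))
```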

Sparse Coding

Sparse coding algorithms seek to learn a set of overcomplete bases to represent data vectors as sparse linear combinations of these bases. This approach is beneficial for feature extraction, signal processing, and compression.
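
A minimal NumPy sketch that recovers a sparse code for one signal with ISTA (proximal gradient descent) against a fixed random overcomplete dictionary; full sparse coding would also learn the dictionary itself:

```python
import numpy as np

# Sparse coding: represent a signal x as a sparse combination of dictionary atoms,
# solving min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))                      # overcomplete dictionary: 50 atoms, 20-D signals
D /= np.linalg.norm(D, axis=0)                         # unit-norm atoms
true_code = np.zeros(50); true_code[[3, 17, 42]] = [1.5, -2.0, 0.8]
x = D @ true_code                                      # signal built from 3 atoms

lam, step = 0.05, 1.0 / np.linalg.norm(D, 2) ** 2      # step size from the spectral norm of D
a = np.zeros(50)
for _ in range(500):
    grad = D.T @ (D @ a - x)                           # gradient of the quadratic term
    a = a - step * grad
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft-threshold (L1 proximal step)

print("nonzero coefficients:", np.flatnonzero(np.abs(a) > 1e-3))  # ideally close to {3, 17, 42}
print("reconstruction error:", np.linalg.norm(D @ a - x))
```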

Energy-Based Models (EBMs)

EBMs learn an energy function that assigns low energy to high-probability (real or correct) data points and high energy to the rest. The goal is to sample or infer configurations that minimize energy. Applications include structured prediction, anomaly detection, and generative tasks.
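
A minimal NumPy sketch with a hand-picked quadratic energy function and Langevin-dynamics sampling; in a learned EBM the energy would be a trained neural network:

```python
import numpy as np

# An energy function E(x) defines p(x) proportional to exp(-E(x)); Langevin dynamics
# samples by repeatedly following the negative energy gradient plus injected noise.
def energy(x):            # low energy near x = 2, high elsewhere (hand-picked, not learned)
    return 0.5 * (x - 2.0) ** 2

def grad_energy(x):
    return x - 2.0

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)            # start from random configurations
step = 0.01
for _ in range(2000):                    # Langevin update: x <- x - step*grad E + sqrt(2*step)*noise
    x = x - step * grad_energy(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)

# For this quadratic energy, p(x) is N(2, 1), so the samples should match that.
print("sample mean/std:", x.mean(), x.std())   # roughly 2.0 and 1.0
```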

Diffusion Models

Diffusion models, such as Denoising Diffusion Probabilistic Models (DDPMs), are trained to reverse a diffusion process that gradually corrupts data from a complex distribution into simple noise. Running the learned reverse process gradually converts noise into a sample from the target distribution. They are used for generating high-fidelity images and for audio synthesis.
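
A minimal NumPy sketch of the forward (noising) process and the closed-form marginal used to train DDPMs; the learned denoising network and the reverse sampling loop are omitted:

```python
import numpy as np

# Forward process: data is gradually corrupted with Gaussian noise under a schedule.
# The closed-form marginal x_t = sqrt(abar_t)*x_0 + sqrt(1 - abar_t)*eps is what a
# denoising network trains on: given (x_t, t), predict the noise eps.
rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)          # noise schedule
alphas = 1.0 - betas
abar = np.cumprod(alphas)                    # cumulative product \bar{alpha}_t

x0 = rng.choice([-2.0, 2.0], size=5000)     # toy bimodal "data" distribution

for t in [0, 99, 499, 999]:
    eps = rng.standard_normal(x0.shape)              # the training target at step t
    xt = np.sqrt(abar[t]) * x0 + np.sqrt(1 - abar[t]) * eps
    print(f"t={t:4d}  signal weight={np.sqrt(abar[t]):.3f}  sample std={xt.std():.3f}")
# As t grows the samples approach pure N(0, 1) noise; the model learns to run this backwards.
```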

Restricted Boltzmann Machines (RBMs)

RBMs are two-layer stochastic neural networks. They learn a probability distribution over the input space and, once trained, can sample from that distribution. RBMs can be stacked to form deeper models called Deep Belief Networks. Applications include dimensionality reduction, classification, and feature learning.
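
A minimal NumPy sketch of an RBM with binary visible and hidden units, trained with one step of contrastive divergence (CD-1) on toy binary vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data: 6-bit vectors where either the first three or the last three bits tend to be on.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [1, 1, 0, 0, 0, 0],
                 [1, 1, 1, 0, 0, 1],
                 [0, 0, 0, 1, 1, 1],
                 [0, 0, 1, 1, 1, 1],
                 [0, 0, 0, 1, 1, 0]], dtype=float)

n_visible, n_hidden, lr = 6, 2, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

for _ in range(5000):
    v0 = data
    # Positive phase: hidden activations given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step to get a "reconstruction".
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # CD-1 update: data statistics minus reconstruction statistics.
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

# After training, the hidden units tend to specialize on the two visible patterns.
print(np.round(sigmoid(data @ W + b_h), 2))
```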

Variational Autoencoders (VAEs)

VAEs are neural networks with a probabilistic twist: they encode input data into a latent (hidden) representation from which the data can be generated back. They assume the data is produced by a random process involving hidden variables and try to approximate that process. Applications include image denoising, data compression, and generative design.
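
A minimal PyTorch sketch on toy 2-D data showing the encoder, the reparameterization trick, and the reconstruction-plus-KL loss; sizes and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(2000, 2) * torch.tensor([2.0, 0.5]) + torch.tensor([1.0, -1.0])  # toy data

enc = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))  # outputs (mu, logvar) for a 1-D latent
dec = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for step in range(3000):
    mu, logvar = enc(X).chunk(2, dim=1)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
    recon = dec(z)
    recon_loss = ((recon - X) ** 2).sum(dim=1).mean()
    kl = 0.5 * (mu**2 + logvar.exp() - logvar - 1).sum(dim=1).mean()  # pull posterior toward N(0, I)
    loss = recon_loss + kl
    opt.zero_grad(); loss.backward(); opt.step()

# Generation: sample a latent from the prior and decode it into data space.
new_points = dec(torch.randn(5, 1)).detach()
print(new_points)
```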

Latent Dirichlet Allocation (LDA)

LDA is a generative statistical model that explains sets of observations through unobserved groups that account for why some parts of the data are similar. For text data, LDA can uncover the topic structure of a corpus. It is widely used in natural language processing for topic discovery and document classification.
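
A minimal sketch using scikit-learn (an assumed dependency) to fit LDA to a few toy documents and inspect the discovered topics:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat with another cat",
    "dogs and cats are popular pets",
    "the stock market fell as investors sold shares",
    "bond yields and stock prices moved with the market",
]
vec = CountVectorizer(stop_words="english").fit(docs)
counts = vec.transform(docs)                       # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
words = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):        # per-topic word weights
    top = [words[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
print(lda.transform(counts).round(2))              # per-document topic mixtures
```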
