Performance Optimization in Machine Learning
12 Flashcards
Gradient Descent
A first-order iterative optimization algorithm for finding a local minimum of a differentiable function; used to minimize the cost function in models such as linear regression.
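A minimal numpy sketch of the update loop for one-variable linear regression; the data, learning rate, and iteration count are illustrative assumptions:

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w, b = 0.0, 0.0   # parameters to learn
lr = 0.01         # learning rate (step size)
for _ in range(1000):
    error = w * X + b - y
    grad_w = 2 * np.mean(error * X)   # gradient of the MSE cost w.r.t. w
    grad_b = 2 * np.mean(error)       # gradient of the MSE cost w.r.t. b
    w -= lr * grad_w                  # step against the gradient
    b -= lr * grad_b

print(w, b)  # approaches w = 2, b = 0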
Mini-batch Gradient Descent
A variation of gradient descent in which the model is updated using small random subsets (mini-batches) of the training data. Compared with single-example stochastic updates this reduces the variance of the parameter updates, and compared with full-batch updates each step is cheaper, which can lead to faster convergence.
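A minimal numpy sketch of mini-batch updates; the synthetic data, batch size, and learning rate are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)
lr, batch_size = 0.05, 32
for epoch in range(50):
    order = rng.permutation(len(X))  # reshuffle the data each epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)  # MSE gradient on the batch
        w -= lr * grad

print(w)  # close to true_w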
Regularization
Techniques such as L1 (Lasso) and L2 (Ridge) regularization that penalize large weights in a model to prevent overfitting and improve generalization.
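A minimal scikit-learn sketch contrasting the two penalties; the synthetic data and alpha values are illustrative assumptions:

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)  # only feature 0 matters

ridge = Ridge(alpha=1.0).fit(X, y)  # L2: shrinks all weights toward zero
lasso = Lasso(alpha=0.1).fit(X, y)  # L1: drives irrelevant weights exactly to zero

print(ridge.coef_)
print(lasso.coef_)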
Cross-validation
A model validation technique to assess how the results of a statistical analysis will generalize to an independent data set, often used in settings where the goal is prediction.
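A minimal scikit-learn sketch of 5-fold cross-validation; the dataset and model choice are illustrative assumptions:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Each of the 5 folds is held out once while the model trains on the rest.
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean(), scores.std())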
Feature Scaling
The method of normalizing or standardizing the range of independent variables or features of the data, which is important for distance-based algorithms such as k-nearest neighbors and helps gradient descent converge.
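A minimal scikit-learn sketch; the two features with very different ranges are an illustrative assumption:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 2000.0],
              [2.0, 3000.0],
              [3.0, 4000.0]])

print(StandardScaler().fit_transform(X))  # standardize: zero mean, unit variance
print(MinMaxScaler().fit_transform(X))    # normalize: rescale to [0, 1]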
Batch Normalization
A technique that normalizes each layer's inputs over the current mini-batch to improve the speed, performance, and stability of deep neural networks.
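A minimal numpy sketch of the batch-norm forward pass at training time; the gamma, beta, and epsilon values and the fake activations are illustrative assumptions:

import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalize each feature
    return gamma * x_hat + beta              # learnable scale and shift

rng = np.random.default_rng(0)
activations = rng.normal(loc=5.0, scale=3.0, size=(64, 10))
out = batch_norm(activations)
print(out.mean(axis=0).round(6))  # ~0 per feature
print(out.std(axis=0).round(6))   # ~1 per feature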
Early Stopping
A form of regularization where you stop training as soon as the performance on a validation set starts to degrade, preventing overfitting.
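A minimal sketch of patience-based early stopping; train_step and validation_loss are hypothetical stand-ins for a real training loop, and the toy loss sequence is an illustrative assumption:

def train_with_early_stopping(train_step, validation_loss,
                              max_epochs=100, patience=5):
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step()
        loss = validation_loss()
        if loss < best_loss:
            best_loss = loss
            epochs_without_improvement = 0  # improvement: reset the counter
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # validation performance degraded: stop early
    return best_loss

losses = iter([1.0, 0.8, 0.7, 0.72, 0.71, 0.73, 0.74, 0.75, 0.8])
print(train_with_early_stopping(lambda: None, lambda: next(losses),
                                max_epochs=9, patience=3))  # 0.7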
Ensemble Methods
Combining predictions from multiple machine learning models to improve predictive performance compared to a single model.
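A minimal scikit-learn sketch of a voting ensemble; the synthetic data and choice of base models are illustrative assumptions:

from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Each base model votes; the ensemble predicts the majority class.
ensemble = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(max_depth=5)),
    ("nb", GaussianNB()),
])
print(cross_val_score(ensemble, X, y, cv=5).mean())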
Hyperparameter Tuning
The process of finding the optimal set of hyperparameters (parameters that are not learned) for a learning algorithm, typically using methods like grid search or random search.
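A minimal scikit-learn sketch of grid search; the dataset, model, and parameter grid are illustrative assumptions:

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 0.1, 1.0]}

# Every combination in the grid is scored with 5-fold cross-validation.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)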
Dropout
A regularization technique for neural networks that involves randomly setting a fraction of input units to 0 at each update during training to prevent overfitting.
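A minimal numpy sketch of inverted dropout at training time; the dropout rate and fake activations are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate=0.5):
    # Zero each unit with probability `rate`, then rescale the survivors
    # so the expected activation is unchanged (inverted dropout).
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

activations = np.ones((2, 8))
print(dropout(activations))  # roughly half the units zeroed, the rest scaled to 2.0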
Pruning
Reducing the size of a machine learning model by removing parts that have little impact on the output, such as less important features or weights, to reduce complexity and improve speed.
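A minimal numpy sketch of one common flavor, magnitude-based weight pruning; the sparsity level and weight matrix are illustrative assumptions:

import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    # Zero the smallest-magnitude weights, keeping the largest (1 - sparsity) fraction.
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
print(prune_by_magnitude(W, sparsity=0.75))  # ~three quarters of the entries zeroed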
Transfer Learning
The practice of reusing a pre-trained model on a new, related task or problem, typically fine-tuning only the final layers, which saves training time and resources.
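A minimal numpy sketch of the idea: a "pre-trained" hidden layer is frozen and only a new final layer is fit on the related task. The random weights and data are illustrative stand-ins for a real pre-trained network:

import numpy as np

rng = np.random.default_rng(0)
W_pretrained = rng.normal(size=(10, 16))  # frozen feature extractor

X_new = rng.normal(size=(100, 10))        # data for the new, related task
y_new = rng.normal(size=100)

features = np.maximum(X_new @ W_pretrained, 0.0)  # frozen forward pass (ReLU)

# Only the new head's weights are learned, here via least squares.
w_head, *_ = np.linalg.lstsq(features, y_new, rcond=None)
print(np.mean((features @ w_head - y_new) ** 2))  # training error of the new head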