Podcast: Machine Learning Guide
Episode: 28. Hyperparameters 2

Category: Technology
Duration: 00:50:10
Publish Date: 2018-02-04 12:35:51
Description:

Hyperparameters part 2: hyper-search, regularization, SGD optimizers, scaling

## Episode

- Hyper optimization
  - GridSearch, RandomSearch
  - Bayesian Optimization (https://thuijskens.github.io/2016/12/29/bayesian-optimisation/)
- Regularization: Dropout, L2, L1
  - DNNs = Dropout
  - L2 = most common
  - L1 = sparsity (zeros) & feature-selection (rarer circumstances)
- Optimizers (SGD): Momentum -> Adagrad -> RMSProp -> Adam -> Nadam
  - http://sebastianruder.com/optimizing-gradient-descent/index.html#visualizationofalgorithms
- Initializers: Zeros, Random Uniform, Xavier
- Scaling
  - Feature-scaling: MinMaxScaler, StandardScaler, RobustScaler
  - Features + inter-layer: Batch Normalization
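The grid-search vs. random-search contrast from the notes can be sketched in plain Python. The search space and scoring function below are toy stand-ins (in practice the score would be cross-validated model accuracy, and you would reach for tools like scikit-learn's GridSearchCV / RandomizedSearchCV rather than hand-rolling this):

```python
import random
from itertools import product

def grid_search(score_fn, space):
    """Exhaustively try every combination in the space."""
    names = list(space)
    best_params, best_score = None, float("-inf")
    for combo in product(*(space[n] for n in names)):  # every combination
        params = dict(zip(names, combo))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

def random_search(score_fn, space, n_iter=20, seed=0):
    """Sample n_iter random combinations instead of enumerating all of them."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = {n: rng.choice(v) for n, v in space.items()}
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy space and objective (hypothetical): the score peaks at lr=0.01, l2=0.0.
space = {"lr": [0.001, 0.01, 0.1], "l2": [0.0, 0.0001, 0.001]}
score = lambda p: -abs(p["lr"] - 0.01) - p["l2"]
best, _ = grid_search(score, space)
```

Random search pays off when the space is large and only a few hyperparameters matter: it covers more distinct values per parameter for the same budget than a coarse grid does.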
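Dropout, the go-to regularizer for DNNs in the list above, can be illustrated with a minimal "inverted dropout" sketch (the variant most frameworks use): at training time each activation is zeroed with probability p and survivors are scaled by 1/(1-p), so the expected activation is unchanged and inference needs no rescaling.

```python
import random

def dropout(activations, p=0.5, training=True, rng=random):
    """Inverted dropout: zero each unit with probability p during training."""
    if not training:
        return list(activations)  # inference: pass activations through as-is
    keep = 1.0 - p
    # Survivors are scaled up by 1/keep so the expected value is preserved.
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

With p=0.5 roughly half the units vanish on each forward pass, which prevents co-adaptation: no unit can rely on a specific other unit being present.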
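Momentum, the first step in the optimizer progression above, is easy to sketch on a 1-D quadratic loss L(w) = (w - 3)^2 with gradient 2(w - 3). The velocity term accumulates an exponentially decaying sum of past gradients, damping oscillation along steep directions (the loss and hyperparameter values here are illustrative, not from the episode):

```python
def sgd_momentum(grad_fn, w0, lr=0.1, beta=0.9, steps=300):
    """Gradient descent with classical momentum on a scalar parameter."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = beta * v + grad_fn(w)  # velocity: decayed running sum of gradients
        w -= lr * v                # step along the velocity, not the raw gradient
    return w

# Toy quadratic loss L(w) = (w - 3)^2, minimized at w = 3.
grad = lambda w: 2.0 * (w - 3.0)
w_final = sgd_momentum(grad, w0=0.0)
```

Adagrad, RMSProp, Adam, and Nadam build on the same loop by additionally scaling each step by per-parameter running statistics of the squared gradients.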
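The two most common feature-scaling schemes named above can be written out directly; the function names mirror scikit-learn's MinMaxScaler and StandardScaler, but this is a plain-Python sketch for a single feature column:

```python
def min_max_scale(xs):
    """Rescale values linearly into [0, 1] (MinMaxScaler behavior)."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def standard_scale(xs):
    """Shift to zero mean and scale to unit variance (StandardScaler behavior)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / var ** 0.5 for x in xs]
```

RobustScaler follows the same pattern but centers on the median and scales by the interquartile range, so outliers do not dominate the statistics; Batch Normalization applies the standard-scale idea per mini-batch between layers, with learned shift and scale parameters.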
