# Optimization
Dropout distillation (ICML2016)
Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder
[paper]
[supplement]
Learning to learn by gradient descent by gradient descent (arXiv)
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Nando de Freitas
[paper]
Controlling Exploration Improves Training for Deep Neural Networks
Yasutoshi Ida, Yasuhiro Fujiwara, Sotetsu Iwamura (NTT)
[paper]
Convex Relaxation Regression: Black-Box Optimization of Smooth Functions by Learning Their Convex Envelopes (arXiv 2016)
[paper]
Adam: A Method for Stochastic Optimization (ICLR2015)
D.P. Kingma (Universiteit van Amsterdam), J.L. Ba (University of Toronto)
[paper]
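For reference, the Adam update rule from the paper's Algorithm 1 can be sketched in a few lines. This is a minimal single-parameter illustration with the paper's default hyperparameters, not a production optimizer; the function and variable names (`adam_step`, `m`, `v`) are chosen here for clarity.

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter (t starts at 1)."""
    m = beta1 * m + (1 - beta1) * grad       # update biased first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # update biased second raw moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)             # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# toy usage: minimize f(x) = x^2, whose gradient is 2x, starting from x = 5.0
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.1)
```

After the loop, `x` should be close to the minimizer at 0.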
Accelerating Asymptotically Exact MCMC for Computationally Intensive Models via Local Approximations (arXiv 2015)
Patrick R. Conrad, Youssef M. Marzouk, Natesh S. Pillai, Aaron Smith
[paper]
Taking the Human Out of the Loop: A Review of Bayesian Optimization
Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams and Nando de Freitas
[paper]