
AutoAugment

My attempt at reproducing the AutoAugment paper from Google, implemented with Keras and TensorFlow.

There are two components to the code (sketched below):

  1. Controller: a recurrent neural network that suggests augmentation transformations.
  2. Child: the final neural network, trained from scratch with the transformations suggested by the controller.
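
A minimal sketch of how these two pieces might be laid out. The class layout, layer choices, and method signatures here are my own assumptions for illustration, not necessarily those used in this repository:

```python
import tensorflow as tf

class Controller:
    """RNN that outputs softmax probabilities over augmentation operations."""
    def __init__(self, n_ops, hidden_units=64):
        self.model = tf.keras.Sequential([
            tf.keras.layers.LSTM(hidden_units, input_shape=(None, n_ops)),
            tf.keras.layers.Dense(n_ops, activation="softmax"),
        ])

    def predict(self, state):
        # `state` is a dummy sequence input of shape (batch, timesteps, n_ops);
        # the output is a softmax over the candidate transformations.
        return self.model(state)

    def fit(self, memory):
        # Policy update described further below: gradients of the outputs,
        # scaled by [0, 1]-normalized child accuracies.
        ...

class Child:
    """Small CNN trained from scratch with a sampled augmentation policy."""
    def __init__(self, input_shape, n_classes):
        self.model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(n_classes, activation="softmax"),
        ])
        self.model.compile("adam", "sparse_categorical_crossentropy",
                           metrics=["accuracy"])

    def fit(self, augmented_train, val):
        # Train on the augmented data, then report validation accuracy.
        self.model.fit(*augmented_train, epochs=20, verbose=0)
        _, val_acc = self.model.evaluate(*val, verbose=0)
        return val_acc
```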

Each child is trained from start to finish using the policies produced by the recurrent neural network (the controller), and is then evaluated on the validation set. The tuple (child validation accuracy, controller softmax probabilities) is then appended to a list.
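
A hedged sketch of one search iteration under the assumptions above; `sample_policy` and `apply_policy` are hypothetical helpers for turning the softmax output into concrete image transformations, and the dataset names are placeholders:

```python
memory = []  # list of (validation accuracy, controller softmax probabilities)

for step in range(n_search_steps):
    probs = controller.predict(initial_state)   # softmax over transformations
    policy = sample_policy(probs)               # hypothetical: pick concrete ops
    x_aug = apply_policy(policy, x_train)       # hypothetical: augment the images

    child = Child(input_shape=x_train.shape[1:], n_classes=10)
    val_acc = child.fit((x_aug, y_train), (x_val, y_val))  # train from scratch

    memory.append((val_acc, probs))

controller.fit(memory)  # update the controller from the stored tuples
```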

The controller is trained by gradient ascent: the derivative of its outputs with respect to each weight, $\frac{\partial y}{\partial w}$, is scaled by the $[0,1]$-normalized accuracy scores from that list and used as the update direction. The outputs $y$ are the "controller softmax probabilities" stored in the list.
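
One way to read that update, sketched with `tf.GradientTape`. This is my interpretation of the description above, not necessarily the author's exact code; the function name and arguments are assumptions:

```python
import numpy as np
import tensorflow as tf

def controller_update(controller, memory, optimizer, state):
    """Hypothetical ascent step: dy/dw scaled by the normalized accuracy."""
    accs = np.array([acc for acc, _probs in memory], dtype=np.float32)
    # Normalize the accuracies to [0, 1] so they act as rewards.
    rewards = (accs - accs.min()) / (accs.max() - accs.min() + 1e-8)

    for reward in rewards:
        with tf.GradientTape() as tape:
            # Re-evaluate the controller so the softmax outputs y carry gradients;
            # these correspond to the stored "controller softmax probabilities".
            y = controller.model(state)
            objective = reward * tf.reduce_sum(y)
        grads = tape.gradient(objective, controller.model.trainable_variables)
        # Ascent: flip the sign before handing the gradients to a minimizing optimizer.
        optimizer.apply_gradients(
            [(-g, v) for g, v in zip(grads, controller.model.trainable_variables)
             if g is not None]
        )
```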

All of this is implemented in the fit() method found inside each class.

Disclaimer: I am unsure whether the code resembles that of the authors. I have relied heavily on another paper, the main citation of AutoAugment.
