taki0112/RAdam-Tensorflow

Simple Tensorflow implementation of "On The Variance Of The Adaptive Learning Rate And Beyond"

Usage

from RAdam import RAdamOptimizer

train_op = RAdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, weight_decay=0.0).minimize(loss)
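A minimal end-to-end sketch of the snippet above, assuming TensorFlow 1.x (which this repository targets) and that `RAdam.py` from this repository is on the Python path; the placeholders, toy loss, and training loop are illustrative additions, not part of the repository:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x, which this repository targets

from RAdam import RAdamOptimizer  # optimizer class provided by this repository

# Toy linear-regression graph: fit y = 2x with a single weight (illustrative only).
x = tf.placeholder(tf.float32, shape=[None, 1])
y = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(tf.zeros([1, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

# Hyperparameters taken from the usage example in this README.
train_op = RAdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999,
                          weight_decay=0.0).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    data_x = np.random.rand(64, 1).astype(np.float32)
    data_y = 2.0 * data_x
    for step in range(100):
        _, loss_val = sess.run([train_op, loss], feed_dict={x: data_x, y: data_y})
    print("final loss:", loss_val)
```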