Beta Version 1.12.0

Added

  • Added these value schedulers under the “ValueSchedulers” section:

    • Chained

    • Constant

    • CosineAnnealing

    • Exponential

    • InverseSquareRoot

    • InverseTime

    • Linear

    • MultipleStep

    • Multiplicative

    • Polynomial

    • Sequential

    • Step

  • Added these optimizers under the “Optimizers” section:

    • ResilientBackwardPropagation
    • AdaptiveFactor

    • AdaptiveMomentEstimationWeightDecay

    • RectifiedAdaptiveMomentEstimation

Changes

  • ValueSchedulers can now be used in place of Optimizers for scheduling the learning rate.
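As a conceptual illustration of what a value scheduler does when it stands in for an optimizer as the source of the learning rate, here is a minimal Python sketch of exponential decay (one of the newly added schedulers). This is not DataPredict code; the class and parameter names are assumptions for illustration only.

```python
# Conceptual sketch only -- not DataPredict code. It illustrates the idea of
# a value scheduler (here, exponential decay) producing the learning rate
# that an optimizer would otherwise manage itself.

class ExponentialScheduler:
    """Decays a base value by a fixed factor on every step."""

    def __init__(self, base_value, decay_rate):
        self.base_value = base_value
        self.decay_rate = decay_rate
        self.time_step = 0

    def calculate(self):
        # Each call returns the current learning rate, then advances time.
        value = self.base_value * (self.decay_rate ** self.time_step)
        self.time_step += 1
        return value

scheduler = ExponentialScheduler(base_value=0.1, decay_rate=0.5)
learning_rates = [scheduler.calculate() for _ in range(3)]
print(learning_rates)  # halves on every step: 0.1, 0.05, 0.025
```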

  • The calculate() function now accepts weightTensor as the third parameter for all optimizers under the “Optimizers” section.
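A hedged Python sketch of why an optimizer's calculate() may need the weights themselves as a third argument: decoupled weight decay (as in AdaptiveMomentEstimationWeightDecay-style optimizers) shrinks the weights directly in addition to applying the gradient. This is not DataPredict code, and the first two parameter names are assumptions; only the third parameter (the weight tensor) is what this changelog entry describes.

```python
# Conceptual sketch only -- not DataPredict code.

def calculate(learning_rate, cost_derivatives, weights, decay=0.01):
    """Return the update to subtract from the weights.

    The first two parameter names are assumptions for illustration; the
    weight tensor as third parameter is what the changelog entry describes.
    Weight decay adds a term proportional to each weight, which is why the
    optimizer needs access to the weights and not just the gradients.
    """
    return [learning_rate * (g + decay * w)
            for g, w in zip(cost_derivatives, weights)]

update = calculate(0.1, [1.0, -2.0], [10.0, 5.0])
print(update)
```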

  • Renamed AdaptiveGradientDelta to AdaptiveDelta under the “Optimizers” section.

  • The Optimizers’ “internalParameterArray” value is now set to nil instead of an empty table when calling the new() constructor and the reset() function under the “Optimizers” section.

  • Made some internal code changes to the DeepDoubleQLearningV2, DeepDoubleStateActionRewardStateActionV2 and DeepDoubleExpectedStateActionRewardStateActionV2 models under the “Models” section.

  • Changed the default value of the “averagingRate” parameter for the DeepDoubleQLearningV2, DeepDoubleStateActionRewardStateActionV2 and DeepDoubleExpectedStateActionRewardStateActionV2 models from 0.995 to 0.01 under the “Models” section.

Removed

  • Removed TimeDecay and StepDecay from the “ValueSchedulers” section.

  • Removed LearningRateStepDecay and LearningRateTimeDecay from the “Optimizers” section.

Fixes

  • Fixed some bugs in AdaptiveMomentEstimation and NesterovAcceleratedAdaptiveMomentEstimation under the “Optimizers” section.

  • Fixed some bugs where BaseRegularizer and BaseEligibilityTrace returned an “Unknown” class name under the “Regularizers” and “EligibilityTraces” sections.