Release Version 1.16.0

ValueSchedulers

  • Added these value schedulers:

    • Chained

    • Constant

    • CosineAnnealing

    • Exponential

    • InverseSquareRoot

    • InverseTime

    • Linear

    • MultipleStep

    • Multiplicative

    • Polynomial

    • Sequential

    • Step

  • ValueSchedulers can now be used in place of Optimizers for scheduling the learning rate (see the sketch at the end of this section).

  • Removed TimeDecay and StepDecay.
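
  • Below is a minimal sketch of scheduling a learning rate with a value scheduler. It is illustrative only: the require path, the “decayRate” constructor argument and the scheduler’s calculate() call are assumptions, not the library’s documented API.

```lua
-- Illustrative sketch only. The module location, the "decayRate" constructor argument
-- and the calculate() signature used here are assumptions, not the documented API.
local DataPredict = require(script.Parent.DataPredict) -- assumed module location

local LearningRateScheduler = DataPredict.ValueSchedulers.Exponential.new({decayRate = 0.5})

local baseLearningRate = 0.1

-- Query the scheduled value each epoch and pass it to whatever consumes the learning
-- rate, in place of attaching an optimizer purely for learning-rate decay.
for epoch = 1, 5 do

	local scheduledLearningRate = LearningRateScheduler:calculate(baseLearningRate) -- assumed signature

	print(epoch, scheduledLearningRate)

end
```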

Optimizers

  • Added these optimizers:

    • ResilientBackwardPropagation

    • AdaptiveFactor

    • AdaptiveMomentEstimationWeightDecay

    • RectifiedAdaptiveMomentEstimation

  • The calculate() function now accepts weightTensor as the third parameter for all optimizers (a minimal sketch is shown at the end of this section).

  • The Optimizers’ “internalParameterArray” value is now set to nil instead of an empty table when calling the new() constructor and the reset() function.

  • Removed LearningRateStepDecay and LearningRateTimeDecay.

  • Renamed AdaptiveGradientDelta to AdaptiveDelta.

  • Fixed some bugs in AdaptiveMomentEstimation and NesterovAcceleratedAdaptiveMomentEstimation.
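
  • Below is a minimal sketch of the updated calculate() call shape. Only the fact that weightTensor is now the third argument comes from this release; the first two parameter names, the constructor arguments and the placeholder tensors are assumptions for illustration.

```lua
-- Illustrative sketch only. Only weightTensor's position as the third argument is
-- stated by this release; everything else here is an assumption.
local DataPredict = require(script.Parent.DataPredict) -- assumed module location

local Optimizer = DataPredict.Optimizers.AdaptiveMomentEstimation.new({beta1 = 0.9, beta2 = 0.999}) -- assumed constructor arguments

local learningRate = 0.01 -- assumed first parameter

local costDerivativeTensor = {{0.2, -0.1}} -- assumed second parameter; placeholder values

local weightTensor = {{0.5, 0.3}} -- the new third parameter introduced in this release

local updateTensor = Optimizer:calculate(learningRate, costDerivativeTensor, weightTensor)

print(updateTensor)
```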

Models

  • Made some internal code changes to DeepDoubleQLearningV2, DeepDoubleStateActionRewardStateActionV2 and DeepDoubleExpectedStateActionRewardStateActionV2 models.

  • Changed the default value of the “averagingRate” parameter for the DeepDoubleQLearningV2, DeepDoubleStateActionRewardStateActionV2 and DeepDoubleExpectedStateActionRewardStateActionV2 models from 0.995 to 0.01.
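
  • For context, “averagingRate” presumably governs how quickly the target model parameters track the primary model parameters. The snippet below shows the common soft-update (Polyak averaging) form as an assumption about what the parameter controls; this release does not state the library’s exact formula.

```lua
-- Common soft-update (Polyak averaging) form, shown as an assumption about what
-- "averagingRate" controls; the library's exact formula is not stated in this release.
local function softUpdate(targetParameter, primaryParameter, averagingRate)

	return (averagingRate * primaryParameter) + ((1 - averagingRate) * targetParameter)

end

-- With the new default of 0.01, a target parameter moves only 1% of the way toward the
-- corresponding primary parameter per update, so the bootstrapped targets change slowly.
print(softUpdate(1.0, 2.0, 0.01)) --> 1.01
```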

EligibilityTraces

  • Fixed some bugs where BaseEligibilityTrace returned an “Unknown” class name.

Regularizers

  • Fixed some bugs where BaseRegularizer returned an “Unknown” class name.