# API Reference - Models

## Regression

Model | Alternate Names | Use Cases |
---|---|---|
LinearRegression | None | Price Prediction, Time To Level Up Prediction |
NormalLinearRegression (Not Recommended) | None | Same As Above |
SupportVectorRegression (May Need Further Refinement) | SVR | Same As Above |
KNearestNeighboursRegressor | KNN-R | Same As Above |
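
All regression models share the same supervised workflow: build a feature matrix and a label vector, train, then predict. Below is a minimal usage sketch for LinearRegression; the require path, the `.new()` constructor, and the `train` / `predict` method names are assumptions about a typical constructor-style API and may differ from the library's actual signatures.

```lua
-- Hypothetical usage sketch; module path and method names are assumptions, not the confirmed API.
local DataPredict = require(script.Parent.DataPredict) -- assumed module location

-- Each row is one player sample; each column is one feature (e.g. level, playtime hours, purchases).
local featureMatrix = {
	{5, 10, 2},
	{8, 25, 5},
	{12, 40, 9},
}

-- One label per row, e.g. hours remaining until the next level-up.
local labelVector = {
	{12},
	{7},
	{3},
}

local model = DataPredict.Models.LinearRegression.new() -- assumed constructor

model:train(featureMatrix, labelVector) -- assumed training call

local predictedVector = model:predict({{9, 30, 6}}) -- assumed prediction call

print(predictedVector[1][1]) -- predicted time to level up for the new sample
```
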
## Classification

Model | Alternate Names | Use Cases |
---|---|---|
KNearestNeighboursClassifier | KNN-C | Item Recommendation, Similar Player Matchmaking |
LogisticRegression | Perceptron | Purchase Likelihood Estimation, Player Confidence Prediction |
SupportVectorMachine | SVM | Hacking Detection, Anomaly Detection |
GaussianNaiveBayes | None | Player Behavior Categorization (e.g., Cautious Vs. Aggressive), Fast State Classification |
MultinomialNaiveBayes | None | Inventory Action Prediction, Strategy Profiling Based on Item Usage |
BernoulliNaiveBayes | None | Binary Action Prediction (e.g., Jump Or Not), Quick Decision Filters |
ComplementNaiveBayes | None | Imbalanced Decision Prediction (e.g., Rare Choices, Niche Paths) |
NeuralNetwork | Multi-Layer Perceptron | Decision-Making, Player Behavior Prediction |
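
Classification models follow the same train/predict pattern but output class labels instead of continuous values. The sketch below shows a possible hacking-detection setup with SupportVectorMachine; the module path, method names, and the -1/1 label encoding are assumptions rather than the confirmed API.

```lua
-- Hypothetical hacking-detection sketch; names and label encoding are assumptions, not the confirmed API.
local DataPredict = require(script.Parent.DataPredict) -- assumed module location

-- Features per player, e.g. average movement speed and teleport distance per second.
local featureMatrix = {
	{16, 0},
	{18, 1},
	{120, 50}, -- suspicious sample
}

-- Labels: 1 = legitimate, -1 = hacking (encoding is an assumption).
local labelVector = {
	{1},
	{1},
	{-1},
}

local classifier = DataPredict.Models.SupportVectorMachine.new() -- assumed constructor

classifier:train(featureMatrix, labelVector) -- assumed training call

local predictedLabelVector = classifier:predict({{90, 40}}) -- assumed prediction call

if predictedLabelVector[1][1] == -1 then
	print("Flag this player for review")
end
```
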
## Clustering

Model | Alternate Names | Use Cases |
---|---|---|
AffinityPropagation | None | Player Grouping |
AgglomerativeHierarchical | None | Enemy Difficulty Generation |
DensityBasedSpatialClusteringOfApplicationsWithNoise | DBSCAN | Density Grouping |
MeanShift | None | Boss Spawn Location Search Based On Player Locations |
ExpectationMaximization | EM | Hacking Detection, Anomaly Detection |
KMeans | None | Maximizing Area-of-Effect Abilities, Predictive Target Grouping |
KMedoids | None | Player Grouping Based On Player Locations With Leader Identification |
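
Clustering models are unsupervised, so they only need a feature matrix. The sketch below groups player positions with KMeans, which could feed an area-of-effect targeting decision; the cluster-count constructor parameter and the `train` / `predict` method names are assumptions and may not match the actual signatures.

```lua
-- Hypothetical clustering sketch; constructor parameters and method names are assumptions.
local DataPredict = require(script.Parent.DataPredict) -- assumed module location

-- Player positions on the XZ plane.
local playerPositionMatrix = {
	{10, 12},
	{11, 14},
	{90, 88},
	{92, 91},
}

local numberOfClusters = 2 -- assumed constructor parameter

local kMeans = DataPredict.Models.KMeans.new(numberOfClusters) -- assumed constructor

kMeans:train(playerPositionMatrix) -- assumed unsupervised training call

-- Assign each player to its nearest cluster centre, e.g. to pick where an AoE ability lands.
local clusterAssignmentVector = kMeans:predict(playerPositionMatrix) -- assumed prediction call

print(clusterAssignmentVector)
```
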
## Deep Reinforcement Learning

Model | Alternate Names | Use Cases |
---|---|---|
DeepQLearning | Deep Q Network | Self-Learning Fighting AIs, Self-Learning Parkour AIs, Self-Driving Cars |
DeepDoubleQLearningV1 | Double Deep Q Network (2010) | Same As Deep Q-Learning |
DeepDoubleQLearningV2 | Double Deep Q Network (2015) | Same As Deep Q-Learning |
DeepClippedDoubleQLearning | Clipped Double Deep Q Network | Same As Deep Q-Learning |
DeepStateActionRewardStateAction | Deep SARSA | Same As Deep Q-Learning |
DeepDoubleStateActionRewardStateActionV1 | Double Deep SARSA | Same As Deep Q-Learning |
DeepDoubleStateActionRewardStateActionV2 | Double Deep SARSA | Same As Deep Q-Learning |
DeepExpectedStateActionRewardStateAction | Deep Expected SARSA | Same As Deep Q-Learning |
DeepDoubleExpectedStateActionRewardStateActionV1 | Double Deep Expected SARSA | Same As Deep Q-Learning |
DeepDoubleExpectedStateActionRewardStateActionV2 | Double Deep Expected SARSA | Same As Deep Q-Learning |
ActorCritic | AC | Same As Deep Q-Learning |
AdvantageActorCritic | A2C | Same As Deep Q-Learning |
REINFORCE | None | Same As Deep Q-Learning |
MonteCarloControl (May Need Further Refinement) | None | Same As Deep Q-Learning |
OffPolicyMonteCarloControl | None | Same As Deep Q-Learning |
VanillaPolicyGradient | VPG | Same As Deep Q-Learning |
ProximalPolicyOptimization | PPO | Same As Deep Q-Learning |
ProximalPolicyOptimizationClip | PPO-Clip | Same As Deep Q-Learning |
SoftActorCritic | SAC | Same As Deep Q-Learning |
DeepDeterministicPolicyGradient | DDPG | Same As Deep Q-Learning |
TwinDelayedDeepDeterministicPolicyGradient | TD3 | Same As Deep Q-Learning |
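
All of the deep reinforcement learning models share the same use cases because they are interchangeable agents that differ only in how they update from experience; each one is driven by a state-action-reward loop. The sketch below shows one possible shape of that loop for a self-learning fighting AI; the `reinforce` call and its arguments are assumptions, and `getStateVector`, `computeReward`, and `applyAction` are hypothetical environment stubs you would replace with your own game logic.

```lua
-- Hypothetical reinforcement loop; agent methods are assumptions, environment helpers are stubs.
local DataPredict = require(script.Parent.DataPredict) -- assumed module location

local agent = DataPredict.Models.DeepQLearning.new() -- assumed constructor; network setup omitted for brevity

-- Hypothetical environment stubs: replace with your game's real sensing, scoring, and acting logic.
local function getStateVector()
	return {{math.random(), math.random()}} -- e.g. distance to opponent, own health
end

local function computeReward()
	return math.random() > 0.5 and 1 or -1 -- e.g. +1 for landing a hit, -1 for taking one
end

local function applyAction(action)
	-- e.g. make the NPC punch, block, or dodge based on the chosen action
end

local numberOfEpisodes = 100
local maximumStepsPerEpisode = 200

for episode = 1, numberOfEpisodes do
	for step = 1, maximumStepsPerEpisode do
		local stateVector = getStateVector()

		-- Assumed call: feed the current state and reward to the agent and get back its chosen action.
		local chosenAction = agent:reinforce(stateVector, computeReward())

		applyAction(chosenAction)
	end
end
```
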
## Generative

Model | Alternate Names | Use Cases |
---|---|---|
GenerativeAdversarialNetwork | GAN | Building And Image Generation |
ConditionalGenerativeAdversarialNetwork | CGAN | Same As GAN, But Conditioned On Class Labels |
WassersteinGenerativeAdversarialNetwork | WGAN | Same As GAN, But With More Stable Training |
ConditionalWassersteinGenerativeAdversarialNetwork | CWGAN | Combination Of Both CGAN And WGAN |
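
Generative models pair a generator with a discriminator and learn from real samples plus random noise. The sketch below outlines one possible GAN workflow for generating small building layouts; every method name (`setGeneratorModel`, `setDiscriminatorModel`, `train`, `evaluate`) is an assumption about a typical GAN wrapper API, not a confirmed signature, and the layer configuration of the two networks is omitted.

```lua
-- Hypothetical GAN sketch; all method names below are assumptions, not the confirmed API.
local DataPredict = require(script.Parent.DataPredict) -- assumed module location

-- Flattened 2x2 building layout tiles (1 = wall, 0 = empty) used as "real" training samples.
local realFeatureMatrix = {
	{1, 1, 1, 0},
	{1, 0, 1, 1},
}

-- Random noise vectors fed to the generator.
local noiseFeatureMatrix = {
	{math.random(), math.random(), math.random(), math.random()},
	{math.random(), math.random(), math.random(), math.random()},
}

local generator = DataPredict.Models.NeuralNetwork.new() -- assumed constructor; layers omitted for brevity
local discriminator = DataPredict.Models.NeuralNetwork.new() -- assumed constructor; layers omitted for brevity

local gan = DataPredict.Models.GenerativeAdversarialNetwork.new() -- assumed constructor

gan:setGeneratorModel(generator) -- assumed setter
gan:setDiscriminatorModel(discriminator) -- assumed setter

gan:train(realFeatureMatrix, noiseFeatureMatrix) -- assumed adversarial training call

local generatedLayoutMatrix = gan:evaluate(noiseFeatureMatrix) -- assumed generation call

print(generatedLayoutMatrix)
```
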