High-Value Project Tutorials
Disclaimer

- References that validate the use cases can be found here, including my papers.
- Before integrating machine, deep, and reinforcement learning models into live projects, I recommend having a look at the safe practices here.
- The content of this page and its links is licensed under the DataPredict™ library's Terms And Conditions. This includes the code shown in the links below.
- Therefore, creating or redistributing copies or derivatives of this page's and its links' contents is not allowed.
- Commercial use is also not allowed without a license (except under certain conditions).
- For information regarding potential license violations and eligibility for a bounty reward, please refer to the Terms And Conditions Violation Bounty Reward Information.

Retention Systems

Creating Time-To-Leave Prediction Model

- No need to add new content; the model can use existing content to optimize your game.
- A minimal implementation takes at least 30 minutes using DataPredict™ (see the sketch below).

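A minimal, library-agnostic sketch of the idea behind the time-to-leave model: an online linear regression that maps session features to the number of minutes until the player leaves. The feature names and learning rate are illustrative assumptions; in practice the corresponding DataPredict™ regression model would replace this hand-rolled version.

```lua
-- Sketch only: online linear regression predicting remaining session minutes.
-- Feature choice and learning rate are assumptions, not part of DataPredict™'s API.

local weights = {0, 0, 0} -- one weight per feature
local bias = 0
local learningRate = 0.01

local function predictTimeToLeave(features)
	local prediction = bias
	for i, value in ipairs(features) do
		prediction += weights[i] * value
	end
	return prediction
end

local function trainStep(features, actualMinutesLeft)
	local predictionError = predictTimeToLeave(features) - actualMinutesLeft
	bias -= learningRate * predictionError
	for i, value in ipairs(features) do
		weights[i] -= learningRate * predictionError * value
	end
end

-- Example features: {minutes played this session, actions per minute, days since last login}
trainStep({12, 3.5, 1}, 20) -- this player actually left 20 minutes later
print(predictTimeToLeave({12, 3.5, 1}))
```
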
Creating Probability-To-Leave Prediction Model

- No need to add new content; the model can use existing content to optimize your game.
- A minimal implementation takes at least 30 minutes using DataPredict™ (see the sketch below).

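The probability-to-leave variant is the same idea with a classification model. The sketch below uses a logistic regression whose output is compared against a threshold to trigger retention content; the features and the 0.8 threshold are assumptions.

```lua
-- Sketch only: logistic regression estimating the probability that a player leaves soon.

local weights = {0, 0}
local bias = 0
local learningRate = 0.05

local function sigmoid(z)
	return 1 / (1 + math.exp(-z))
end

local function probabilityToLeave(features)
	local z = bias
	for i, value in ipairs(features) do
		z += weights[i] * value
	end
	return sigmoid(z)
end

-- leftSoon is 1 if the player left shortly after this snapshot, 0 otherwise.
local function trainStep(features, leftSoon)
	local gradient = probabilityToLeave(features) - leftSoon
	bias -= learningRate * gradient
	for i, value in ipairs(features) do
		weights[i] -= learningRate * gradient * value
	end
end

-- Example: if the probability is high, surface existing content to keep the player engaged.
if probabilityToLeave({25, 0.4}) > 0.8 then
	-- trigger a retention event here
end
```
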
Creating Left-Too-Early Detection Model

- Inverse of the probability-to-leave model; it works by detecting outliers.
- No need to add new content; the model can use existing content to optimize your game.
- If rewards are involved, this model is highly exploitable: a player can accumulate long session times over many sessions and then gradually decrease their session times.
- A minimal implementation takes at least 30 minutes using DataPredict™ (see the sketch below).

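One way to read "detecting outliers" here is to flag a session that is far shorter than the player's own history. The sketch below uses a simple standard-deviation cutoff; the cutoff value of 2 is an assumption, and a dedicated outlier or anomaly model can take its place.

```lua
-- Sketch only: flag a session as "left too early" when it is an outlier
-- against the player's historical session lengths.

local function mean(values)
	local total = 0
	for _, value in ipairs(values) do
		total += value
	end
	return total / #values
end

local function standardDeviation(values, average)
	local sumOfSquares = 0
	for _, value in ipairs(values) do
		sumOfSquares += (value - average) ^ 2
	end
	return math.sqrt(sumOfSquares / #values)
end

local function leftTooEarly(sessionHistoryMinutes, currentSessionMinutes)
	local average = mean(sessionHistoryMinutes)
	local deviation = standardDeviation(sessionHistoryMinutes, average)
	if deviation == 0 then return false end
	return (average - currentSessionMinutes) / deviation > 2
end

print(leftTooEarly({40, 55, 48, 62, 51}, 8)) -- true: far shorter than usual
```

Because the baseline comes from the player's own history, a slow and gradual decrease in session length drags the baseline down with it, which is exactly the exploit noted above.
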
Creating Play Time Maximization Model

- The model chooses actions or events that maximize play time.
- No need to add new content; the model can use existing content to optimize your game.
- Has higher play-time potential than the other three models due to its ability to both exploit and explore, but tends to be riskier to use.
- A minimal implementation takes at least 2 hours using DataPredict™, especially if custom events are associated with the model's output (see the sketch below).

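A table-based stand-in for the reward-maximization idea: an epsilon-greedy selector that picks the in-game event which has historically produced the most extra play time. The event names, the epsilon value, and the reward measurement are assumptions; DataPredict™'s reinforcement learning models would replace this value table.

```lua
-- Sketch only: epsilon-greedy event selection where the reward is extra play time.

local events = {"BonusQuest", "DoubleXP", "RareEnemySpawn"} -- hypothetical events
local valueEstimates = {0, 0, 0}
local selectionCounts = {0, 0, 0}
local epsilon = 0.1

local function chooseEventIndex()
	if math.random() < epsilon then
		return math.random(#events) -- explore
	end
	local bestIndex = 1
	for i = 2, #events do
		if valueEstimates[i] > valueEstimates[bestIndex] then
			bestIndex = i
		end
	end
	return bestIndex -- exploit
end

local function recordOutcome(index, extraPlayTimeMinutes)
	selectionCounts[index] += 1
	valueEstimates[index] += (extraPlayTimeMinutes - valueEstimates[index]) / selectionCounts[index]
end

local chosenIndex = chooseEventIndex()
-- Trigger events[chosenIndex] for the player, measure how much longer they stayed, then:
recordOutcome(chosenIndex, 15)
```
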
Regression Recommendation Systems

Creating Probability-Based Recommendation Model

- A minimal implementation takes at least 2 hours using DataPredict™.

Creating Similarity-Based Recommendation Model

- Memory-based model; it may eat up storage space.
- A minimal implementation takes at least 2 hours using DataPredict™ (see the sketch below).

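A sketch of the memory-based idea: keep one interaction vector per stored player (hence the storage cost) and recommend whatever the most similar stored player favoured. The itemVector and favouriteItem fields are hypothetical names.

```lua
-- Sketch only: memory-based recommendation via cosine similarity.
-- Every stored player keeps a full interaction vector, which is why this can eat storage.

local function cosineSimilarity(a, b)
	local dot, normA, normB = 0, 0, 0
	for i = 1, #a do
		dot += a[i] * b[i]
		normA += a[i] ^ 2
		normB += b[i] ^ 2
	end
	if normA == 0 or normB == 0 then return 0 end
	return dot / (math.sqrt(normA) * math.sqrt(normB))
end

local function recommendFor(currentPlayer, storedPlayers)
	local bestScore, bestItem = -1, nil
	for _, storedPlayer in ipairs(storedPlayers) do
		local score = cosineSimilarity(currentPlayer.itemVector, storedPlayer.itemVector)
		if score > bestScore then
			bestScore = score
			bestItem = storedPlayer.favouriteItem
		end
	end
	return bestItem
end
```
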
Creating Reward-Maximization-Based Regression Recommendation Model

- Limited to one recommendation at a time.
- Has higher monetization potential than the other two models due to its ability to both exploit and explore, but tends to be riskier to use.
- A minimal implementation takes at least 2 hours using DataPredict™, especially if multiple recommendations are made.

Binary Recommendation Systems

Creating Classification-Based Recommendation Model

- A minimal implementation takes at least 2 hours using DataPredict™.

Creating Reward-Maximization-Based Binary Recommendation Model

- Limited to one recommendation at a time.
- Has higher monetization potential than the classification-based model due to its ability to both exploit and explore, but tends to be riskier to use.
- A minimal implementation takes at least 2 hours using DataPredict™, especially if multiple recommendations are made.

Adaptive Difficulty Systems

Creating Regression-Based Enemy Data Generation Model

- Every time a player kills an enemy, both the player's combat data and the enemy's data are used to train the model.
- No need to add new content; the model can use existing content to optimize your game.
- A minimal implementation takes at least 30 minutes using DataPredict™ (see the sketch below).

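A sketch of the train-on-every-kill loop described above: a small multi-output linear regression maps the player's combat data to enemy data, and each kill nudges the mapping toward the enemy the player just beat. The two features and two outputs are placeholders.

```lua
-- Sketch only: multi-output linear regression from player combat data to enemy data.

local numberOfFeatures = 2 -- player: {damagePerSecond, accuracy}
local numberOfOutputs = 2  -- enemy:  {health, damage}
local learningRate = 0.001

local weights = {}
for output = 1, numberOfOutputs do
	weights[output] = {bias = 0}
	for feature = 1, numberOfFeatures do
		weights[output][feature] = 0
	end
end

local function generateEnemyData(playerCombatData)
	local enemyData = {}
	for output = 1, numberOfOutputs do
		local value = weights[output].bias
		for feature = 1, numberOfFeatures do
			value += weights[output][feature] * playerCombatData[feature]
		end
		enemyData[output] = value
	end
	return enemyData
end

-- Called on every kill with the combat data of the player and the enemy they defeated.
local function trainOnKill(playerCombatData, killedEnemyData)
	local predicted = generateEnemyData(playerCombatData)
	for output = 1, numberOfOutputs do
		local predictionError = predicted[output] - killedEnemyData[output]
		weights[output].bias -= learningRate * predictionError
		for feature = 1, numberOfFeatures do
			weights[output][feature] -= learningRate * predictionError * playerCombatData[feature]
		end
	end
end
```
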
Creating Cluster-Based Enemy Data Generation Model

- Uses players' combat data to generate the centers of enemies' data.
- No need to add new content; the model can use existing content to optimize your game.
- A minimal implementation takes at least 30 minutes using DataPredict™ (see the sketch below).

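A sketch of the clustering idea: centroids drift toward the combat data of the player base, and each centroid doubles as an enemy data template. The number of clusters, the starting centroids, and the update rate are assumptions.

```lua
-- Sketch only: online clustering of players' combat data; centroids become enemy templates.

local centroids = {
	{10, 0.3}, -- {damagePerSecond, accuracy} starting guesses
	{40, 0.6},
	{90, 0.9},
}
local learningRate = 0.05

local function nearestCentroidIndex(combatData)
	local bestIndex, bestDistance = 1, math.huge
	for i, centroid in ipairs(centroids) do
		local distance = 0
		for j = 1, #centroid do
			distance += (centroid[j] - combatData[j]) ^ 2
		end
		if distance < bestDistance then
			bestIndex, bestDistance = i, distance
		end
	end
	return bestIndex
end

-- Feed in each player's combat data sample to keep the centroids up to date.
local function updateCentroids(combatData)
	local index = nearestCentroidIndex(combatData)
	for j = 1, #combatData do
		centroids[index][j] += learningRate * (combatData[j] - centroids[index][j])
	end
end

-- Spawn an enemy tuned to a player by reading the centroid closest to their combat data.
local function enemyDataForPlayer(combatData)
	return centroids[nearestCentroidIndex(combatData)]
end
```
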
Creating Reward-Maximization-Based Difficulty Generation Model

- Every time an enemy is killed, the positive reward tells the model to "make more enemies similar to this".
- If the player ignores or doesn't kill the enemy, the negative reward tells the model that "this enemy is not interesting to the player" or "this enemy is too hard for the player to kill".
- No need to add new content; the model can use existing content to optimize your game.
- Has higher play-time potential than the other two models due to its ability to both exploit and explore, but tends to be riskier to use.
- A minimal implementation takes at least 2 hours using DataPredict™ (see the sketch below).

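Only the reward shaping is sketched here, since that is the part described above; the learning model itself would be whichever reward-maximization model you choose. The reward magnitudes, the timeout, and the model:update placeholder are all hypothetical.

```lua
-- Sketch of the reward signal only.

local ENEMY_TIMEOUT_SECONDS = 120 -- assumed cutoff for "the player ignored this enemy"

local function rewardForEnemy(enemy)
	if enemy.wasKilled then
		return 1 -- "make more enemies similar to this"
	elseif enemy.secondsAlive >= ENEMY_TIMEOUT_SECONDS then
		return -1 -- ignored or unkillable: "not interesting" or "too hard"
	end
	return nil -- outcome not decided yet; do not train on it
end

-- Hypothetical training step (the model and its update method are placeholders):
-- local reward = rewardForEnemy(enemy)
-- if reward then
-- 	model:update(enemy.generationParameters, reward)
-- end
```
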
Targeting Systems

Creating Cluster-Based Targeting Model

- Finds the centers of player positions based on the number of clusters.
- A minimal implementation takes at least 30 minutes using DataPredict™.

Creating Reward-Maximization-Based Targeting Model

- If your map has terrain and structures, the model may learn to corner and "trap" players in certain locations for easy "kills" or "targets".
- Limited to one target at a time, but can take in multiple player locations.
- Might be the most terrible idea out of this list. However, I will not stop game designers from making their games look "smart".
- The model will likely do a lot of exploration before it can hit a single player. Once a particular location is marked as a "reward location", the model might overfocus on it.
- A minimal implementation takes at least 2 hours using DataPredict™ (see the sketch below).

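A sketch of how the state and reward could be encoded for the targeting model: player positions are flattened into a fixed-size occupancy grid, and the reward is positive only when the chosen target location actually lands near a player. The grid size, hit radius, miss penalty, and the assumption that map coordinates run from 0 to mapSize are all illustrative.

```lua
-- Sketch only: state encoding and reward signal for a targeting model.

local GRID_SIZE = 16
local HIT_RADIUS = 10

-- Flatten player positions (Vector3s) into a fixed-length occupancy grid over the map.
local function encodePlayerPositions(playerPositions, mapSize)
	local grid = table.create(GRID_SIZE * GRID_SIZE, 0)
	for _, position in ipairs(playerPositions) do
		local cellX = math.clamp(math.floor(position.X / mapSize * GRID_SIZE) + 1, 1, GRID_SIZE)
		local cellZ = math.clamp(math.floor(position.Z / mapSize * GRID_SIZE) + 1, 1, GRID_SIZE)
		grid[(cellZ - 1) * GRID_SIZE + cellX] += 1
	end
	return grid
end

-- Positive reward only when the chosen target location lands near a player.
local function rewardForTarget(targetPosition, playerPositions)
	for _, position in ipairs(playerPositions) do
		if (position - targetPosition).Magnitude <= HIT_RADIUS then
			return 1
		end
	end
	return -0.1 -- small penalty for missing, to discourage purely random targeting
end
```
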
AI Players

Creating Data-Based AI Players

- Uses real players' data so that the AI players mimic real players.
- Matches real players' general performance (see the sketch below).

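A sketch of one simple way to make AI players mimic real players: store (situation, action) pairs recorded from real players and have the AI replay the action attached to the most similar recorded situation. The situation features and action names are placeholders.

```lua
-- Sketch only: instance-based imitation of real players.

local recordedSamples = {} -- each entry: {situation = {distanceToEnemy, healthFraction}, action = "Attack"}

local function recordPlayerSample(situation, action)
	table.insert(recordedSamples, {situation = situation, action = action})
end

local function squaredDistance(a, b)
	local total = 0
	for i = 1, #a do
		total += (a[i] - b[i]) ^ 2
	end
	return total
end

local function chooseAction(currentSituation)
	local bestAction, bestDistance = nil, math.huge
	for _, sample in ipairs(recordedSamples) do
		local distance = squaredDistance(sample.situation, currentSituation)
		if distance < bestDistance then
			bestDistance = distance
			bestAction = sample.action
		end
	end
	return bestAction
end

recordPlayerSample({12, 0.8}, "Attack")
recordPlayerSample({40, 0.2}, "Retreat")
print(chooseAction({35, 0.3})) -- "Retreat"
```
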
Creating Reward-Maximization-Based AI Players

- Allows the creation of AI players that maximize positive rewards.
- May outcompete real players.
- May exploit bugs and glitches.

Creating Data-Based Reactionary AI Players

- Same as data-based AI players.
- The only difference is that you assign counterattacks to players' potential attacks.
- Best for mixing machine learning with game designers' control.

Creating Reward-Maximization-Based Reactionary AI Players

- Same as reward-maximization-based AI players.
- The only difference is that you assign counterattacks to players' potential attacks.
- Best for mixing reinforcement learning with game designers' control.
- Breaks theoretical mathematical guarantees because the game designers' control interferes with the model's own actions. Therefore, it is risky to use.

Quality Assurance

Creating Reward-Maximization-Based AI Bug Hunter

- Given "normal" data, the AI is rewarded based on how far the collected data deviates from that normal data (see the sketch below).

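A sketch of that reward: the further the data collected by the AI drifts from the recorded "normal" data, the larger the reward. The example features are placeholders; in practice the features should be scaled so that no single dimension dominates.

```lua
-- Sketch only: reward grows with the distance between collected data and "normal" data.

local function bugHunterReward(normalData, collectedData)
	local squaredDifference = 0
	for i = 1, #normalData do
		squaredDifference += (collectedData[i] - normalData[i]) ^ 2
	end
	return math.sqrt(squaredDifference)
end

-- Example features: {walk speed, character height, currency gained per minute}
print(bugHunterReward({16, 5, 3}, {16, 250, 3})) -- an abnormal height yields a large reward
```
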
Creating Curiosity-Based AI Bug Hunter

- The AI maximizes actions that allow it to explore the game (see the sketch below).

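A sketch of a count-based curiosity reward: the AI earns the most for reaching map cells it has rarely visited, which pushes it to explore. The cell size is an assumption, and any other notion of "state" can replace the position grid.

```lua
-- Sketch only: count-based novelty reward for exploration.

local visitCounts = {}
local CELL_SIZE = 8 -- studs per grid cell (assumed)

local function curiosityReward(position)
	local key = math.floor(position.X / CELL_SIZE) .. "," .. math.floor(position.Z / CELL_SIZE)
	visitCounts[key] = (visitCounts[key] or 0) + 1
	return 1 / math.sqrt(visitCounts[key]) -- rarely visited cells give the largest reward
end
```
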
Priority Systems

Creating Probability-Based Priority System

Creating Regression-Based Priority System

Anti-Cheats

Creating Anomaly Detection Model