Hi,
The Amazon Personalize quotas represent the absolute minimum for the service to be able to train a model. For a machine learning model to learn, it needs to be able to extract information from the data you use for training. Good datasets have:
- multiple users interacting with multiple items, i.e. each user interacts with many items
- many items have interactions from multiple users, i.e. each item has many interactions from users
This does not have to be true for every item and every user, but you should have a "core" of users and items that have a large number of interactions.
Depending on your data, the overall totals can vary, and depending on the behaviour of your users, you will need more or fewer interactions for the model to be able to give good recommendations.
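As a rough sanity check before training, you can measure how much of your interactions data forms the "core" described above. The sketch below is illustrative only (the data, column layout, and the threshold of 2 interactions are assumptions, not Personalize requirements); it counts interactions per user and per item from (USER_ID, ITEM_ID) pairs, the minimal shape of a Personalize interactions dataset:

```python
from collections import Counter

# Hypothetical interactions: (user_id, item_id) pairs, as you would have
# in a Personalize interactions dataset (USER_ID and ITEM_ID columns).
interactions = [
    ("u1", "i1"), ("u1", "i2"), ("u1", "i3"),
    ("u2", "i1"), ("u2", "i2"),
    ("u3", "i2"), ("u3", "i3"),
    ("u4", "i9"),  # long-tail user and item with a single interaction
]

per_user = Counter(u for u, _ in interactions)
per_item = Counter(i for _, i in interactions)

# Share of users/items with at least 2 interactions ("core" coverage).
# The threshold 2 is purely illustrative; in practice you would want
# a core with far more interactions per user and per item.
core_users = sum(1 for c in per_user.values() if c >= 2) / len(per_user)
core_items = sum(1 for c in per_item.values() if c >= 2) / len(per_item)

print(f"core users: {core_users:.0%}, core items: {core_items:.0%}")
# → core users: 75%, core items: 75%
```

If either ratio is low, most of your users or items contribute almost no signal, and the model will struggle regardless of the raw row count.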
Also, please keep in mind that the MovieLens dataset is not a dataset of actual user interactions (which is what a recommendation engine should be trained on), but rather a publicly available movie ratings dataset that you can convert into data suitable for getting familiar with Amazon Personalize. As users interact with items recommended to them by the recommendation solution, the data you use for the next training also improves, because users are presented more things that interest them and consequently interact with them. Because the MovieLens dataset is not an interactions dataset and also does not reflect how users would interact with items over time, this will be reflected in the metrics. Any metrics resulting from experiments on the MovieLens dataset should therefore be taken with a grain of salt.
Interestingly, this is also the case when you use data that was generated from a different recommendation solution to train an Amazon Personalize model. The metrics are generated initially on the historical data, where users interacted with what was on their screen (recommended by the previous solution), but they might have interacted with something else if something else was offered.
Because of this, you should always evaluate recommendation engines with a final round of A/B testing.
In this blog you can find methods for A/B testing with Amazon Personalize: https://aws.amazon.com/blogs/machine-learning/using-a-b-testing-to-measure-the-efficacy-of-recommendations-generated-by-amazon-personalize/
For a build path, there are workshops in the Retail Demo Store for A/B, interleaving, and multi-armed bandit testing.
- https://github.com/aws-samples/retail-demo-store/blob/master/workshop/3-Experimentation/3.2-AB-Experiment.ipynb
- https://github.com/aws-samples/retail-demo-store/blob/master/workshop/3-Experimentation/3.3-Interleaving-Experiment.ipynb
- https://github.com/aws-samples/retail-demo-store/blob/master/workshop/3-Experimentation/3.4-Multi-Armed-Bandit-Experiment.ipynb
You can use Amazon CloudWatch Evidently to record your A/B testing experiments.
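At the heart of any A/B test is splitting users consistently between the existing solution (control) and the Personalize-backed one (treatment). A minimal, framework-agnostic sketch of deterministic traffic splitting is shown below; all names are illustrative, and a production setup would typically delegate this to a service such as CloudWatch Evidently rather than hand-rolling it:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into 'A' (control) or 'B' (treatment).

    Hashing user_id together with the experiment name means the same user
    always gets the same variant within an experiment, but assignments are
    independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a value in [0, 1).
    bucket = int(digest[:8], 16) / 0x100000000
    return "B" if bucket < split else "A"

print(assign_variant("user-42", "personalize-vs-legacy"))
```

The key property is stability: a user must not flip between variants mid-experiment, or the metrics for both arms become contaminated.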