How much interaction data do you need for decent/good recommendations in Amazon Personalize? Any suggestions?


In the documentation, the quotas page lists the minimum and maximum limits for the datasets you give to Personalize. But can someone suggest how much interaction data you would need in order to generate decent/good recommendations?

I used the MovieLens-100k dataset and set performHPO to True. For the trained solution, the normalized discounted cumulative gain @ 25 was 0.2977 and the precision @ 25 was 0.0512. Is this low performance due to not having enough interaction data?
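For reference, this is roughly how a solution with performHPO enabled and its offline metrics can be set up and retrieved with boto3 (the ARNs and names below are placeholders, not real ones):

```python
import boto3

personalize = boto3.client("personalize")

# Placeholder ARNs -- substitute your own dataset group.
solution = personalize.create_solution(
    name="movielens-user-personalization",
    datasetGroupArn="arn:aws:personalize:us-east-1:123456789012:dataset-group/movielens",
    recipeArn="arn:aws:personalize:::recipe/aws-user-personalization",
    performHPO=True,  # let Personalize tune hyperparameters
)

version = personalize.create_solution_version(solutionArn=solution["solutionArn"])

# After training completes and the solution version is ACTIVE,
# the offline evaluation metrics become available.
metrics = personalize.get_solution_metrics(
    solutionVersionArn=version["solutionVersionArn"]
)["metrics"]
print(metrics["normalized_discounted_cumulative_gain_at_25"])
print(metrics["precision_at_25"])
```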

Asked 2 years ago · 275 views
1 Answer

Hi,

The Amazon Personalize quotas represent the absolute minimum the service needs to be able to train a model at all. For a machine learning model to learn, it needs to be able to extract information from the data you use for training. Good datasets have:

  • multiple users interacting with multiple items, i.e. each user interacts with many items
  • many items have interactions from multiple users, i.e. each item has many interactions from users

This does not have to be true for every item and every user, but you should have a "core" of users and items that have a large number of interactions.
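As a quick sanity check on your own data, you can look at how many interactions that "core" actually has. A minimal sketch, assuming your interactions live in a CSV with USER_ID and ITEM_ID columns (column names and the threshold of 20 are just illustrative):

```python
import pandas as pd

# Assumes an interactions CSV with at least USER_ID and ITEM_ID columns.
interactions = pd.read_csv("interactions.csv")

per_user = interactions.groupby("USER_ID")["ITEM_ID"].count()
per_item = interactions.groupby("ITEM_ID")["USER_ID"].count()

print("interactions per user:\n", per_user.describe())
print("interactions per item:\n", per_item.describe())

# A healthy "core": many users with a substantial number of interactions,
# and many items that have been interacted with by many distinct users.
print("users with >= 20 interactions:", (per_user >= 20).sum())
print("items with >= 20 interactions:", (per_item >= 20).sum())
```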

The overall total you need varies with your data, and depending on how your users behave you will need more or fewer interactions for the model to be able to give good recommendations.

Also, please keep in mind that the MovieLens dataset is not a dataset of actual user interactions, which is what a recommendation engine should be trained on. It is a publicly available movie ratings dataset that you can convert into data suitable for getting familiar with Amazon Personalize. In a real deployment, as users interact with the items recommended to them, the data you use for the next training also improves, because users are presented with more things that interest them and consequently interact with them. Because the MovieLens dataset is not an interactions dataset and does not reflect how users would interact with recommended items over time, this shows up in the metrics. Any metrics from experiments with the MovieLens dataset should therefore be taken with a grain of salt.
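If you do want to experiment with MovieLens anyway, a common approach is to treat each rating (or only the high ratings) as a positive interaction event. A minimal sketch, assuming the MovieLens-100k `u.data` file, which is tab-separated with user id, item id, rating, and timestamp columns (the rating threshold and event type label are arbitrary choices):

```python
import pandas as pd

# MovieLens 100k "u.data": tab-separated user id, item id, rating, timestamp.
ratings = pd.read_csv(
    "u.data",
    sep="\t",
    names=["USER_ID", "ITEM_ID", "RATING", "TIMESTAMP"],
)

# Keep only high ratings and treat them as positive interaction events.
interactions = ratings[ratings["RATING"] >= 4][["USER_ID", "ITEM_ID", "TIMESTAMP"]]
interactions["EVENT_TYPE"] = "watch"  # arbitrary event type label

# Personalize expects a CSV whose columns match your interactions schema.
interactions.to_csv("interactions.csv", index=False)
```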

Interestingly, this is also the case when you use data generated by a different recommendation solution to train an Amazon Personalize model. The metrics are initially computed on historical data, where users interacted with what was on their screen (recommended by the previous solution), but they might have interacted with something else had something else been offered.

Because of this, the final evaluation of a recommendation engine should always be done with A/B testing.

In this blog you can find methods for A/B testing with Amazon Personalize: https://aws.amazon.com/blogs/machine-learning/using-a-b-testing-to-measure-the-efficacy-of-recommendations-generated-by-amazon-personalize/

For a build path, there are workshops in the Retail Demo Store for A/B, interleaving, and multi-armed bandit testing.

You can use Amazon CloudWatch Evidently to record your A/B testing experiments.
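As a rough sketch of what that could look like with boto3 (the project, feature, and event field names here are made up for illustration, and assume an Evidently project, feature, and experiment have already been created): call EvaluateFeature to decide which variation a user sees, then report the outcome with PutProjectEvents so Evidently can compute the experiment metrics.

```python
import datetime
import json

import boto3

evidently = boto3.client("evidently")

# Hypothetical project/feature names created beforehand in CloudWatch Evidently.
PROJECT = "recommendations-ab-test"
FEATURE = "personalize-recommender"


def get_variation(user_id: str) -> str:
    """Ask Evidently which variation (e.g. old engine vs. Personalize) this user gets."""
    response = evidently.evaluate_feature(
        project=PROJECT,
        feature=FEATURE,
        entityId=user_id,
    )
    return response["variation"]


def record_click(user_id: str) -> None:
    """Report a custom event for the experiment metrics.

    The JSON shape of "data" must match the event pattern defined on your
    experiment's metric definition; this one is only an example.
    """
    evidently.put_project_events(
        project=PROJECT,
        events=[
            {
                "timestamp": datetime.datetime.utcnow(),
                "type": "aws.evidently.custom",
                "data": json.dumps({"details": {"userId": user_id, "clicked": 1}}),
            }
        ],
    )
```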

AWS
Anna_G
Answered 1 year ago
