
Questions tagged with Amazon Personalize



How do we get correct recommendations if user or item data changes, when the previous interactions were based on the old data?

Say we have some users and items in an online shop, and we also keep their interaction data. Now say the metadata (a categorical list) of a user gets a small update, i.e. some values are added to the list. If we then do a full retraining, the training will lead to incorrect recommendations, because that user's interactions happened while the user still had the old data. One might say we should just delete those interactions and the problem is solved, but then we lose interactions, which is not good. The metadata (categorical list) is updated by adding values, not replacing them, so deleting is not an option.

Let me give an example. In a shop, each item/product has one or more prerequisite licenses, meaning only users who hold those licenses can buy and use the item. Users collect licenses elsewhere, outside the shop's concern, and they collect them when they need them.

User schema:

```
{
  "type": "record",
  "name": "Users",
  "namespace": "com.amazonaws.personalize.schema",
  "fields": [
    { "name": "USER_ID", "type": "string" },
    { "name": "LICENSES", "type": "string", "categorical": true }  // Currently available licenses of the user
  ],
  "version": "1.0"
}
```

Item schema:

```
{
  "type": "record",
  "name": "Items",
  "namespace": "com.amazonaws.personalize.schema",
  "fields": [
    { "name": "ITEM_ID", "type": "string" },
    { "name": "PREREQUISITE_LICENSES", "type": "string", "categorical": true }  // The licenses a user needs in order to buy and use the item
  ],
  "version": "1.0"
}
```

Now say a user has licenses L1 and L2 and has bought and used some products that require license L1, L2, or L1 & L2, and we have all of those interactions. If that user then acquires another license, L3, and we do a full training at that point, it will look as if a user with licenses L1, L2, and L3 has only interacted with the L1/L2 items. In reality, the user interacted with those items because at that time he did not yet have license L3. This gives the model wrong information, and it can happen for other users too, so recommendation accuracy will drop. How can we handle that situation? Is there any idea?
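One possible direction, offered as an assumption rather than something stated in the question: snapshot the user's license set at interaction time as contextual metadata on the Interactions dataset, so each interaction keeps the state the user actually had when it happened and a later full training does not reinterpret old interactions under the new user metadata. A minimal sketch; the field name LICENSES_AT_EVENT and the schema name are hypothetical.

```python
import json

import boto3

personalize = boto3.client("personalize")

# Interactions schema with a contextual, categorical field that records the
# licenses the user held when the interaction happened (hypothetical field name).
interactions_schema = {
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {"name": "USER_ID", "type": "string"},
        {"name": "ITEM_ID", "type": "string"},
        {"name": "TIMESTAMP", "type": "long"},
        {"name": "LICENSES_AT_EVENT", "type": "string", "categorical": True},
    ],
    "version": "1.0",
}

response = personalize.create_schema(
    name="shop-interactions-with-license-context",  # hypothetical schema name
    schema=json.dumps(interactions_schema),
)
print(response["schemaArn"])
```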
1 answer · 0 votes · 6 views · asked 3 months ago

How do we give recommendations when users create/post content, like on YouTube, TikTok, etc.?

I've explored Amazon Personalize, etc. for generating recommendations. Amazon Personalize can be used when all the content belongs to the company/a single entity. For example, on Netflix, all the content (the catalogue of movies, TV shows, etc.) is theirs and they generate personalized movie/TV show recommendations. But what about a platform similar to YouTube or TikTok, where users can:

- post content (users are continuously generating content)
- view other users' content and interact (like, share, repost, comment)
- follow other users

When there is user-generated content like this and users follow other users (meaning they probably want recommendations from the users they follow), how do we give recommendations? Can we do it with Amazon Personalize? What algorithms and tools can be used?

**Lots of content - handling the cold start problem**

When there is user-generated content, there is going to be a lot of content generated every minute. So how do we handle the cold start problem (i.e. how do we decide who to recommend all of this new influx of content to)? Usually we might experiment with new content, recommend it to some users, see how they respond, and decide accordingly how to recommend it. But when content is created at a very high frequency, how do we reduce the time it takes to give recommendations/push the new content to users quickly? And does anybody know whether the questions above can be addressed using Amazon Personalize (in any way)? Open to any and all suggestions. Thank you!
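On the "new content every minute" part, one thing Personalize does offer (sketched here under assumptions, not as a full answer) is incrementally adding newly posted items through the personalize-events PutItems API, so fresh content becomes a recommendation candidate without waiting for a full dataset import. The dataset ARN and the CREATOR_ID/GENRE item fields below are placeholders.

```python
import json

import boto3

personalize_events = boto3.client("personalize-events")

def register_new_video(items_dataset_arn: str, item_id: str, creator_id: str, genre: str) -> None:
    """Incrementally add a freshly posted piece of content to the Items dataset.

    The property keys are camelCase versions of hypothetical CREATOR_ID and GENRE
    fields; they must match the Items schema of the dataset group.
    """
    personalize_events.put_items(
        datasetArn=items_dataset_arn,
        items=[
            {
                "itemId": item_id,
                "properties": json.dumps({"creatorId": creator_id, "genre": genre}),
            }
        ],
    )

# Hypothetical usage right after a user uploads a video:
# register_new_video("arn:aws:personalize:...:dataset/...", "video-123", "user-42", "comedy")
```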
1 answer · 0 votes · 10 views · asked 5 months ago

How to get recommendations from a domain dataset group recommender with the SDK for Python (Boto3)?

We use a domain dataset group in Amazon Personalize. We finished importing the data (interactions, users, items) and created a recommender ("Recommended for you"). We can get recommendations from the recommender in the Amazon Personalize console. Next, we would like to get recommendations with the SDK for Python (Boto3). We sent the request as described in the Developer Guide, but a ParamValidationError occurred.

Getting recommendations with a recommender (AWS SDKs): https://docs.aws.amazon.com/personalize/latest/dg/domain-dsg-recommendations.html#get-domain-rec-sdk

How can we get recommendations from a domain dataset group recommender with the SDK for Python (Boto3)? We tried the following patterns:

**Pattern 1**

Request

```
response = personalizeRt.get_recommendations(
    recommenderArn = 'Recommender ARN',
    userId = 'User ID',
    numResults = 10
)
```

Error

```
ParamValidationError: Parameter validation failed:
Missing required parameter in input: "campaignArn"
Unknown parameter in input: "recommenderArn", must be one of: campaignArn, itemId, userId, numResults, context, filterArn, filterValues
```

**Pattern 2**

Request

```
response = personalizeRt.get_recommendations(
    campaignArn = 'Recommender ARN',
    userId = 'User ID',
    numResults = 10
)
```

Error

```
InvalidInputException: An error occurred (InvalidInputException) when calling the GetRecommendations operation: The given campaign ARN is invalid: Recommender ARN
```
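For reference, a minimal sketch of the call that should work once the installed boto3/botocore is recent enough for personalize-runtime's GetRecommendations to accept the recommenderArn parameter (the ParamValidationError above is typical of an outdated SDK). The ARN and user ID are placeholders.

```python
# Assumes an up-to-date SDK, e.g.: pip install -U boto3 botocore
import boto3

personalize_rt = boto3.client("personalize-runtime")

response = personalize_rt.get_recommendations(
    recommenderArn="arn:aws:personalize:us-east-1:123456789012:recommender/recommended-for-you",  # placeholder
    userId="some-user-id",  # placeholder
    numResults=10,
)

for item in response["itemList"]:
    print(item["itemId"])
```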
1 answer · 0 votes · 4 views · asked 5 months ago

Giving weights to event types in Amazon Personalize

1) For the VIDEO_ON_DEMAND domain, some use cases include multiple event types. For example, the 'Top picks for you' use case includes two event types, 'watch' and 'click'. Is 'watch' given more weight than 'click' when training the model? In general, when there is more than one event type, do domain recommenders give more weight to some event types?

2) In our use case, we have a platform that recommends video content. However, we have multiple event types, and some events need to be given more weight than others. Here are our event types in order of importance:

SHARE > LIKE > WATCH_COMPLETE > WATCH_PARTIAL > STARTED > SKIP

So when training the model, we would want 'SHARE' to have more weight than 'LIKE', 'LIKE' to have more weight than 'WATCH_COMPLETE', and so on. I was looking into custom solutions, and it seems there is no way to assign weights when using Personalize's custom solutions, as mentioned in this [post](https://stackoverflow.com/questions/69456739/any-way-to-tell-aws-personalize-that-some-interactions-count-more-than-others/69483117#69483117)...

---

**So when using Amazon Personalize, should we use domain recommenders or build custom solutions for our use case?**

**If we cannot give weights to different event types using Personalize, then what are the alternatives? Should we use Amazon SageMaker and build models from scratch?**

*Open to any and all suggestions.* Thank you!
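One partial workaround, stated as an assumption rather than true per-event-type weighting: encode each event type's relative importance as a numeric EVENT_VALUE in the interactions dataset, which at least lets a custom solution exclude low-value events via eventValueThreshold. A minimal sketch; the weight mapping, sample data, and file name are hypothetical.

```python
import csv
import time

# Hypothetical importance scores per event type (larger = more important).
EVENT_WEIGHTS = {
    "SHARE": 6.0,
    "LIKE": 5.0,
    "WATCH_COMPLETE": 4.0,
    "WATCH_PARTIAL": 3.0,
    "STARTED": 2.0,
    "SKIP": 1.0,
}

# Raw events: (user_id, item_id, event_type, timestamp) -- placeholder data.
raw_events = [
    ("user-1", "video-42", "SHARE", int(time.time())),
    ("user-2", "video-42", "SKIP", int(time.time())),
]

# Write an interactions CSV whose EVENT_VALUE column carries the weight.
# A custom solution can then use eventValueThreshold to drop events below a
# chosen value (e.g. ignore SKIP/STARTED); note this is filtering, not genuine
# weighting during training.
with open("interactions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["USER_ID", "ITEM_ID", "EVENT_TYPE", "EVENT_VALUE", "TIMESTAMP"])
    for user_id, item_id, event_type, ts in raw_events:
        writer.writerow([user_id, item_id, event_type, EVENT_WEIGHTS[event_type], ts])
```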
1 answer · 0 votes · 9 views · asked 5 months ago

How can we accurately define "better" for recommender metrics with CW Evidently?

I'm exploring using the new [CloudWatch Evidently](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Evidently.html) feature for measuring the success of recommendation model deployments with [Amazon Personalize](https://docs.aws.amazon.com/personalize/latest/dg/what-is-personalize.html).

In this context, assigning a user or session to a particular feature variation (baseline recommendations list vs Personalize campaign 1 vs Personalize campaign 2) **might** trigger one *or more* valuable "events":

- Maybe a click/view for an individual item (What's this item's price? How far down the recommendation list was it?)
- Maybe a checkout for a basket of products (with an overall total price or total margin)

If I understand right (?), Evidently experiment dashboards for "statistical significance" and "improvement" today look just at the **distribution of values** of recorded events, in terms of averages and spread, right? The number of data points is used for assessing "how significant" but not "what's better"?

If so, this seems like a challenge for "optional" events: for example, what if one treatment gives me a really high basket value on average (only recommending expensive products), but very few users convert? I could see really high metrics for the new treatment, even though its overall value was very poor.

Do I understand correctly here? And if so, how might you recommend defining Evidently metrics for these kinds of use cases? For example, maybe we'd need to find a way of generating zero-value metric events when a session is abandoned?
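Along the lines of that last idea, here is a minimal, hedged sketch of recording a zero-value custom event for an abandoned session with Evidently's PutProjectEvents API, so non-converters pull a treatment's average down. The project name, the key paths (details.basketValue, userDetails.userId), and the session-timeout trigger are all assumptions that would have to match the experiment's metric definition.

```python
import json
from datetime import datetime, timezone

import boto3

evidently = boto3.client("evidently")

def record_abandoned_session(project_name: str, user_id: str) -> None:
    """Emit a zero-value conversion event for a session that ended without checkout.

    Assumes the experiment metric uses entityIdKey "userDetails.userId" and
    valueKey "details.basketValue" (hypothetical names).
    """
    evidently.put_project_events(
        project=project_name,
        events=[
            {
                "timestamp": datetime.now(timezone.utc),
                "type": "aws.evidently.custom",
                # Custom event payload is a JSON string; the key paths must match
                # the metric definition configured on the experiment.
                "data": json.dumps(
                    {
                        "details": {"basketValue": 0.0},
                        "userDetails": {"userId": user_id},
                    }
                ),
            }
        ],
    )

# Hypothetical usage when a session times out without a checkout event:
# record_abandoned_session("my-recs-experiment", "user-123")
```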
0 answers · 0 votes · 6 views · EXPERT · asked 5 months ago

How do you give negative feedback to Amazon Personalize, i.e. tell Personalize that a user doesn't like a certain item?

I did the Amazon Personalize deep dive series on YouTube. At the timestamp [8:33 in the video](https://youtu.be/TEioktJD1GE?t=513), it was mentioned that **'Personalize does not understand negative feedback,' and that any interaction you submit is assumed to be a positive one.**

But I think that giving negative feedback could improve the recommendations we give as a whole. If Personalize knew that a user does not like a given item 'A', it could avoid recommending items similar to 'A' in the future. **Is there any way in which we can give negative feedback** (e.g. the user doesn't like items x, y, z) **to Amazon Personalize**?

---

A possible way to give negative feedback that I thought of: let's say users can rate movies out of 5. Every time a user gives a rating >= 3, we add an additional interaction to the dataset (i.e. we have two interactions saying the user rated the movie >= 3 in interactions.csv instead of just one). However, if he gives a rating <= 2 (meaning he probably doesn't like the movie), we keep just the single interaction (i.e. we have only one interaction saying the user rated the movie <= 2 in the interactions.csv file).

Would this in any way help convey to Personalize that ratings <= 2 are not as important/that the user did not like them?
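For reference, a minimal sketch of the duplication idea described above, assuming a ratings table with USER_ID, ITEM_ID, EVENT_TYPE, EVENT_VALUE (the 1-5 rating), and TIMESTAMP columns; note this simply over-samples positive interactions rather than giving the model explicit negative feedback.

```python
import pandas as pd

# Hypothetical ratings data: EVENT_VALUE is the 1-5 star rating.
ratings = pd.DataFrame(
    {
        "USER_ID": ["u1", "u1", "u2"],
        "ITEM_ID": ["movie-1", "movie-2", "movie-1"],
        "EVENT_TYPE": ["rating", "rating", "rating"],
        "EVENT_VALUE": [5, 2, 4],
        "TIMESTAMP": [1650000000, 1650000100, 1650000200],
    }
)

# Duplicate interactions with rating >= 3 so "liked" items appear twice,
# while ratings <= 2 appear only once (the workaround proposed in the question).
liked = ratings[ratings["EVENT_VALUE"] >= 3]
interactions = pd.concat([ratings, liked], ignore_index=True)

interactions.to_csv("interactions.csv", index=False)
```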
1 answer · 2 votes · 13 views · asked 5 months ago