Questions tagged with SaaS on AWS
Programmatic CompareFaces gives far different results than demo UI
When using the Python boto3 `compare_faces` function with the same images, I get far different results than when I compare them in the demo UI. For example: ~83 similarity / ~99 confidence with the Python code versus ~1.9 similarity / ~99 confidence in the demo UI. Why? I thought it might have to do with the QualityFilter, but it can't be changed because of what the documentation says: `To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.` Please let me know what I am missing. See screenshots: demo UI ![demo ui](https://repost.aws/media/postImages/original/IMxW_odd05Tcey02F2CHcJeg) programmatically ![programmatically](https://repost.aws/media/postImages/original/IMwYPo3o0HRR2LamQvIK7iiw) Thanks
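A few things worth ruling out here: the console demo applies its own similarity threshold and may resize or compress the uploaded images before calling the API, and the quoted face-model restriction applies to collection-based operations, whereas `CompareFaces` accepts a `QualityFilter` parameter directly. A minimal sketch (region and file paths are placeholders) that pins `SimilarityThreshold` and `QualityFilter` explicitly, so the programmatic call and the demo compare like for like:

```python
def compare_faces(source_path, target_path, region="us-east-1"):
    """Call Rekognition CompareFaces with an explicit threshold and
    quality filter, rather than relying on defaults."""
    import boto3  # deferred so the pure helper below runs without the SDK
    client = boto3.client("rekognition", region_name=region)
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        return client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=0,  # return every candidate match
            QualityFilter="NONE",   # disable quality filtering explicitly
        )

def summarize(response):
    """Reduce a CompareFaces response to (similarity, confidence) pairs."""
    return [(m["Similarity"], m["Face"]["Confidence"])
            for m in response.get("FaceMatches", [])]

# Shape of the response, illustrated with sample data (no AWS call):
sample = {"FaceMatches": [{"Similarity": 83.2, "Face": {"Confidence": 99.9}}]}
print(summarize(sample))  # [(83.2, 99.9)]
```

If the numbers still diverge with the threshold and filter pinned, comparing the exact image bytes sent in each case would be the next step.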
Triggering Bring Your Own DKIM (BYODKIM) validation
Hi there, I am using Amazon SES to allow customers to send emails from my SaaS application. Bring Your Own DKIM (BYODKIM) looks like a great solution because it requires a single DNS record for DKIM configuration, and the fact that Amazon is used is not visible from the DNS values. I followed the official tutorial to create a private and public key (https://docs.aws.amazon.com/ses/latest/dg/send-email-authentication-dkim-bring-your-own.html):

```
openssl genrsa -f4 -out private.key 2048
openssl rsa -in private.key -outform PEM -pubout -out public.key
```

Once done, I created a new identity with BYODKIM in the dashboard. As the value, I again followed the docs:

> You have to delete the first and last lines (-----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY-----, respectively) of the generated private key. Additionally, you have to remove the line breaks in the generated private key. The resulting value is a string of characters with no spaces or line breaks.

Then I configured the DNS TXT record on Cloudflare with the value `p=yourPublicKey`, where `yourPublicKey` is, again, as the docs say:

> When you publish (add) your public key to your DNS provider, it must be formatted as follows:
> You have to delete the first and last lines (-----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----, respectively) of the generated public key. Additionally, you have to remove the line breaks in the generated public key. The resulting value is a string of characters with no spaces or line breaks.
> You must include the p= prefix as shown in the Value column in the table above.

A few hours have elapsed, and the DKIM configuration still appears as "Pending" in the dashboard. The TXT record has been propagated for hours: I checked with `dig TXT myselector._domainkey.mydomain.com` from several locations. With CNAME records, validation seems to take minutes at most. What is the expected time for BYODKIM? I mean, does the validation happen only once every X hours or days?
Is there really no way to force-trigger a check/validation via the API, for instance? Otherwise, my private or public key may have the wrong format. Would Amazon have displayed a specific error if that were the case? BYODKIM looks like a great solution for SaaS use cases, but if validation takes several hours or days, that's a bit of a deal-killer.
Evaluating Timestream DB for multi-tenant SaaS application - Per database encryption quota limit
We are evaluating Timestream DB for handling metrics for a SaaS application that we are developing on AWS. To maintain data isolation, we are looking at encrypting each customer's data separately. For Timestream this appears to mean we need a separate database for each customer, as encryption is done at the database level. In short, this means we are limited to 500 databases per account (the current quota limit), and therefore 500 customers, if we stick to a single AWS account. I've checked the Service Quotas console and it appears none of these limits can be increased. Do you have a recommended pattern for overcoming this limitation, or a different approach? Thanks,
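One pattern that sidesteps the per-database quota is the pooled model: all tenants share a single database and table, the tenant is recorded as a dimension on every record, and isolation is enforced at the application layer on every query. This trades per-tenant encryption keys for a shared key, so it may not satisfy strict compliance requirements. A sketch, where the `customer_id` dimension name and the database/table names are assumptions:

```python
import time

def build_tenant_record(customer_id, metric_name, value, dimensions=None):
    """Build one Timestream record carrying the tenant as a dimension,
    so all tenants can share a single database/table (pooled model)."""
    dims = [{"Name": "customer_id", "Value": customer_id}]
    for name, val in (dimensions or {}).items():
        dims.append({"Name": name, "Value": val})
    return {
        "Dimensions": dims,
        "MeasureName": metric_name,
        "MeasureValue": str(value),
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),  # epoch milliseconds
    }

record = build_tenant_record("tenant-42", "cpu_utilization", 71.5)
# Hypothetical write (database/table names are placeholders):
# boto3.client("timestream-write").write_records(
#     DatabaseName="metrics", TableName="shared", Records=[record])
```

A middle ground is sharding tenants across a handful of databases (e.g. by compliance tier), reserving database-per-tenant only for customers that contractually require a dedicated key.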
Why does import fail with Jira data to QuickSight?
I've generated an API token for our Jira board on our Atlassian account. After going to [New Dataset] -> [Jira] in QuickSight, the connection is validated after entering the API token. But this message appears even though I am an admin: "The SaaS isn't available. The URL might not be accessible, or the instance might not be available. You can check the service and try again. Contact your data source administrator for assistance."
EventBridge + Kinesis
We're trying to send an event to EventBridge after a producer publishes to a Kinesis Data Stream. We then want to configure an EventBridge API destination to send the event to Salesforce and create a platform event. Kinesis -> EventBridge -> Rule + Bus + API Destination -> Salesforce

1) We first thought that we could set up a rule on EventBridge so that certain Kinesis events matching filtering criteria would be routed to an event bus. We were unsuccessful in finding any Kinesis events routed to EventBridge. **Do we need an additional layer between Kinesis and EventBridge?**

2) We then tried a Lambda-based approach: we wrote a Lambda with Kinesis as the trigger and EventBridge as the destination. While we saw the Lambda function getting executed, the destination never gets invoked. On further research, it seems that an on-success destination isn't supported on a Lambda where Kinesis is configured as the trigger. Can someone confirm this?

Looking for guidance on whether patterns 1 and 2 are at all feasible, and any pointers on implementation.
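On point 2: Lambda destinations apply to asynchronous invocations, while stream event sources like Kinesis invoke the function synchronously (only on-failure destinations are supported there), which would explain the observed behavior. The usual workaround is to call `PutEvents` explicitly inside the function. A hedged sketch where the source, detail type, and bus name are placeholders:

```python
import base64

def build_entries(event, bus_name="salesforce-bus"):
    """Turn the Kinesis trigger payload into EventBridge PutEvents entries.
    Kinesis record data arrives base64-encoded."""
    return [
        {
            "Source": "my.app.kinesis-forwarder",   # hypothetical source name
            "DetailType": "KinesisRecordReceived",  # hypothetical detail type
            "Detail": base64.b64decode(r["kinesis"]["data"]).decode("utf-8"),
            "EventBusName": bus_name,               # hypothetical bus name
        }
        for r in event["Records"]
    ]

def handler(event, context):
    import boto3  # deferred so build_entries can be exercised without the SDK
    client = boto3.client("events")
    entries = build_entries(event)
    for i in range(0, len(entries), 10):  # PutEvents accepts max 10 entries/call
        client.put_events(Entries=entries[i:i + 10])

# Local check with a fake trigger payload (no AWS call):
fake = {"Records": [{"kinesis": {"data": base64.b64encode(b'{"x": 1}').decode()}}]}
print(build_entries(fake)[0]["Detail"])  # {"x": 1}
```

From there, a rule on the target bus matching the `Source`/`DetailType` can route to the API destination for Salesforce. Note that `Detail` must be a JSON string, so this assumes the Kinesis records carry JSON payloads.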
Is SNS mandatory to determine 'subscribe-success' when verifying a customer subscription for a SaaS-based platform?
When the customer is redirected to the SaaS-based product landing page, they will have an x-amzn-marketplace-token that can be exchanged for the unique customer identifier, the customer's AWS account ID, and the corresponding product code. However, there are a few places in the SaaS manual that state not to create any resources for the customer unless the 'subscribe-success' status is verified. Is the exchange of the marketplace token enough to verify the 'subscribe-success' status, or is confirming an SNS notification with 'subscribe-success' mandatory? I've looked around the documentation but did not find any clarification, so I would appreciate any help.
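For context: the token exchange (ResolveCustomer) identifies who the customer is, but does not by itself confirm that the subscription completed, which is why the docs say to gate provisioning on the SNS notification. A minimal sketch of that check, with the notification's payload shape (the `action` field and related keys) as an assumption based on the Marketplace subscription-notification format:

```python
import json

def is_subscribe_success(sns_message_body):
    """Return True if a Marketplace SNS notification reports a completed
    subscription; resource provisioning can be gated on this check."""
    message = json.loads(sns_message_body)
    return message.get("action") == "subscribe-success"

# Example body shaped like a subscription notification (fields assumed):
body = json.dumps({
    "action": "subscribe-success",
    "customer-identifier": "X01EXAMPLEX",
    "product-code": "exampleProductCode",
})
print(is_subscribe_success(body))  # True
```

A typical flow, then: exchange the token on the landing page to store the customer identifier, but create the customer's resources only after this check passes for that identifier.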