AWS Rekognition SearchFaces API Question
Hi all, If I've previously indexed a collection of, say, 100 million photos using the IndexFaces API and I want to run a one-to-many search against that entire indexed collection for every new applicant that comes into my system, would I just need to run a single SearchFaces API call? Or would I need to run 100 million SearchFaces or CompareFaces API calls? I'm just trying to estimate the pricing of using Rekognition in a facial recognition system, and this answer obviously plays a massive role in the pricing. Thanks!
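To make the call shape concrete, here is a minimal sketch of a one-to-many search with boto3's SearchFacesByImage, which takes a probe image and searches it against the named collection; the collection name, image file, and thresholds below are placeholder assumptions, not values from the question:

```python
import boto3

rekognition = boto3.client("rekognition")

# One-to-many search: a single SearchFacesByImage call takes one probe image
# and searches the faces previously indexed into the named collection.
response = rekognition.search_faces_by_image(
    CollectionId="applicant-photos",  # hypothetical collection name
    Image={"Bytes": open("new_applicant.jpg", "rb").read()},  # hypothetical probe image
    MaxFaces=10,                # cap on the number of matches returned
    FaceMatchThreshold=90,      # minimum similarity percentage
)

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```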
Memory and CPU allocation for EC2 on Free Tier.
Hi all, I'm working on a project using AWS's free tier, but I'm running into a lot of problems and am now thinking that what I want to do isn't possible. I'm trying to create an EC2 instance with Linux. On it, I want to install Docker, which I'll use to run Airflow. However, I'm getting warnings that I should have 2 CPUs (not 1) and at least 4 GB of memory available (not 1). From what I can tell, there are no EC2 instances I can set up within the free tier that have 2 CPUs and at least 4 GB of memory. Is this correct? Also, I've noticed that when I try to run Airflow with Docker, my SSH connection eventually breaks and I can't SSH back onto the EC2 instance (even after rebooting it). I end up just deleting it and creating a new one. I should note that I'm learning as I go. I was really hoping I could run this small-scale project on AWS, but it's not looking possible.
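One way to check the 2-CPU / 4-GB question directly is to ask the EC2 API which instance types are flagged as free-tier eligible in your region and what vCPU and memory they report. A minimal sketch with boto3; the region is an assumption:

```python
import boto3

# List instance types flagged as free-tier eligible in this region, along
# with their vCPU count and memory, to see whether any of them meet the
# 2 vCPU / 4 GB guidance from the Airflow Docker setup.
ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate(
    Filters=[{"Name": "free-tier-eligible", "Values": ["true"]}]
):
    for itype in page["InstanceTypes"]:
        vcpus = itype["VCpuInfo"]["DefaultVCpus"]
        mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
        print(f"{itype['InstanceType']}: {vcpus} vCPU, {mem_gib:.1f} GiB")
```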
AWS Glue - Glue Jobs - Glue 2.0: Worker Types
Hey guys, I've got several questions regarding **Glue 2.0** worker types for AWS Glue jobs. I have gone through this documentation https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-jobs-job.html as I'm trying to figure out the type and number of workers I need for my jobs, as well as the pricing, and I still have some questions left:

1. How many DPUs is 1 `Standard` worker type equivalent to?
2. How are resources divided with the second executor provided by a single `Standard` worker?
3. What's the difference between having 2 executors (`Standard`) and 1 executor (`G.1X`)? In what situation should I use one over the other?
4. Assuming 1 `Standard` worker = 1 DPU (question 1 might answer this one too), am I charged the same as for a `G.1X` worker?
5. The documentation mentions that, when using Glue 2.0, you need to specify a `worker type` and a `number of workers`. Does this mean that if I am using the `Standard` worker type, all workers (and executors) are going to be active during the execution? The docs specify that with Glue 1.0 you just need to provide a `Max Number`, so I'm assuming not all workers are necessarily active there.

Really appreciate the help guys, Regards
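On point 5, the shape of the job definition may help make the question concrete: with Glue 2.0 you pass an explicit worker type and worker count when creating the job, rather than a Glue 1.0 style maximum capacity. A minimal boto3 sketch; the job name, IAM role, and script location are placeholder assumptions:

```python
import boto3

glue = boto3.client("glue")

# Glue 2.0 jobs take an explicit WorkerType and NumberOfWorkers
# instead of the Glue 1.0 style MaxCapacity ("max number of DPUs").
glue.create_job(
    Name="example-etl-job",                              # hypothetical job name
    Role="arn:aws:iam::123456789012:role/GlueJobRole",   # hypothetical role
    GlueVersion="2.0",
    WorkerType="G.1X",        # or "Standard" / "G.2X"
    NumberOfWorkers=10,
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://example-bucket/scripts/job.py",  # hypothetical script
        "PythonVersion": "3",
    },
)
```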
codecommit pricing question (multiple roles for each IAM User)
My client has around 400 repositories, and there are 2 roles for each repository (so around 800 roles). The client has 700 users (so 700 IAM users) that access these repos; on average each user accesses around 7-8 repos, so each user reaches these repos with around 15 different roles. It's unclear to me how the pricing applies: is my client going to pay for 700 users, or for 700 users * 15 average roles = 10,500? Thanks.
Do AWS free-tier t3.micro servers have a max?
Do AWS free-tier t3.micro servers have a max? I just created a Jenkins controller and a worker agent node, and I'm wondering how many agents I can add to the #DevOps ecosystem. I also plan to deploy a Nexus repository and a Prometheus/Grafana monitoring server on yet more t3.micro servers. Will I incur a bill?
SageMaker Feature Store pricing
SageMaker Feature Store has been full of questions for me ever since I started working with it. The thing that is absolutely not clear to me is the integration with Athena/Glue. Since there is no built-in option in Feature Store to, for instance, get all records or get records by criteria, I have to use an AthenaQuery branched off the FeatureGroup for that purpose. If I do something like

```python
# Assumes an existing SageMaker session and an already-created feature group.
from sagemaker.feature_store.feature_group import FeatureGroup

feature_group = FeatureGroup(name='<...>', sagemaker_session=sagemaker_session)

# Feature Store exposes its offline store through an Athena query object.
query = feature_group.athena_query()
query.run('SELECT * FROM <whatever> LIMIT 1000', output_location='s3://...')
query.wait()
df = query.as_dataframe()
```

will I also be charged for querying Athena? What about Hive? If yes, then what's the purpose of using Feature Store? Its API is too limited for implementing continuous training; I'm instead forced to use at least Athena, and therefore pay for an extra service on top of what I already pay for Feature Store.
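For contrast with the Athena path above, the built-in online-store API only fetches individual records by identifier, which is part of why bulk reads end up going through Athena. A minimal sketch with boto3; the feature group name and record identifier are placeholder assumptions:

```python
import boto3

featurestore_runtime = boto3.client("sagemaker-featurestore-runtime")

# The online store only supports point lookups by record identifier;
# there is no "get all records" or filter-by-criteria call here.
record = featurestore_runtime.get_record(
    FeatureGroupName="my-feature-group",          # hypothetical feature group
    RecordIdentifierValueAsString="customer-42",  # hypothetical identifier
)
print(record.get("Record"))
```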
Billing of 'triggers'
> Rules triggered: $0.15 (per million rules triggered / per million actions executed) Does 'triggered' mean a rule matched? In other words, if the rule is a 'select' and the message didn't match the WHERE clause, does that count as a 'trigger' in terms of pricing? It takes CPU cycles to process those rules even if they don't result in a match, so it seems wrong to me that that would be free.
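Assuming this is about the IoT Core rules engine (the question doesn't name the service), here is a minimal sketch of the kind of rule being described: a SELECT with a WHERE clause that many incoming messages will be evaluated against but not match. The rule name, topic filter, and Lambda ARN are hypothetical:

```python
import boto3

iot = boto3.client("iot")

# A rule whose WHERE clause filters out most messages: the pricing question
# is whether evaluating a non-matching message counts as a "rule triggered".
iot.create_topic_rule(
    ruleName="high_temperature_alert",  # hypothetical rule name
    topicRulePayload={
        "sql": "SELECT temperature FROM 'sensors/+/telemetry' WHERE temperature > 50",
        "awsIotSqlVersion": "2016-03-23",
        "ruleDisabled": False,
        "actions": [
            {
                "lambda": {
                    # hypothetical function ARN
                    "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:alert"
                }
            }
        ],
    },
)
```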
Pricing of D-Wave's Quantum Annealer if qbsolv is used
I have the following question: if I use D-Wave's Quantum Annealer via Amazon Braket and also use the D-Wave method qbsolv, because my optimisation problem instance is too large for the quantum processing unit of D-Wave's machine, how exactly will this be charged? To be more precise: will this still be charged as a single task, no matter how many subproblems qbsolv generates, or will every created subproblem be charged as a separate task by Amazon Braket? I will call the method qbsolv only once. Thanks in advance.
S3 Individual Bucket Charges
Hello, Is there an easy way to check the cost of individual S3 buckets? I have multiple S3 buckets with video content in them. I also have CloudFront distributions that access the video content from those S3 buckets. So, ideally, I would want to know the price for outgoing data plus the CloudFront cost for each time a video is watched.
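One common approach (an assumption here, not something from the question) is to tag each bucket and its CloudFront distribution with a cost allocation tag, then group Cost Explorer results by that tag. A minimal sketch with boto3; the tag key and date range are placeholders, and the tag must already be activated as a cost allocation tag for this to return per-bucket data:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Group S3 cost by a cost allocation tag applied to each bucket.
# Assumes a "bucket-name" tag has been activated as a cost allocation tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},  # example range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Simple Storage Service"],
        }
    },
    GroupBy=[{"Type": "TAG", "Key": "bucket-name"}],  # hypothetical tag key
)

for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```

The same query with the SERVICE filter set to "Amazon CloudFront" would give the delivery side of the cost, provided the distributions carry the same tag.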