Questions tagged with Cost Optimization
Content language: English
Sort by most recent
Secured ISP to Lambda?
Hi, I want to securely stream data from a rack I have at an ISP to AWS Lambda, and I was wondering what the best solution might be. I thought of some sort of VPN, and perhaps Kinesis to Lambda, but I'm not sure how I would initiate that from the on-premises rack; that was a shot in the dark. I would appreciate any input. Thank you.
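For the Kinesis idea, a minimal sketch of what the on-premises producer side could look like (the stream name `rack-telemetry` and the `sensor_id` field are hypothetical; the rack would need AWS credentials, e.g. an IAM user or IAM Roles Anywhere, and traffic to the Kinesis endpoint is TLS-encrypted by default):

```python
import json
from typing import Iterable, List

def batch_records(records: Iterable[dict], max_batch: int = 500) -> List[list]:
    """Kinesis PutRecords accepts at most 500 records per call,
    so split the payload into compliant batches."""
    batches, current = [], []
    for rec in records:
        current.append({
            "Data": json.dumps(rec).encode("utf-8"),
            "PartitionKey": str(rec.get("sensor_id", "default")),
        })
        if len(current) == max_batch:
            batches.append(current)
            current = []
    if current:
        batches.append(current)
    return batches

def send_to_kinesis(records, stream_name="rack-telemetry"):
    """Push batches to a Kinesis stream; a Lambda can then be
    subscribed to the stream as an event source."""
    import boto3  # assumption: boto3 is installed on the on-prem host
    client = boto3.client("kinesis")
    for batch in batch_records(records):
        client.put_records(StreamName=stream_name, Records=batch)
```

The batching helper is pure Python; only `send_to_kinesis` touches AWS.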
AWS Compute Savings Plan
Hi team, our customer has purchased a $4/hour Compute Savings Plan. Looking at the Savings Plans coverage, it shows 67% applied for one instance type, 80% for another, and 79% for a third. My understanding was that it should first cover one instance type at 100% (the one with the maximum savings percentage) and then apply to the others, but the observation above is confusing. Could someone please clarify? Regards, Nikhil Shah
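For context, Savings Plans are applied hour by hour to whatever usage yields the highest savings percentage first, and hourly usage fluctuates, so per-type coverage averaged over a month can land below 100% for every type. A simplified sketch of the allocation for a single hour (all rates hypothetical; real billing has many more details):

```python
def apply_commitment(usage, commitment):
    """Greedy hourly allocation: cover the usage with the highest
    savings percentage first, then move down the list.
    `usage` maps instance type -> (on_demand_cost, savings_pct);
    returns the covered fraction per instance type for that hour."""
    covered = {}
    remaining = commitment
    # Sort by savings percentage, highest first
    for itype, (cost, pct) in sorted(
            usage.items(), key=lambda kv: kv[1][1], reverse=True):
        sp_cost = cost * (1 - pct)   # discounted cost drawn from the plan
        spend = min(remaining, sp_cost)
        covered[itype] = spend / sp_cost if sp_cost else 0.0
        remaining -= spend
    return covered
```

With a $4/hour commitment and, say, $3 of m5 usage at 30% savings plus $4 of c5 usage at 25% savings, the m5 usage is fully covered and the c5 usage only partially, which is the kind of mixed coverage described above.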
Permission problem in Cloudformation
CloudFormation `create-stack` fails with an Athena "access denied while writing" error. The problem occurs when writing to the athena-results bucket. I'm logged in via the CLI with an SSO role that has AdministratorAccess. I can create the specified object from the command line via `aws s3 cp`, and I'm able to execute `aws athena start-query-execution` without trouble. It fails only via CloudFormation. Below is the specific error:

```
ResourceStatus: CREATE_FAILED
ResourceStatusReason: 'Resource handler returned message: "[Simba][AthenaJDBC](100071) An error has been thrown from the AWS Athena client. Access denied when writing to location: s3://cost-athena-results-123456789012/8fefd451-2a3f-4bc9-881e-84061de8db91.csv [Execution ID: 8fefd451-2a3f-4bc9-881e-84061de8db91]" (RequestToken: b0d4b7d5-998b-6ca8-22c6-657fa2433fe8, HandlerErrorCode: null)'
ResourceType: AWS::QuickSight::DataSource
```
Lambda using docker - Billed time very high for small duration
Hi, we are running a Lambda function from a Docker image (ECR). The code is in Python with a lot of dependencies (image size ~700 MB). We noticed that our code execution is very short (~650 ms), but the billed duration is very high in comparison (~4500 ms), with an init duration of ~3800 ms. So init time is about 80% of our billed time for each execution.

My previous pipeline: every 45 minutes we spin up 200 instances of the Lambda at the same time, each with a different argument (I guess 200 cold starts then?).

My new pipeline: every 45 minutes I invoke the same Lambda with no argument (1 cold start), and that Lambda invokes the same function 200 times, each with an argument (200 warm starts?). But with the new pipeline the problem is still the same and I don't see any improvement:

```
REPORT RequestId: f9d67348-2deb-410e-b874-b29e8b3569b2 Duration: 653.58 ms Billed Duration: 4353 ms Memory Size: 300 MB Max Memory Used: 284 MB Init Duration: 3698.46 ms
```

So here are my three questions:

- Is 3800 ms a normal cold start time?
- In my new pipeline, why is the init duration not improving?
- What approach do you recommend to reduce or fix the cost?

Thank you!
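One pattern that can shrink the init duration in cases like this is deferring heavy imports out of module scope (everything at module scope runs during the init phase) and caching them for warm invocations. A minimal sketch, using stdlib `json` as a stand-in for a genuinely heavy dependency; the handler name is hypothetical:

```python
_heavy = None  # cache: loaded on first invocation, reused on warm starts

def _load_heavy():
    """Defer an expensive import out of the init phase, so cold-start
    init stays small; warm invocations hit the cached module."""
    global _heavy
    if _heavy is None:
        import json  # stand-in for a slow, heavy dependency
        _heavy = json
    return _heavy

def handler(event, context=None):
    lib = _load_heavy()
    return {"statusCode": 200, "body": lib.dumps({"arg": event.get("arg")})}
```

Note this moves the cost from init to the first invocation rather than eliminating it, so it helps most when many imports are only needed on some code paths.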
AWS Data Transfer Costs with content download from website - S3
Hello everyone, I'm starting with AWS by building a personal blog with downloadable content stored in S3 (mostly PDFs and images for now). At this point I'm concerned about data transfer costs when users download this content. Is my concern legitimate? Is it better to avoid downloadable content in this case, if it will incur unpredictable costs? Thanks in advance.
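As a rough sanity check on scale, a back-of-the-envelope estimate (assuming roughly $0.09/GB for the first pricing tier of data transfer out to the internet and the 100 GB/month of free transfer out that AWS currently includes per account; check current pricing for your region):

```python
def monthly_transfer_cost(downloads_per_month, avg_file_mb,
                          price_per_gb=0.09, free_tier_gb=100):
    """Back-of-the-envelope S3 data-transfer-out estimate:
    total GB downloaded, minus the monthly free-tier allowance,
    times the first-tier internet egress rate."""
    total_gb = downloads_per_month * avg_file_mb / 1024
    billable_gb = max(0.0, total_gb - free_tier_gb)
    return billable_gb * price_per_gb

# e.g. 5,000 downloads/month of 2 MB files is under 10 GB,
# which fits inside the free-tier allowance
```

At a small blog's traffic levels the transfer cost tends to be negligible; it only becomes significant with large files or heavy traffic.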
Athena returns "FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. null"
Following the Well-Architected Labs "200: Cost and Usage Analysis" lab, I get the following error when adding partitions in the Athena Query Editor:

```
MSCK REPAIR TABLE `cost_optimization_10XXXXXXXX321`;
```

It returned the following error:

> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. null This query ran against the "costfubar" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 856e146a-8b13-4175-8cd8-692eef6d3fa5

The table was created correctly in Glue with:

```
Name                     cost_optimization_10XXXXXXXXX21
Description
Database                 costfubar
Classification           parquet
Location                 s3://cost-optimization-10XXXXXXX321//
Connection
Deprecated               No
Last updated             Wed Apr 20 16:46:28 GMT-500 2022
Input format             org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
Output format            org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
Serde serialization lib  org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
Serde parameters         serialization.format 1
Table properties:
  sizeKey                           4223322
  objectCount                       4
  UPDATED_BY_CRAWLER                costfubar
  CrawlerSchemaSerializerVersion    1.0
  recordCount                       335239
  averageRecordSize                 27
  exclusions                        ["s3://cost-optimization-107457606321/**.json","s3://cost-optimization-1XXXXXXXX21/**.csv","s3://cost-optimization-107457606321/**.sql","s3://cost-optimization-1XXXXXXXX321/**.gz","s3://cost-optimization-107457606321/**.zip","s3://cost-optimization-107457606321/**/cost_and_usage_data_status/*","s3://cost-optimization-107457606321/**.yml"]
  CrawlerSchemaDeserializerVersion  1.0
  compressionType                   none
  typeOfData                        file
```

It has the following partitions shown in Glue:

```
partition_0                 partition_1                 year  month
detailed-cur-1XXXXXXXX57    detailed-cur-1XXXXXXXX57    2018  12
detailed-cur-1XXXXXXXXX57   detailed-cur-1XXXXXXXXX57   2022  4
detailed-cur-1XXXXXXXXX57   detailed-cur-1XXXXXXXXX57   2018  11
detailed-cur-1XXXXXXXX57    detailed-cur-1XXXXXXXX57    2018  10
```
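When the S3 layout is not in Hive `key=value` form (which the crawler-generated `partition_0`/`partition_1` columns suggest), `MSCK REPAIR TABLE` cannot discover partitions and they must be added explicitly with `ALTER TABLE ... ADD PARTITION`. A sketch that builds such a DDL statement, to run in the Athena console or submit via the API; the table name is the masked one from above and the S3 layout and partition values are illustrative, not confirmed:

```python
def add_partition_sql(table, location, partitions):
    """Build an Athena ALTER TABLE ... ADD PARTITION statement
    for data whose S3 prefixes are not in Hive key=value form."""
    spec = ", ".join(f"{k} = '{v}'" for k, v in partitions.items())
    return (f"ALTER TABLE `{table}` ADD IF NOT EXISTS "
            f"PARTITION ({spec}) LOCATION '{location}'")

sql = add_partition_sql(
    "cost_optimization_10XXXXXXXX321",
    "s3://cost-optimization-10XXXXXXX321/detailed-cur/2018/12/",  # hypothetical prefix
    {"partition_0": "detailed-cur", "partition_1": "detailed-cur",
     "year": "2018", "month": "12"},
)
```

One statement per partition; `IF NOT EXISTS` makes re-running the script safe.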
What to do with source servers with replicas after recovery into EC2 [Elastic Disaster Recovery]
Hello guys! I was able to successfully recover my instances after the outage in the on-premises office. Currently all my machines are running in EC2, the source servers are shown as "Ready, lag 9d" / "Stalled", and I keep being charged for replica EBS volumes as well as AWS replication instances (t3.small). While I don't have any info on when it will be possible to perform the failback, I'd like to cut some costs and remove the replica EBS volumes. I was thinking about the "Disconnect from AWS" action, but it does not work, failing with the message:

> Some Source servers could not be processed: s-1ed6*****9: Cannot disconnect Source Server s-1ed******9 because it has a Recovery Instance.

What would be the correct action in such a case (remove the source server, remove just the volumes, or something else)? The goal is to cut costs while keeping the possibility of failback later. Thanks! Regards,
How to tag unlimited CpuCredits for Cost Explorer?
I have a T3 Unlimited EC2 instance tagged with *project:xyz*. The cost of "regular" CPU credits is correctly shown under *project:xyz* in Cost Explorer, so I know for a fact that the tag is applied correctly. However, when the instance uses the "unlimited" CPU credit portion, this cost shows up in Cost Explorer as USE2-CPUCredits:t3a without any tags. What am I missing?
RI/SavingsPlan share between AWS Organizations
Hi guys, does anyone know if it's possible to share RIs/Savings Plans between different AWS Organizations? (Someone told me it's possible, but I can't find documentation on it.) I'm pretty sure it's not possible, but I want to be certain. Thanks in advance! :)