Cost optimisation for Kinesis Firehose


A customer is using Kinesis Firehose and is not happy with their bill.

I looked at their CloudWatch stats, divided DeliveryToS3.Bytes by DeliveryToS3.Records, and realised that the average record size is 0.5 KB. As far as I know, billing rounds each record up to the nearest 5 KB, which means they are paying for almost 10 times more data than they actually send.

Am I right that the only way to optimise cost is to redesign the app so it can combine several records into one?
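A quick sanity check of that ratio, using the observed numbers (the values here are just the ones quoted above, not pulled from any API):

```python
import math

avg_record_kb = 0.5       # observed: DeliveryToS3.Bytes / DeliveryToS3.Records
billing_increment_kb = 5  # Firehose bills each record rounded up to the nearest 5 KB
billed_kb = math.ceil(avg_record_kb / billing_increment_kb) * billing_increment_kb
print(billed_kb / avg_record_kb)  # -> 10.0, i.e. paying for ~10x the data sent
```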

AWS
Asked 5 years ago · 1351 views

1 Answer
Accepted Answer

One option is to make sure the Buffer Interval is cranked up to its maximum of 900 seconds, which gives Firehose longer to buffer records together into larger objects. If the source of the data is direct puts, you definitely want to use PutRecordBatch to send several records per call. If the source is machines running the Kinesis agent, you can increase maxBufferAgeMillis to lengthen how long the client buffers data before sending it to Firehose, the trade-off being the amount of data lost if something happens to the host.
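Since billing rounds each record up to a 5 KB increment, the biggest saving for 0.5 KB payloads comes from packing several of them into one record before sending. A minimal sketch of that packing step (the function name and the newline-delimited framing are illustrative choices, not anything from the Firehose SDK):

```python
def pack_records(payloads, target_size=5 * 1024):
    """Concatenate newline-delimited payloads (bytes) into chunks that stay
    at or under the 5 KB billing increment, so each billed record carries
    close to 5 KB of real data instead of ~0.5 KB."""
    chunks, current, size = [], [], 0
    for p in payloads:
        line = p + b"\n"
        # Flush the current chunk if adding this payload would overflow it.
        if size + len(line) > target_size and current:
            chunks.append(b"".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append(b"".join(current))
    return chunks
```

Each returned chunk can then go out as one entry of a PutRecordBatch call (the API accepts up to 500 records and 4 MiB per call), e.g. via boto3's `firehose.put_record_batch(...)` — remember to check `FailedPutCount` in the response and retry rejected entries, since partial failures are possible.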

AWS
Expert
Adam_W
Answered 5 years ago
Reviewed 5 months ago
