Cost optimisation for Kinesis Firehose


A customer is using Kinesis Firehose and is not happy with their bill.

I looked at their CloudWatch stats, divided DeliveryToS3.Bytes by DeliveryToS3.Records, and realised that the average record size is 0.5 KB. As far as I know, for billing purposes each record's size is rounded up to the nearest 5 KB, which means they are paying almost 10 times more than the data they actually deliver.

Am I right that the only way to optimise cost is to redesign the app so it can combine several records into one?
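The 10x figure can be sanity-checked from the two CloudWatch metrics mentioned above. This is a small sketch of that arithmetic (the 5 KB rounding increment is taken from the question):

```python
# Firehose bills each record rounded up to the nearest 5 KB,
# so many tiny records inflate the bill.
import math

def billed_multiplier(bytes_delivered: float, records: float,
                      increment_kb: float = 5.0) -> float:
    """Ratio of billed bytes to actual bytes, from CloudWatch totals."""
    avg_kb = bytes_delivered / records / 1024
    billed_kb = math.ceil(avg_kb / increment_kb) * increment_kb
    return billed_kb / avg_kb

# 0.5 KB average records are billed as 5 KB -> paying ~10x
print(billed_multiplier(bytes_delivered=0.5 * 1024 * 1_000_000,
                        records=1_000_000))  # -> 10.0
```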

AWS
Asked 5 years ago · 1,349 views
1 Answer
Accepted Answer

One option is to make sure the Buffer Interval is set to the maximum of 900 seconds, which gives Firehose longer to buffer records together into larger objects. If the source of the data is direct puts, you definitely want to use PutRecordBatch to send several records per call. If the source is machines running the Kinesis agent, you can increase maxBufferAgeMillis to lengthen how long the client buffers data before sending it to Firehose; the trade-off is the amount of data lost if something happens to the host.
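To go further than PutRecordBatch alone, the records can also be packed client-side into blobs close to the 5 KB billing increment before sending. This is a minimal sketch, assuming newline-delimited payloads that downstream consumers can split on, and an assumed delivery stream name in the commented-out boto3 call:

```python
# Combine many small records into fewer, larger Firehose records.
from typing import Iterable, List

MAX_RECORD_BYTES = 5 * 1024   # pack close to the 5 KB billing increment
MAX_BATCH_RECORDS = 500       # PutRecordBatch limit on records per call

def pack_records(lines: Iterable[bytes],
                 limit: int = MAX_RECORD_BYTES) -> List[bytes]:
    """Combine small newline-terminated payloads into ~5 KB blobs."""
    packed, buf, size = [], [], 0
    for line in lines:
        if size + len(line) > limit and buf:
            packed.append(b"".join(buf))
            buf, size = [], 0
        buf.append(line)
        size += len(line)
    if buf:
        packed.append(b"".join(buf))
    return packed

# Sending the packed blobs (boto3 call shown for illustration only;
# "my-stream" is an assumed name):
# import boto3
# firehose = boto3.client("firehose")
# packed = pack_records(my_lines)
# for i in range(0, len(packed), MAX_BATCH_RECORDS):
#     firehose.put_record_batch(
#         DeliveryStreamName="my-stream",
#         Records=[{"Data": blob} for blob in packed[i:i + MAX_BATCH_RECORDS]],
#     )
```

With 0.5 KB records this cuts the billed record count roughly tenfold; the only application-side change is that consumers must split the aggregated blobs back into individual lines.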

AWS
EXPERT
Adam_W
Answered 5 years ago
Reviewed 5 months ago
