Cost optimisation for Kinesis Firehose


A customer is using Kinesis Firehose and is not happy with their bill.

I looked at their CloudWatch stats, divided DeliveryToS3.Bytes by DeliveryToS3.Records, and realised that the average record size is 0.5 KB. As I understand it, for billing the size of each record is rounded up to the nearest 5 KB, which means they are paying almost 10 times more than the raw data volume would suggest.
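The round-up described above can be sketched as follows (a minimal sketch; the 5 KB billing increment is taken from the question, not verified against the current price list):

```python
import math

ROUNDING_KB = 5  # assumed billing increment: each record rounded up to the nearest 5 KB

def billed_kb(record_size_kb: float) -> float:
    """Size a single record is billed at, after the 5 KB round-up."""
    return math.ceil(record_size_kb / ROUNDING_KB) * ROUNDING_KB

# deliveryToS3.Bytes / deliveryToS3.Records from CloudWatch gave ~0.5 KB
avg_record_kb = 0.5
overpay_factor = billed_kb(avg_record_kb) / avg_record_kb
print(overpay_factor)  # 10.0 -> paying roughly 10x the ingested volume
```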

Am I right that the only way to optimise cost is to redesign the app so it can combine several records into one?

AWS
Asked 5 years ago · 1349 views
1 answer
Accepted Answer

One option is to make sure the Buffer Interval is raised to its maximum of 900 seconds, which gives Firehose longer to buffer records together into larger objects. If the source of the data is direct Puts, you definitely want to use PutRecordBatch to send several records per call. If the source is machines running the Kinesis log agent, you can increase maxBufferAgeMillis so the client buffers data longer before sending it to Firehose; the trade-off is the amount of data that could be lost if something happens to the host.
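A sketch of the "combine several records into one" idea from the question: greedily concatenate small newline-delimited records into blobs that approach the 5 KB billing increment before sending them. The 5 KB limit here is the billing increment assumed in the question, not an API constraint, and the commented PutRecordBatch call at the end is hypothetical (it assumes boto3 and a delivery stream named "my-stream"):

```python
def pack_records(records: list[bytes], limit: int = 5 * 1024) -> list[bytes]:
    """Greedily join newline-delimited records into blobs of at most `limit` bytes."""
    packed: list[bytes] = []
    current: list[bytes] = []
    size = 0
    for rec in records:
        # +1 accounts for the newline separator/terminator each record carries
        if current and size + len(rec) + 1 > limit:
            packed.append(b"\n".join(current) + b"\n")
            current, size = [], 0
        current.append(rec)
        size += len(rec) + 1
    if current:
        packed.append(b"\n".join(current) + b"\n")
    return packed

# The packed blobs would then be sent with PutRecordBatch, e.g.:
# firehose = boto3.client("firehose")
# firehose.put_record_batch(
#     DeliveryStreamName="my-stream",  # hypothetical stream name
#     Records=[{"Data": blob} for blob in pack_records(records)],
# )
```

With ~0.5 KB records this packs roughly ten source records into each Firehose record, which is exactly the redesign the question asks about.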

AWS
EXPERT
Adam_W
answered 5 years ago
Verified 5 months ago
