Cost optimisation for Kinesis Firehose

A customer is using Kinesis Firehose and is not happy with their bill.

I looked at their CloudWatch stats, divided DeliveryToS3.Bytes by DeliveryToS3.Records, and realised that the average record size is 0.5 KB. As far as I know, for billing each record is rounded up to the nearest 5 KB, which means they are paying almost 10 times more than they need to.

Am I right that the only way to optimise cost is to redesign the app so it can combine several records into one?
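
For reference, a minimal sketch of that check against the CloudWatch metrics. The stream name is a placeholder, and the multiplier is approximated from the average record size (real billing rounds each record individually):

```python
import math
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

def billed_multiplier(stream_name: str, hours: int = 24) -> float:
    """Rough estimate of how much the 5 KB rounding inflates the bill,
    based on the average record size over the window."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)

    def metric_sum(name: str) -> float:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/Firehose",
            MetricName=name,
            Dimensions=[{"Name": "DeliveryStreamName", "Value": stream_name}],
            StartTime=start,
            EndTime=end,
            Period=3600,
            Statistics=["Sum"],
        )
        return sum(point["Sum"] for point in stats["Datapoints"])

    avg_record_kb = metric_sum("DeliveryToS3.Bytes") / metric_sum("DeliveryToS3.Records") / 1024
    billed_kb = math.ceil(avg_record_kb / 5) * 5  # each record billed rounded up to the nearest 5 KB
    return billed_kb / avg_record_kb

# e.g. ~10.0 for 0.5 KB average records
print(billed_multiplier("my-delivery-stream"))
```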

Asked 5 years ago · 1349 views
1 Answer
Accepted Answer

One option is to make sure the Buffer Interval is cranked up to the maximum of 900 seconds. That gives Firehose longer to buffer records together into larger objects. If the source of the data is direct puts, you definitely want to use PutRecordBatch to send several records at a time. If the source is machines running the Kinesis agent, you can increase maxBufferAgeMillis to lengthen how long the client buffers data before sending it to Firehose; the trade-off is the amount of data lost if something happens to the host.
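
Assuming the producer calls the Firehose API directly with boto3, here is a minimal sketch of the batching idea; the stream name, payloads, and the send_packed helper are hypothetical. It packs many small events into fewer, larger newline-delimited records (addressing the 5 KB rounding) and then ships them with PutRecordBatch:

```python
import json

import boto3

firehose = boto3.client("firehose")

def send_packed(stream_name: str, events: list[dict], target_size: int = 64 * 1024) -> None:
    """Pack small events into larger newline-delimited records, then send in batches."""
    records, buffer = [], b""
    for event in events:
        line = json.dumps(event).encode() + b"\n"
        if buffer and len(buffer) + len(line) > target_size:
            records.append({"Data": buffer})
            buffer = b""
        buffer += line
    if buffer:
        records.append({"Data": buffer})

    # PutRecordBatch accepts up to 500 records per call.
    for i in range(0, len(records), 500):
        response = firehose.put_record_batch(
            DeliveryStreamName=stream_name,
            Records=records[i:i + 500],
        )
        if response["FailedPutCount"]:
            # A real producer should retry the individual failed records here.
            print(f"{response['FailedPutCount']} records failed in this batch")

send_packed("my-delivery-stream", [{"event": n} for n in range(10_000)])
```

For agent-based hosts, maxBufferAgeMillis is set per flow in the agent configuration file (by default /etc/aws-kinesis/agent.json).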

Adam_W, AWS Expert · answered 5 years ago
