Issue resolved, for anyone interested: by switching to direct PUTs into the delivery stream, the files are properly aggregated.
However, I was not able to find the reason why the data stream -> delivery stream transition does not result in proper aggregation of the data, although I suspect it may have to do with the data stream shards. A sketch of the direct-PUT approach is below.
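For reference, this is roughly what the direct-PUT approach looks like with boto3. The delivery stream name is a placeholder, and credentials/region are assumed to be configured in the environment; treat it as a sketch rather than the exact code used.

```python
# Sketch of writing directly to a Firehose delivery stream (direct PUT),
# assuming a stream named "my-delivery-stream" (placeholder).
import json
import boto3

firehose = boto3.client("firehose")

def put_records(records):
    """Send a batch of dicts directly to the delivery stream.

    Records sent this way are buffered and aggregated by Firehose
    according to the stream's Buffer size / Buffer interval settings.
    """
    response = firehose.put_record_batch(
        DeliveryStreamName="my-delivery-stream",  # placeholder name
        Records=[
            {"Data": (json.dumps(r) + "\n").encode("utf-8")}
            for r in records
        ],
    )
    # put_record_batch can partially fail; check FailedPutCount and retry if needed.
    if response["FailedPutCount"] > 0:
        raise RuntimeError(f"{response['FailedPutCount']} records failed")
    return response
```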
A Kinesis Data Firehose delivery stream has two options for buffering data: Buffer Size and Buffer Interval. If the buffered data exceeds the Buffer Size, the data can be delivered to S3 before the Buffer Interval elapses. What value is Buffer Size set to? (A sketch of where these values are configured follows the quoted documentation below.)
https://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#frequency
The frequency of data delivery to Amazon S3 is determined by the Amazon S3 Buffer size and Buffer interval value that you configured for your delivery stream. Kinesis Data Firehose buffers incoming data before it delivers it to Amazon S3. You can configure the values for Amazon S3 Buffer size (1–128 MB) or Buffer interval (60–900 seconds). The condition satisfied first triggers data delivery to Amazon S3. When data delivery to the destination falls behind data writing to the delivery stream, Kinesis Data Firehose raises the buffer size dynamically. It can then catch up and ensure that all data is delivered to the destination.
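To make the buffering settings concrete, here is a minimal sketch of where they are set when creating a delivery stream with boto3. The stream name, role ARN, and bucket ARN are placeholders, and the ranges in the comments come from the documentation quoted above.

```python
# Sketch showing where Buffer size / Buffer interval live in the
# delivery stream configuration. All ARNs and names are placeholders.
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="my-delivery-stream",  # placeholder
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-role",  # placeholder
        "BucketARN": "arn:aws:s3:::my-bucket",                      # placeholder
        "BufferingHints": {
            "SizeInMBs": 128,          # 1-128 MB; delivery triggers when this is reached
            "IntervalInSeconds": 900,  # 60-900 s; delivery triggers when this elapses
        },
    },
)
```

Whichever of the two conditions is satisfied first triggers delivery of the buffered batch to S3.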
The size of the file never exceeds the max buffer size, which is in any case set to the maximum (128 MB).