Unanswered Questions tagged with Amazon Kinesis Data Streams



KCL 1.x Java not consuming records

I am using KCL 1.14.8 for Java. My Kinesis stream has only one shard, and I put a record into the stream every 30 seconds. The RecordProcessor processes each record, which may take a few seconds (<10 seconds). This is the worker configuration:

```
configuration.withInitialPositionInStream(InitialPositionInStream.LATEST)
    .withRegionName("eu-west-1")
    .withMaxRecords(10)
    .withIdleTimeBetweenReadsInMillis(1000L)
    .withCallProcessRecordsEvenForEmptyRecordList(false)
    .withRetryGetRecordsInSeconds(1)
    .withFailoverTimeMillis(60_000);
```

When the application starts, it does not consume any records. The logs show this:

```
Aug 21, 2022 @ 21:24:14.943 Skipping shard sync due to the reason - Hash range is complete.
Aug 21, 2022 @ 21:24:14.785 Number of pending leases to clean before the scan : 0
Aug 21, 2022 @ 21:24:00.144 Sleeping ...
Aug 21, 2022 @ 21:24:00.144 Current stream shard assignments: shardId-000000000000
Aug 21, 2022 @ 21:23:14.911 Elected leaders: xxxxx-187a-4ceb-9714-e39ab1c7bb71
Aug 21, 2022 @ 21:23:14.785 Number of pending leases to clean before the scan : 0
Aug 21, 2022 @ 21:22:59.128 Sleeping ...
Aug 21, 2022 @ 21:22:59.128 Current stream shard assignments: shardId-000000000000
Aug 21, 2022 @ 21:22:14.937 Skipping shard sync due to the reason - Hash range is complete.
Aug 21, 2022 @ 21:22:14.785 Number of pending leases to clean before the scan : 0
Aug 21, 2022 @ 21:21:58.114 Current stream shard assignments: shardId-000000000000
Aug 21, 2022 @ 21:21:58.114 Sleeping ...
Aug 21, 2022 @ 21:21:14.785 Number of pending leases to clean before the scan : 0
```

Please help.
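For context, a minimal sketch of how a configuration like this is typically wired into a KCL 1.x worker. The application name, stream name, and the trivial record processor body are placeholders, not taken from the question:

```
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;

import java.util.UUID;

public class SampleConsumer {

    public static void main(String[] args) {
        // Application and stream names are placeholders.
        KinesisClientLibConfiguration configuration = new KinesisClientLibConfiguration(
                "sample-kcl-app", "sample-stream",
                new DefaultAWSCredentialsProviderChain(),
                "worker-" + UUID.randomUUID())
            .withInitialPositionInStream(InitialPositionInStream.LATEST)
            .withRegionName("eu-west-1")
            .withMaxRecords(10)
            .withIdleTimeBetweenReadsInMillis(1000L)
            .withCallProcessRecordsEvenForEmptyRecordList(false)
            .withRetryGetRecordsInSeconds(1)
            .withFailoverTimeMillis(60_000);

        // Trivial processor: log batch sizes and checkpoint after each batch.
        IRecordProcessorFactory factory = () -> new IRecordProcessor() {
            @Override
            public void initialize(InitializationInput input) {
                System.out.println("Initializing for shard " + input.getShardId());
            }

            @Override
            public void processRecords(ProcessRecordsInput input) {
                System.out.println("Got " + input.getRecords().size() + " record(s)");
                try {
                    input.getCheckpointer().checkpoint();
                } catch (Exception e) {
                    // A failed checkpoint is logged here and retried on the next batch.
                    e.printStackTrace();
                }
            }

            @Override
            public void shutdown(ShutdownInput input) {
                System.out.println("Shutting down: " + input.getShutdownReason());
            }
        };

        Worker worker = new Worker.Builder()
                .recordProcessorFactory(factory)
                .config(configuration)
                .build();
        worker.run(); // blocks; leases the shard and starts polling for records
    }
}
```

Running the class starts the worker loop, creates the DynamoDB lease table (named after the application name) if it does not exist, and begins polling the shard.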
0 answers · 0 votes · 17 views · asked a month ago

When exactly does KCL drop records? Can't recreate (but need to handle)

From the [AWS docs](https://docs.aws.amazon.com/streams/latest/dev/troubleshooting-consumers.html): "The most common cause of skipped records is an unhandled exception thrown from processRecords. The Kinesis Client Library (KCL) relies on your processRecords code to handle any exceptions that arise from processing the data records. Any exception thrown from processRecords is absorbed by the KCL. To avoid infinite retries on a recurring failure, the KCL does not resend the batch of records processed at the time of the exception. The KCL then calls processRecords for the next batch of data records without restarting the record processor. This effectively results in consumer applications observing skipped records. To prevent skipped records, handle all exceptions within processRecords appropriately."

Our KCL client is written in Node, following [this example](https://github.com/awslabs/amazon-kinesis-client-nodejs/blob/master/samples/basic_sample/consumer/sample_kcl_app.js). No matter how we try to crash it, the KCL process simply exits, and the next time we start it, it resumes from the SAME checkpoint. In other words, the statement above doesn't seem to hold: it does not skip records. For our application we can't afford dropped records and need to be 100% sure this can't happen. Can someone with more Kinesis/AWS experience explain how exactly the skipping described above can happen on the consumer side (we're already handling the producer side)?
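The passage quoted above describes the one scenario in which the KCL deliberately skips records: an exception that escapes processRecords is swallowed, and the batch is not re-sent. The question's client is Node, but the defensive pattern the docs ask for looks the same in any language. Here is a sketch in KCL 1.x Java terms; the handleRecord and parkFailedRecord helpers are hypothetical placeholders, not part of any KCL API:

```
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;
import com.amazonaws.services.kinesis.model.Record;

public class DefensiveRecordProcessor implements IRecordProcessor {

    @Override
    public void initialize(InitializationInput input) { }

    @Override
    public void processRecords(ProcessRecordsInput input) {
        for (Record record : input.getRecords()) {
            try {
                handleRecord(record);        // hypothetical per-record business logic
            } catch (Exception e) {
                // Never let this escape: an exception thrown out of processRecords is
                // absorbed by the KCL and the batch is NOT retried, which is exactly
                // how records end up skipped.
                parkFailedRecord(record, e); // hypothetical fallback, e.g. a dead-letter queue
            }
        }
        try {
            // Checkpoint only after every record in the batch has been handled or parked.
            input.getCheckpointer().checkpoint();
        } catch (Exception e) {
            // If the checkpoint fails, the batch may be re-delivered after a lease change,
            // so record handling should be idempotent.
            e.printStackTrace();
        }
    }

    @Override
    public void shutdown(ShutdownInput input) { }

    // Placeholder implementations for illustration only.
    private void handleRecord(Record record) { /* business logic */ }
    private void parkFailedRecord(Record record, Exception cause) { /* e.g. write to SQS or S3 */ }
}
```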
0 answers · 0 votes · 14 views · asked 5 months ago