How do I troubleshoot a data delivery failure between Kinesis Data Firehose and Amazon S3?


I'm trying to send data from Amazon Kinesis Data Firehose to my Amazon Simple Storage Service (Amazon S3) bucket, but it fails. How do I troubleshoot this?

Short description

To confirm that Kinesis Data Firehose is trying to put data into your Amazon S3 bucket, check the DeliveryToS3.Success metric (a query sketch follows the list below). If the DeliveryToS3.Success metric value is consistently zero, then check the following areas:

  • Availability of resources
  • Incoming data records
  • Kinesis Data Firehose logs
  • AWS Identity and Access Management (IAM) role permissions
  • Kinesis Data Firehose server-side encryption
  • AWS KMS encrypted Amazon S3 bucket
  • AWS Lambda invocation
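
If you prefer to query CloudWatch programmatically, the following is a minimal sketch using boto3; the delivery stream name your-delivery-stream is a placeholder. It retrieves DeliveryToS3.Success for the last hour, and the same call works for the other metrics mentioned in this article if you change MetricName.

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

# Query the Firehose DeliveryToS3.Success metric for the last hour.
# "your-delivery-stream" is a placeholder for your delivery stream name.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Firehose",
    MetricName="DeliveryToS3.Success",
    Dimensions=[{"Name": "DeliveryStreamName", "Value": "your-delivery-stream"}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])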

Resolution

Availability of resources

Confirm the availability of the S3 bucket that's specified in your Kinesis Data Firehose delivery stream. If you're using the data transformation feature, then be sure that the specified Lambda function exists.
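
As a quick check, a sketch along these lines (boto3, with placeholder resource names) confirms that the destination bucket and, if you use data transformation, the Lambda function still exist:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Placeholders: replace with the bucket and function configured on the stream.
bucket_name = "your-destination-bucket"
function_name = "your-transform-function"

try:
    s3.head_bucket(Bucket=bucket_name)
    print(f"Bucket {bucket_name} is reachable")
except ClientError as error:
    print(f"Bucket check failed: {error}")

try:
    lambda_client.get_function(FunctionName=function_name)
    print(f"Lambda function {function_name} exists")
except ClientError as error:
    print(f"Lambda check failed: {error}")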

Incoming data records

Check the IncomingRecords and IncomingBytes metrics to verify that there's data coming into Kinesis Data Firehose. A metric value of zero for IncomingRecords or IncomingBytes indicates that there are no records reaching Kinesis Data Firehose. If the delivery stream uses an Amazon Kinesis data stream as its source, then check the IncomingBytes and IncomingRecords metrics for the stream source. Also, verify whether DataReadFromKinesisStream.Bytes and DataReadFromKinesisStream.Records metrics are emitted from the delivery stream. For more information about these metrics, see Data delivery CloudWatch metrics.

If there is no data reaching Kinesis Data Firehose, then the issue might reside upstream. For a direct PUT operation, confirm that the PutRecord and PutRecordBatch API operations used to put records into Kinesis Data Firehose are called correctly.
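
For reference, this is a minimal direct-PUT sketch with boto3; the stream name and payload are placeholders. A successful PutRecord call returns a RecordId, and for PutRecordBatch you should also check FailedPutCount:

import json
import boto3

firehose = boto3.client("firehose")

# Placeholder delivery stream name.
stream_name = "your-delivery-stream"

# Single record; Firehose expects bytes in the Data field.
record = {"Data": json.dumps({"event": "test"}).encode("utf-8") + b"\n"}

response = firehose.put_record(DeliveryStreamName=stream_name, Record=record)
print("RecordId:", response["RecordId"])

# Batch put: always inspect FailedPutCount, because the call can
# partially fail without raising an exception.
batch = firehose.put_record_batch(DeliveryStreamName=stream_name, Records=[record] * 3)
print("FailedPutCount:", batch["FailedPutCount"])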

Kinesis Data Firehose logs

Check that you have error logging enabled for Kinesis Data Firehose. If logging isn't enabled, turn it on, and then check the error logs for delivery failures. The error logs provide specific reasons for a delivery failure and let you troubleshoot the problematic areas. The format of the log group name is /aws/kinesisfirehose/delivery-stream-name.
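
To pull recent error events from that log group programmatically, a sketch like this works (boto3; the log group name below assumes a delivery stream called your-delivery-stream):

import time
import boto3

logs = boto3.client("logs")

# The log group name follows the /aws/kinesisfirehose/delivery-stream-name format.
log_group = "/aws/kinesisfirehose/your-delivery-stream"

# Look at the last 24 hours of events.
start_time = int((time.time() - 24 * 3600) * 1000)

paginator = logs.get_paginator("filter_log_events")
for page in paginator.paginate(logGroupName=log_group, startTime=start_time):
    for event in page.get("events", []):
        print(event["message"])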

Then, be sure that the IAM role attached to the delivery stream has the following permissions so that Kinesis Data Firehose can write to that log group:

"Action": [
               "logs:PutLogEvents"
           ],
           "Resource": [
               "arn:aws:logs:region:account-id:log-group:log-group-name:log-stream:log-stream-name"
           ]

IAM role permissions

Be sure that the IAM role that is specified in your Kinesis Data Firehose delivery stream has the correct permissions. Depending on which features are enabled on the stream, different permissions are required. For more information, see Grant Kinesis Data Firehose access to an Amazon S3 destination.
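
If you aren't sure which IAM role the stream uses, a sketch like the following (boto3) reads the delivery stream configuration; the exact destination key depends on your destination type, so treat ExtendedS3DestinationDescription as an assumption for an S3 destination:

import boto3

firehose = boto3.client("firehose")

# Placeholder delivery stream name.
description = firehose.describe_delivery_stream(DeliveryStreamName="your-delivery-stream")

for destination in description["DeliveryStreamDescription"]["Destinations"]:
    # For S3 destinations, the role is usually under ExtendedS3DestinationDescription.
    s3_destination = destination.get("ExtendedS3DestinationDescription", {})
    print("Role ARN:", s3_destination.get("RoleARN"))
    print("Bucket ARN:", s3_destination.get("BucketARN"))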

For Amazon S3 access, update your IAM policy like this:

"Action": [
                "s3:AbortMultipartUpload",
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:PutObject"
            ],     
            "Resource": [       
                "arn:aws:s3:::bucket-name",
                "arn:aws:s3:::bucket-name/*"                         
            ]

To allow data transformation with your Lambda function, update your policy like this:

"Action": [
               "lambda:InvokeFunction",
               "lambda:GetFunctionConfiguration"
           ],
           "Resource": [
               "arn:aws:lambda:region:account-id:function:function-name:function-version"
           ]

For a Kinesis data stream that is listed as a source, update your policy like this:

"Action": [
                "kinesis:DescribeStream",
                "kinesis:GetShardIterator",
                "kinesis:GetRecords",
                "kinesis:ListShards"
            ],
            "Resource": "arn:aws:kinesis:region:account-id:stream/stream-name"

Kinesis Data Firehose server-side encryption

Kinesis Data Firehose supports Amazon S3 server-side encryption with AWS Key Management Service (AWS KMS) for encrypting data that's delivered to Amazon S3. To allow server-side encryption, add the following to your IAM role policy:

"Action": [
               "kms:Decrypt",
               "kms:GenerateDataKey"
           ],
           "Resource": [
               "arn:aws:kms:region:account-id:key/key-id"          
           ],
           "Condition": {
               "StringEquals": {
                   "kms:ViaService": "s3.region.amazonaws.com"
               },
               "StringLike": {
                   "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix*"
               }
           }

AWS KMS encrypted S3 bucket

Confirm that the IAM role for the Kinesis Data Firehose delivery stream has the correct permissions. To deliver data to an Amazon S3 bucket that's AWS KMS encrypted, the Kinesis Data Firehose IAM role must be allowed in the key policy. For more information, see How do I resolve the "Access Denied" error in Kinesis Data Firehose when writing to an Amazon S3 bucket?
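
To see which KMS key the bucket uses for default encryption, and then review that key's policy for the Firehose role, a sketch such as this can help (boto3; the bucket name is a placeholder, and the caller needs kms:GetKeyPolicy permission):

import json
import boto3

s3 = boto3.client("s3")
kms = boto3.client("kms")

# Placeholder bucket name. get_bucket_encryption raises a ClientError
# if the bucket has no default encryption configuration.
encryption = s3.get_bucket_encryption(Bucket="your-destination-bucket")
rule = encryption["ServerSideEncryptionConfiguration"]["Rules"][0]
default_encryption = rule["ApplyServerSideEncryptionByDefault"]
print("SSE algorithm:", default_encryption["SSEAlgorithm"])

key_id = default_encryption.get("KMSMasterKeyID")
if key_id:
    # Check that the key policy allows the delivery stream's IAM role.
    key_policy = kms.get_key_policy(KeyId=key_id, PolicyName="default")
    print(json.dumps(json.loads(key_policy["Policy"]), indent=2))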

Lambda invocation

Confirm the availability of the Lambda function that is specified in your delivery stream. If the Lambda function is missing or deleted, then create a new Lambda function to invoke.

Check the Kinesis Data Firehose ExecuteProcessing.Success metric and the Lambda Errors metric to be sure that Kinesis Data Firehose tried to invoke your Lambda function. If the invocation is unsuccessful, then check the /aws/lambda/function-name CloudWatch log group to identify why the invocation failed. If a Lambda transformation is configured and the function is invoked, then check the duration of the invocation. If the duration exceeds the function's timeout setting, then the invocation fails. For more information about invocation metrics, see Using invocation metrics.
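
One way to compare the function's configured timeout against its observed duration is a sketch like the following (boto3; the function name is a placeholder, and note that the configured Timeout is in seconds while the Duration metric is in milliseconds):

import datetime
import boto3

lambda_client = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")

function_name = "your-transform-function"  # placeholder

# Configured timeout is reported in seconds.
timeout_seconds = lambda_client.get_function_configuration(FunctionName=function_name)["Timeout"]

# Maximum observed duration over the last hour, in milliseconds.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Duration",
    Dimensions=[{"Name": "FunctionName", "Value": function_name}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=3600,
    Statistics=["Maximum"],
)

max_duration_ms = max((p["Maximum"] for p in stats["Datapoints"]), default=0)
print(f"Timeout: {timeout_seconds}s, max duration: {max_duration_ms / 1000:.1f}s")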

If data transformation fails, then the unsuccessfully processed records are delivered to your S3 bucket in the processing-failed folder. The records written to Amazon S3 also contain the error message. For more information about data transformation failures, see Data transformation failure handling.
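
To list the failed-record objects for inspection, a sketch like this can be used (boto3; the bucket name is a placeholder, and the processing-failed/ prefix assumes you didn't configure a custom error output prefix):

import boto3

s3 = boto3.client("s3")

bucket_name = "your-destination-bucket"  # placeholder

# Failed transformation records land under the processing-failed prefix
# (adjust if you configured a custom error output prefix).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket_name, Prefix="processing-failed/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])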

Note: Your S3 bucket policy might also contain an explicit deny, for example a condition on aws:SourceIp or aws:SourceVpce. To verify whether delivery is blocked by an explicit deny in your bucket policy, look for the S3.AccessDenied error code in the Kinesis Data Firehose CloudWatch Logs.
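
To review the bucket policy for Deny statements, a short sketch like this works (boto3; the bucket name is a placeholder, and the call fails with a NoSuchBucketPolicy error if no policy is attached):

import json
import boto3

s3 = boto3.client("s3")

policy = s3.get_bucket_policy(Bucket="your-destination-bucket")  # placeholder bucket
document = json.loads(policy["Policy"])

# "Statement" can be a single object or a list; normalize to a list.
statements = document["Statement"] if isinstance(document["Statement"], list) else [document["Statement"]]

# Print only the Deny statements so conditions like aws:SourceIp or
# aws:SourceVpce are easy to spot.
for statement in statements:
    if statement.get("Effect") == "Deny":
        print(json.dumps(statement, indent=2))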

