Resolution
Follow these troubleshooting steps for the error that you received.
"Last operation failed" error
If MSK Connect can't create the connector and the connector moves to the Failed state, then you receive the following error message:
"There is an issue with the connector Code: UnknownError.UnknownMessage: The last operation failed. Retry the operation."
To find the cause for the failure, review the log events for MSK Connect.
"Required field is missing" error
If you use a carriage return (\r) character at the end of a configuration, then you receive the following error message:
"Invalid parameter connectorConfiguration: The following required field is missing or has invalid value: tasks.max"
To resolve this issue, take the following actions:
- Manually enter the configuration information in the connector configuration dialog box. Don't copy and paste the configuration information from another source.
- For Windows operating systems (OSs), use a text editor to remove the carriage return and line feed (CRLF) characters and end-of-line (EOL) characters. To remove the carriage returns, copy and paste the configuration into a text editor. In your text editor, choose View, choose Show Symbol, and then choose Show All Characters. Replace all the CRLF characters, \r\n, with a line feed (LF), \n.
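The same cleanup can also be scripted instead of done by hand in an editor. The following is a minimal Python sketch; the sample configuration string is hypothetical:

```python
# Strip carriage return (\r) characters from a connector configuration string
# so that only line feed (\n) line endings remain.
def strip_carriage_returns(text: str) -> str:
    return text.replace("\r\n", "\n").replace("\r", "\n")

# Example: a configuration pasted from a Windows editor (hypothetical values)
config = "tasks.max=1\r\ntopics=my-topic\r\n"
cleaned = strip_carriage_returns(config)
print(cleaned)
```

After the cleanup, paste the cleaned text into the connector configuration dialog box.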
"Invalid parameter" error
If you used the MSK Connect service-linked role to create a connector, then you receive the following error message:
"Invalid parameter serviceExecutionRoleArn: A service linked role ARN cannot be provided as service execution role ARN."
MSK Connect returned this error because you can't use the AWSServiceRoleForKafkaConnect service-linked role as the service execution role. To resolve this error, create a separate IAM service execution role with the required permissions. Then, select that role to configure the connector.
"Failed to find any class that implements Connector" error
You receive the following error message:
"org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches..."
To resolve this issue, take the following actions:
- Remove CRLF characters that are in the connector configuration.
- If the connector plugin requires multiple files, then include the files in your zipped file. The JAR files in the zipped file must also have the expected file structure for the plugin. It's a best practice to turn on logs for MSK Connect and review the logs to confirm that the file structure is correct.
- Make sure that all of the required dependencies and classes are included in the custom plugins.
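To confirm that the zipped plugin actually contains the expected JAR files, you can list its contents before you upload it. The following Python sketch builds a small in-memory ZIP with hypothetical file names to illustrate the check; point it at your own plugin archive in practice:

```python
import io
import zipfile

def jar_paths(zip_bytes: bytes) -> list:
    """Return the paths of all JAR files inside a plugin ZIP archive."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        return [name for name in archive.namelist() if name.endswith(".jar")]

# Build a small in-memory ZIP that mimics a plugin archive (hypothetical names).
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as archive:
    archive.writestr("my-connector/lib/my-connector-1.0.jar", b"")
    archive.writestr("my-connector/lib/dependency-2.3.jar", b"")

found = jar_paths(buffer.getvalue())
print(found)  # Every JAR that the plugin requires should appear in this list
```

If a required JAR or the expected directory layout is missing from the list, rebuild the archive before you create the custom plugin.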
"TimeOutException" error
If the connector can't reach your MSK cluster, then you receive the following error message:
"org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call. Call: fetchMetadata"
To resolve this error, take the following actions:
- Check that the bootstrap servers and port numbers that you specified in the connector properties are valid.
- Make sure that the security group for your cluster allows inbound traffic from the security group that's associated with MSK Connect.
- Make sure that the security group for your cluster allows outbound traffic to the security group that's associated with MSK Connect. If your MSK cluster and MSK connector use different security groups, then make sure that the outbound rules from MSK Connect allow traffic to the MSK cluster and that the outbound rules from the MSK cluster allow traffic to MSK Connect.
For more information, see Security group rules.
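To help confirm network connectivity, you can test whether each bootstrap server is reachable over TCP from a host in the same subnets that MSK Connect uses. The following is a minimal Python sketch; the bootstrap string shown is a hypothetical placeholder for your cluster's value:

```python
import socket

def parse_bootstrap_servers(bootstrap: str) -> list:
    """Split a comma-separated bootstrap string into (host, port) pairs."""
    pairs = []
    for entry in bootstrap.split(","):
        host, _, port = entry.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical bootstrap string; replace with your cluster's bootstrap servers.
servers = parse_bootstrap_servers("b-1.example.kafka.us-east-1.amazonaws.com:9098")
print(servers)
```

If the TCP connection times out, revisit the security group rules and subnet routing between MSK Connect and the cluster.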
"SaslAuthenticationException" error
If your MSK cluster runs on a kafka.t3.small broker type with AWS Identity and Access Management (IAM) access control, then review your connection quota. The kafka.t3.small instance type accepts 4 TCP connections for each broker per second.
If you exceed your connection quota, then your creation test fails and you receive the following error message:
"org.apache.kafka.common.errors.SaslAuthenticationException: Too many connects"
For more information about MSK clusters and IAM access control, see How Amazon MSK works with IAM.
To resolve the "SaslAuthenticationException" error, take one of the following actions:
- In your MSK Connect worker configuration, update the values for reconnect.backoff.ms and reconnect.backoff.max.ms to 1000 or higher.
- Upgrade to a larger broker instance type, such as kafka.m5.large or higher. For more information, see Amazon MSK broker types and Best practices for Standard brokers.
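For the first option, the two backoff properties go in the MSK Connect worker configuration. The following fragment is illustrative; any value of 1000 or higher for these properties meets the guidance:

```
reconnect.backoff.ms=1000
reconnect.backoff.max.ms=10000
```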
"Unable to connect to S3" error
If the connector can't connect to Amazon Simple Storage Service (Amazon S3), then you receive the following error message:
"org.apache.kafka.connect.errors.ConnectException: com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to s3.us-east-1.amazonaws.com:443 failed: connect timed out"
To resolve this issue, you must create an Amazon Virtual Private Cloud (Amazon VPC) endpoint from the cluster's VPC to Amazon S3.
Complete the following steps:
- Open the Amazon VPC console.
- In the navigation pane, choose Endpoints.
- Choose Create endpoint.
- For Type, select AWS services.
- Under Services, choose the Service Name filter, and then select com.amazonaws.region.s3.
Note: Replace region with your AWS Region.
- Choose the Type filter, and then choose Gateway.
- For VPC, select the cluster's VPC.
- Under Route tables, select the route table that's associated with the cluster's subnets.
- Choose Create endpoint.
"Unable to execute HTTP request" error for Firehose
If the connector can't connect to Amazon Data Firehose, then you receive the following error message:
"org.apache.kafka.connect.errors.ConnectException: com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to firehose.us-east-2.amazonaws.com:443 failed: connect timed out"
To resolve this issue, follow the steps in the preceding section to create a VPC endpoint from the cluster's VPC to Amazon Data Firehose. Use the Service name filter com.amazonaws.region.kinesis-firehose.
"Access denied" error
If the IAM user for MSK Connect doesn't have the required permissions to create a connector, then you receive the following error message:
"Connection to node -1 (b1.<cluster>.<region>.amazonaws.com) failed authentication due to: Access Denied"
When you create a connector with MSK Connect, you must specify an IAM role to use with it. Your service execution role must have the following trust policy so that MSK Connect can assume the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "kafkaconnect.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
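If you want to verify an existing role, you can retrieve its trust policy document (for example, from the IAM console) and check it programmatically. The following Python sketch checks a policy document for the required statement; the sample policy is the one shown above:

```python
import json

def trusts_msk_connect(trust_policy: dict) -> bool:
    """Check whether a trust policy lets kafkaconnect.amazonaws.com assume the role."""
    for statement in trust_policy.get("Statement", []):
        if statement.get("Effect") != "Allow":
            continue
        service = statement.get("Principal", {}).get("Service", [])
        if isinstance(service, str):
            service = [service]
        actions = statement.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if "kafkaconnect.amazonaws.com" in service and "sts:AssumeRole" in actions:
            return True
    return False

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "kafkaconnect.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
""")
print(trusts_msk_connect(policy))
```

If the check returns False for your role, update the role's trust relationship with the trust policy shown above.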
If the MSK cluster for your connector uses IAM authentication, then add the following permissions policy to the connector's service execution role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:Connect",
        "kafka-cluster:DescribeCluster"
      ],
      "Resource": [
        "cluster-arn"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:ReadData",
        "kafka-cluster:DescribeTopic"
      ],
      "Resource": [
        "ARN of the topic that you want a sink connector to read from"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:WriteData",
        "kafka-cluster:DescribeTopic"
      ],
      "Resource": [
        "ARN of the topic that you want a source connector to write to"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:CreateTopic",
        "kafka-cluster:WriteData",
        "kafka-cluster:ReadData",
        "kafka-cluster:DescribeTopic"
      ],
      "Resource": [
        "arn:aws:kafka:region:account-id:topic/cluster-name/cluster-uuid/__amazon_msk_connect_*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:AlterGroup",
        "kafka-cluster:DescribeGroup"
      ],
      "Resource": [
        "arn:aws:kafka:region:account-id:group/cluster-name/cluster-uuid/__amazon_msk_connect_*",
        "arn:aws:kafka:region:account-id:group/cluster-name/cluster-uuid/connect-*"
      ]
    }
  ]
}
For more information, see Authorization policy resources.
If your organization uses service control policies (SCPs), then make sure that your SCPs don't restrict access to the IAM users and IAM roles. If your SCPs are configured with an allow list for AWS services, then make sure that kafka-cluster actions are included in the allowed actions. If they aren't, then MSK Connect requests are denied access to your MSK cluster even if the service execution role includes the required permissions. Review your SCP permission sets and make sure that kafka-cluster actions are explicitly allowed.
For more information, see Service Control Policies (SCPs).
"Failed to find AWS IAM Credentials" error
If the IAM role that you used to create the connector doesn't have the required permissions, then you receive the following error message:
"ERROR Connection to node -3 (b-1.cluster.region.amazonaws.com/INTERNAL_IP) failed authentication due to: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: Failed to find AWS IAM Credentials [Caused by aws_msk_iam_auth_shadow.com.amazonaws.SdkClientException: Unable to load AWS credentials from any ...........Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))"
To troubleshoot the preceding error message, review the access policies and trust relationship of the IAM role for the connector. For more information, see Understand service execution role.
This error might also occur when the required connector configuration properties are missing or incorrect. For example, Amazon S3 connectors must include the correct AWS Region. If the required properties aren't configured, then the connector might not authenticate with your AWS credentials and fail authentication.
To troubleshoot this issue, review the connector configuration and make sure that all required properties are correct. For more information about the required properties, see the connector plugin's third-party documentation.
Related information
How do I use the Kafka-Kinesis-Connector to connect to my Amazon MSK cluster?
Understand MSK Connect
Troubleshoot your Amazon MSK cluster
Troubleshoot issues in Amazon MSK Connect