Hello,
To resolve the JSON parsing error when using Fluent Bit to send logs from EKS Fargate to Elastic Cloud, follow these steps:
1. Verify Log Data Format: Ensure your logs are correctly formatted JSON.
2. Escape Special Characters: Ensure special characters in your logs are properly escaped.
3. Correct Fluent Bit Configuration: Update your Fluent Bit configuration to match the JSON structure Elasticsearch expects.
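For step 1, a quick sanity check (a minimal sketch, not part of Fluent Bit itself; the helper name is just for illustration) is to confirm each log line parses as a JSON object before shipping it:

```python
import json

def is_valid_json(line: str) -> bool:
    """Return True if the line parses as a JSON object."""
    try:
        return isinstance(json.loads(line), dict)
    except json.JSONDecodeError:
        return False

print(is_valid_json('{"message": "ok", "level": "info"}'))  # True
print(is_valid_json('{"message": "broken"'))                # False
```

Running your raw log lines through a check like this quickly tells you whether the problem is in the log data itself or in the Fluent Bit pipeline.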
Here is a corrected example:
[OUTPUT]
    Name es
    Match *
    Index leoh-logging
    Host your-elasticsearch-host
    Port 9243
    HTTP_User elastic-beats
    HTTP_Passwd your-password
    tls On
    tls.verify Off
    Suppress_Type_Name On
    Replace_Dots On
    Logstash_Format On
    Logstash_Prefix leoh-logging
    Logstash_DateFormat %Y.%m.%d
Example Correct JSON Structure
Ensure the structure of each log line being sent is correct:
{ "index": { "_index": "leoh-logging", "_type": "_doc" } }
{ "message": "your log message", "timestamp": "2024-05-24T10:00:00Z", "level": "info" }
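The two lines above are the Elasticsearch bulk (NDJSON) format: an action line followed by a document line, each a separate JSON object terminated by a newline. A minimal sketch of building such a payload (the `bulk_payload` helper is hypothetical; the index name and fields are the example values from above):

```python
import json

def bulk_payload(index: str, docs: list) -> str:
    """Build an Elasticsearch bulk body: one action line plus one
    document line per document, newline-delimited (NDJSON)."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    # Bulk request bodies must end with a trailing newline.
    return "\n".join(lines) + "\n"

payload = bulk_payload("leoh-logging", [
    {"message": "your log message",
     "timestamp": "2024-05-24T10:00:00Z",
     "level": "info"},
])
print(payload)
```

If any of these lines is malformed (an extra quote, an unescaped character), Elasticsearch rejects the request with a json_parse_exception like the one in the question.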
Compatibility
Ensure you are using compatible versions of Fluent Bit and Elasticsearch. Check the Fluent Bit documentation for version compatibility. Update Fluent Bit to the latest version if necessary.
Thank you very much for your prompt and informative response! Your explanation regarding the JSON parsing error in Fluent Bit and the steps to resolve it were clear and valuable. I especially appreciate the example configuration and the breakdown of the correct JSON structure.
Upon reviewing my configuration, I identified the mistake: an extra double quote mark (") within the index section, which explains the parsing error. I've corrected the configuration, and my logs are now flowing smoothly to Elastic Cloud.
The error you are encountering indicates that Fluent Bit is sending log data to Elasticsearch with a JSON parsing issue. Specifically, the error message points out an unexpected character 'l' at a certain position in the JSON string, which suggests a formatting issue in the data.
Verify Log Data Format. Ensure that the log data being sent to Elasticsearch is properly formatted JSON. Fluent Bit might be sending a malformed JSON string.
Escape Special Characters. If your logs contain special characters or are not properly escaped, it could cause parsing issues. Ensure that your log data is correctly escaped.
Check Fluent Bit Filters. If you use any filters to modify the log data before sending it to Elasticsearch, verify that these filters are correctly processing and formatting the data.
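On the escaping point: serializing with a JSON library escapes quotes and newlines for you; hand-concatenated JSON strings are where unescaped characters sneak in and trigger exactly this class of parse error. A small illustration:

```python
import json

raw_message = 'user said "hello"\nand then logged out'

# Hand-building the JSON string leaves the inner quotes and the
# newline unescaped, producing invalid JSON:
broken = '{"message": "' + raw_message + '"}'

# Serializing with json.dumps escapes them correctly:
correct = json.dumps({"message": raw_message})

try:
    json.loads(broken)
except json.JSONDecodeError:
    print("hand-built string is not valid JSON")

print(json.loads(correct)["message"] == raw_message)  # True
```

The same principle applies inside Fluent Bit: let the output plugin serialize structured records rather than emitting pre-built JSON strings.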
Example Correct JSON Structure
Note that in the bulk API the action line and the document are two separate JSON objects on separate lines (NDJSON), not one nested object:
{ "create": { "_index": "leoh-logging", "_type": "_doc", "_id": "1" } }
{ "message": "your log message", "timestamp": "2024-05-24T10:00:00Z", "level": "info" }
Thank you, @kranthi putti.
I'm also getting the following exception on one of the indices in fluent-bit logs:
{"error":{"root_cause":[{"type":"json_parse_exception", "reason":"Unexpected character ('p' (code 112)): was expecting comma to separate Object entries\n at [Source: REDACTED (`StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION` disabled); line: 1, column: 24]"}],
"type":"json_parse_exception", "reason":"Unexpected character ('p' (code 112)): was expecting comma to separate Object entries\n at [Source: REDACTED (`StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION` disabled); line: 1, column: 24]"},
"status":400}
[2025/02/18 19:46:51] [warn] [engine] failed to flush chunk '1-1739906399.925775362.flb', retry in 1838 seconds: task_id=117,
input=tail.0 > output=es.0 (out_id=0)
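The json_parse_exception in that response reports the exact position of the first character that breaks the JSON grammar ("line: 1, column: 24"), which is the best clue for locating the offending field in the payload. You can reproduce the same class of error locally; the malformed line below is a hypothetical illustration with a missing comma between object entries, not your actual payload:

```python
import json

# Hypothetical malformed line: missing comma between object entries,
# analogous to "was expecting comma to separate Object entries".
bad_line = '{"index": {"_index": "x"} "pipeline": "p"}'

try:
    json.loads(bad_line)
except json.JSONDecodeError as e:
    # The parser reports where the grammar first breaks.
    print(f"parse error at column {e.colno}: {e.msg}")
```

Checking the raw record at the reported column (enable Trace_Error On, as in your output config, to see the rejected payload) usually reveals the field whose value was not properly quoted or escaped.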
Would it also require updating the following "Output" configuration or "Filter" settings?
filters: |
  [FILTER]
      Name kubernetes
      Match kube.*
      Merge_Log On
      Keep_Log Off
      K8S-Logging.Parser On
      K8S-Logging.Exclude On
      Labels On
      Annotations On
  [FILTER]
      Name kubernetes
      Match alphatec.*
      Merge_Log On
      Keep_Log Off
      K8S-Logging.Parser On
      K8S-Logging.Exclude On
      Labels On
      Annotations On
## https://docs.fluentbit.io/manual/pipeline/outputs
outputs: |
  [OUTPUT]
      Name es
      Match kube.*
      Host <host>
      Logstash_Format On
      Logstash_Prefix_Key kubernetes.namespace_name
      Logstash_DateFormat %Y.%m
      Retry_Limit False
      Port 443
      tls On
      tls.verify Off
      Suppress_Type_Name On
      AWS_Auth On
      AWS_Region eu-west-2
      AWS_Role_ARN <AWS_Role_ARN>
      AWS_Service_Name es
      Replace_Dots Off
      Trace_Error On
  [OUTPUT]
      Name es
      Match host.*
      Host <host>
      Logstash_Format On
      Logstash_Prefix node
      Logstash_DateFormat %Y.%m
      Retry_Limit False
      Port 443
      tls On
      tls.verify Off
      Suppress_Type_Name On
      AWS_Auth On
      AWS_Region eu-west-2
      AWS_Role_ARN <AWS_Role_ARN>
      AWS_Service_Name es
      Replace_Dots Off
      Trace_Error On
Please accept the answer if it was useful.