Elasticsearch 7 Reindexing Error: TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark


Can any of you point me in the right direction to fix this? Do I need to increase anything?

This is related to Elasticsearch 7; the error appeared when a reindexing command was executed in Magento 2.4.2. Thank you.


{
  "error": {
    "root_cause": [
      {
        "type": "cluster_block_exception",
        "reason": "index [develop_product_1_v239] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];index [develop_product_1_v240] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];"
      }
    ],
    "type": "cluster_block_exception",
    "reason": "index [develop_product_1_v239] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];index [develop_product_1_v240] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];"
  },
  "status": 429
}


asked 6 months ago · 379 views
2 Answers

Hello,

This issue occurs when OpenSearch/Elasticsearch sees that the disk is running low on space and puts the affected indices into read-only mode.

By default, the disk flood stage watermark is set to 95% of disk capacity, though the exact threshold depends on how the cluster is configured. For example, if a node has 100 GiB of disk, the flood-stage block is applied once usage exceeds 95 GiB, i.e. once less than 5 GiB remains free.
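As a quick sanity check, the percentage watermarks can be translated into byte thresholds with a little shell arithmetic (a sketch assuming the default 85% low / 90% high / 95% flood-stage watermarks and a 100 GiB disk):

```shell
# Assumed defaults: 85% low, 90% high, 95% flood stage watermarks.
DISK_GIB=100
LOW=$(( DISK_GIB * 85 / 100 ))    # new shard allocation slows at 85 GiB used
HIGH=$(( DISK_GIB * 90 / 100 ))   # shards are relocated away at 90 GiB used
FLOOD=$(( DISK_GIB * 95 / 100 ))  # indices go read-only at 95 GiB used
echo "flood stage trips at ${FLOOD} GiB used, i.e. $(( DISK_GIB - FLOOD )) GiB free"
# → flood stage trips at 95 GiB used, i.e. 5 GiB free
```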

To fix this issue you can either delete unnecessary indices or increase the disk space. You can check the remaining disk space per node by running the below command in Kibana > Dev Tools:

GET _cat/allocation?v

And to check the disk flood stage watermark, you can use the following API call:

GET _cluster/settings

By default the watermarks are percentages (85% low, 90% high, 95% flood stage); in the example below they have been overridden with absolute free-space values instead:

"persistent": {
    "cluster": {
        "routing": {
            "allocation": {
                "disk": {
                    "watermark": {
                        "low": "25.0gb",
                        "high": "22.0gb",
                        "flood_stage": "10.0gb"
                    } ....

For documentation about the flood stage watermark, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/disk-allocator.html (this link targets the 6.2 docs; the same settings exist in the 7.x reference).
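One detail the error message itself points at: the indices carry a `read_only_allow_delete` block. On Elasticsearch versions before 7.4, that block is not removed automatically after you free disk space; you have to clear it yourself (7.4 and later remove it once usage drops back below the high watermark). A minimal sketch, assuming a cluster reachable at `localhost:9200` (set `ES_HOST` for a different endpoint; neither value comes from the original post):

```shell
ES="${ES_HOST:-http://localhost:9200}"   # assumed endpoint, adjust for your cluster

# Only attempt the call if a cluster is actually reachable.
if curl -s --max-time 2 "$ES" >/dev/null; then
  # Setting the block to null reverts it to its default (no block) on all indices.
  curl -s -X PUT "$ES/_all/_settings" \
       -H 'Content-Type: application/json' \
       -d '{"index.blocks.read_only_allow_delete": null}'
else
  echo "no Elasticsearch cluster reachable at $ES; skipping"
fi
```

Run this only after freeing space; otherwise the flood-stage monitor will simply re-apply the block.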


Further, since you mentioned that you hit this issue during reindexing, please check the index size and confirm that you have enough space left to create a new index of the same size. You can check the size of an index using the following command:

GET _cat/indices

Note: if you don't have access to Kibana, you can send the same API calls using curl as well.
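For instance, the three calls above can be sent with curl like this (a sketch assuming a cluster reachable at `localhost:9200`; set `ES_HOST`, and add authentication flags if your cluster needs them):

```shell
ES="${ES_HOST:-http://localhost:9200}"   # assumed endpoint, adjust for your cluster

if curl -s --max-time 2 "$ES" >/dev/null; then
  curl -s "$ES/_cat/allocation?v"    # remaining disk space per node
  curl -s "$ES/_cluster/settings"    # configured watermarks, if any are set
  curl -s "$ES/_cat/indices?v"       # index sizes (see the store.size column)
else
  echo "no Elasticsearch cluster reachable at $ES; skipping"
fi
```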

[+] Troubleshooting Amazon OpenSearch Service - Cluster in read-only state - https://docs.aws.amazon.com/opensearch-service/latest/developerguide/handling-errors.html#troubleshooting-read-only-cluster

AWS
SUPPORT ENGINEER
answered 6 months ago

Thank you for the detailed answer. We scanned the used disk and found it running hot at 97% (you were correct about the space needed; our instance was set to 100 GB). The application is Magento 2.4.2, and the logs were exponentially larger than usual, about 26 GB, plus some older folders that had been renamed to let the app generate new logs. Steps we took to fix it:

1. Renamed log > log1.
2. Deleted all the older renamed log folders to clear disk space.
3. About 30 GB was freed up.
4. Rechecked cron jobs; all OK.
5. We are now checking folder/file permissions to avoid this problem in the future.
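The cleanup steps above can be sketched as a short shell sequence. This sketch runs against a throwaway directory created with `mktemp` so it is safe to try; in practice `LOG_DIR` would be the real Magento log path (e.g. `<magento-root>/var/log`, a hypothetical location, verify it for your install):

```shell
# Simulated Magento log directory; substitute the real path in practice.
LOG_DIR="$(mktemp -d)/log"
mkdir -p "$LOG_DIR"
printf 'old entries\n' > "$LOG_DIR/system.log"   # stand-in for a large log file

du -sh "$LOG_DIR"                 # 1. measure how much space the logs use
mv "$LOG_DIR" "${LOG_DIR}.old"    # 2. rename so the app regenerates fresh logs
mkdir -p "$LOG_DIR"               # 3. recreate the directory for new logs
rm -rf "${LOG_DIR}.old"           # 4. delete the renamed copy to free the space
df -h "$LOG_DIR"                  # 5. confirm the freed disk space
```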


I am a little upset that:

  • EC2 does not have a monitoring tool or a setting to trigger an alarm for Elasticsearch issues when it hits the space limit, or at least a log with details.

Your help is deeply appreciated, and I hope these steps will help anybody who runs into the same problem when Elasticsearch is enabled in Magento/Adobe Commerce (Community), or when the Catalog Search index is stuck in processing mode.

Based on what I found online, the same error can occur with other web applications (not only Magento), but the solution follows the same roadmap.

Thank you!

answered 6 months ago
