
Questions tagged with Amazon Elasticsearch Service



AWS Elasticsearch returning different results in Kibana and in a browser HTTP request for the exact same query

I am running this query in Kibana:

```
GET nearby/_search
{
  "from": 20,
  "size": 20,
  "query": {
    "bool": {
      "must": { "match": { "X": "B" } },
      "filter": {
        "geo_distance": {
          "distance": "3.0km",
          "PO": { "lat": 26.8466937, "lon": 80.94616599999999 }
        }
      }
    }
  }
}
```

All of the hits in the response have X=B, and there are 20 results (I have removed some fields and some docs to keep the post short):

```
{
  "took" : 228,
  "timed_out" : false,
  "_shards" : { "total" : 5, "successful" : 5, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 71, "relation" : "eq" },
    "max_score" : 2.5032558,
    "hits" : [
      { "_index" : "nearby", "_type" : "_doc", "_id" : "n3YeKvJqvpu1okE7QDBp", "_score" : 2.2831507, "_source" : { "PO" : "tuc89gfn0", "X" : "B" } },
      { "_index" : "nearby", "_type" : "_doc", "_id" : "5FPJ2eyr0YoQ9F0xPYzW", "_score" : 2.2831507, "_source" : { "PO" : "tuc89gfn0", "X" : "B" } },
      { "_index" : "nearby", "_type" : "_doc", "_id" : "QJflnqGKF1dpOjEaY8vy", "_score" : 2.2831507, "_source" : { "PO" : "tuc89gvr8", "X" : "B" } }
    ]
  }
}
```

This is the browser request; the query remains the same:

```
https://search-wul8888888.ap-south-1.es.amazonaws.com/nearby/_search?q="{"from":20,"size":20,"query":{"bool":{"must":{"match":{"X":"B"}},"filter":{"geo_distance":{"distance":"3km","PO":{"lat":26.8466937,"lon":80.94616599999999}}}}}}"
```

This is the response. As you can see, most of the docs have X=I, i.e. the must-match isn't honoured. Second, I am sending size=20 but I get only 10 results, which is the default (extra fields and docs removed below to keep the post short):

```
{"took":149,"timed_out":false,
 "_shards":{"total":5,"successful":5,"skipped":0,"failed":0},
 "hits":{"total":{"value":802,"relation":"eq"},"max_score":8.597985,
  "hits":[
   {"_index":"nearby","_type":"_doc","_id":"iJ71MNq4a4TCkcT4vWSP","_score":8.597985,"_source":{"PO":"tuc8unwp7","X":"I","BI":"tRhKrWiDxFSt57tIH7g5"}},
   {"_index":"nearby","_type":"_doc","_id":"PmngNe8WcC8aSraDMluz","_score":7.3973455,"_source":{"PO":"tuc8uhc5z","X":"I","BI":"m3S6yEicvu1HFI1UOTIb"}},
   {"_index":"nearby","_type":"_doc","_id":"lDqjflPZGYsymPGU8iHD","_score":7.1520696,"_source":{"PO":"tuc89wpg5","X":"B"}},
   {"_index":"nearby","_type":"_doc","_id":"QIf2KsO4FpCjT3m7kH4I","_score":6.402881,"_source":{"PO":"tuc8uhc5z","X":"I","BI":"m3S6yEicvu1HFI1UOTIb"}}
  ]}}
```

Please help; I have tried everything but am not able to understand this. My hunch is that I am being returned a stale/old result every time, but I don't know how to fix that. Even in Chrome incognito mode the browser result is the same as above. Even if I change the radius in the browser, the result stays the same, which suggests the browser queries are getting a stale result.
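Worth noting when reproducing this: the `_search?q=` parameter expects Lucene query-string syntax, not a JSON body, so the JSON above is not parsed as a DSL query in the browser request (which would account for both the ignored `must` clause and the default size of 10). A minimal sketch of sending the same query as a request body instead, using the endpoint from the question:

```
curl -X GET "https://search-wul8888888.ap-south-1.es.amazonaws.com/nearby/_search" \
  -H 'Content-Type: application/json' \
  -d '{
    "from": 20, "size": 20,
    "query": {
      "bool": {
        "must":   { "match": { "X": "B" } },
        "filter": {
          "geo_distance": {
            "distance": "3km",
            "PO": { "lat": 26.8466937, "lon": 80.94616599999999 }
          }
        }
      }
    }
  }'
```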
0
answers
0
votes
2
views
PalmGini
asked 2 days ago

OpenSearch cluster green but stuck in processing for 2 weeks

My OpenSearch cluster has been stuck in "processing" since the last Auto-Tune event, even though the cluster status is green across the board. The cluster is usable without issue (reading, writing, Kibana), but this prevents me from performing an upgrade or applying other config changes.

Monitoring shows:

* Cluster status green
* Instance count is 9, as expected: 3 master and 6 data nodes
* JVM memory pressure looks good: the expected "sawtooth" curve never exceeds 75% and goes as low as 45%
* The most recent update was a service software update to R20211203-P2. It seems to have taken 5 days, but looks like it completed well (judging by the instance count graph)
* The cluster is usable without issue, Kibana is reachable and responsive, I am constantly writing to the cluster without error, nothing seems off

Rough timeline:

* 19.12.2021 - update to R20211203-P2, instance count is doubled to 18 (expected blue/green deployment)
* 24.12.2021 - instance count drops back to the expected 9, cluster status green
* 26.12.2021 - notification "Auto-Tune is applying new settings to your domain", instance count doesn't rise, still at 9
* now - cluster still stuck at "processing" even though everything is green

What I tried:

* `GET /_cluster/allocation/explain` responds with "unable to find any unassigned shards to explain", which makes sense
* `GET /_cat/indices?v` shows everything green, as expected
* I also tried modifying the disk size to try and "kick" the cluster into doing a blue/green deployment and hopefully getting unstuck, but that didn't seem to happen

The only possible clue is in the CloudWatch error logs, where a repeating "master not discovered yet" message has appeared since the last Auto-Tune event started on 26.12.2021. Pretty-printed below:

```
[2022-01-11T06:36:23,761][WARN ][o.o.c.c.ClusterFormationFailureHelper] [52cb02d8573b17516f7756d5fe05484d]
master not discovered yet: have discovered [
  {***}{***}{***}{__IP__}{__IP__}{dir}{dp_version=20210501, distributed_snapshot_deletion_enabled=false, cold_enabled=false, adv_sec_enabled=false, __AMAZON_INTERNAL__, shard_indexing_pressure_enabled=true, __AMAZON_INTERNAL__},
  {***}{***}{***}{__IP__}{__IP__}{imr}{dp_version=20210501, distributed_snapshot_deletion_enabled=false, cold_enabled=false, adv_sec_enabled=false, __AMAZON_INTERNAL__, shard_indexing_pressure_enabled=true, __AMAZON_INTERNAL__},
  (5 more near-identical {imr} entries)
];
discovery will continue using [__IP__, __IP__, ..., [__IP__]:9301, [__IP__]:9302, [__IP__]:9303, [__IP__]:9304, [__IP__]:9305, __IP__, ...]
from hosts providers and [] from last-known cluster state; node term 36, last-accepted version 0 in term 0
```

I masked the node IDs and replaced them with `***`. The log message lists 7 of them; I can only recognize 3 IDs as my master nodes. I cannot recognize the rest of the IDs (they are not my data nodes), and I am not sure I understand what's going on here. Any help would be appreciated.
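For anyone checking the same thing: whether the service still considers the domain to be mid-change is visible from the domain status flags. A minimal sketch with the AWS CLI (domain name is a placeholder; on older CLI versions the equivalent command is `aws es describe-elasticsearch-domain`):

```
# Processing stays true while a config change / blue-green deployment is
# in flight; UpgradeProcessing covers service software updates.
aws opensearch describe-domain --domain-name my-domain \
  --query 'DomainStatus.{Processing: Processing, UpgradeProcessing: UpgradeProcessing}'
```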
0
answers
0
votes
4
views
ggmabob
asked 5 days ago

Multi Region strategy for API Gateway

If disaster recovery is not a requirement, what would be the best strategy for setting up API Gateway to serve global customers? Here are three options that I can think of; I am not able to land on one.

**Option 1**: Single edge-optimized API Gateway serving traffic

* Pros: saves cost and avoids the complexity of data replication (the backend is OpenSearch)
* Cons: Latency? Not sure how much an edge-optimized API will help with latency, as the customer will be hitting the API at the nearest edge (SSL handshake, etc.) with traffic flowing via the backbone network. (Question 1)

**Option 2**: Multiple regional API Gateways with Route 53 latency-based routing

* Pros: customers hit the closest API.
* Cons: Data replication, cost. Also, since there is no CloudFront here, traffic flows via the internet to the closest region's API. Say we have the API deployed in two regions, US and Singapore: would users in Europe see latency worse than in Option 1, where requests go to the nearest edge location and reach the API via the backbone?

**Option 3**: Multiple edge-optimized API Gateways with Route 53 latency-based routing

* Pros: customers hit the closest API. Not sure how latency-based routing works on an edge-optimized endpoint, or whether it would even help, since both endpoints are edge-optimized. Not sure how smart Route 53 is. (Question 2)
* Cons: Data replication, cost, and the uncertainty of latency-based routing.

And finally, one more that I think could work, but for which I haven't found many implementations:

**Option 4**: Multiple regional API Gateways with a single custom CloudFront distribution on top, with CloudFront Functions doing the routing.

* Pros: customers hit the closest edge location and are routed to the nearest API; this routing would be based on the country-of-origin header from CloudFront.
* Cons: the same data replication and cost, plus routing based on a predefined list of countries.

I need to spend time and run tests with multiple solutions, but I wanted to seek community advice first. To summarize: if cost, complexity, and disaster recovery are kept out of the discussion, what would be the best architecture for API Gateway to avoid latency issues?
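For Options 2 and 3, latency-based routing is configured as one record set per region sharing the same name, each tagged with its region via `SetIdentifier` and `Region`. A hedged sketch with the AWS CLI (hosted zone ID, domain, and the execute-api target are placeholders):

```
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": "us-east-1",
        "Region": "us-east-1",
        "ResourceRecords": [{ "Value": "d-abc123.execute-api.us-east-1.amazonaws.com" }]
      }
    }]
  }'
```

A matching record would be created for each additional region (e.g. ap-southeast-1), and Route 53 answers with the record whose region has the lowest measured latency from the resolver.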
2
answers
0
votes
18
views
Balu
asked 17 days ago

ElasticSearch container failed to start - ECS deployment using docker compose up - /usr/share/elasticsearch/data/nodes/ AccessDeniedException

Hi, I'm trying to start an Elasticsearch container via docker compose (aws-cli and switching to an ECS context), but it fails to start with an AccessDeniedException: it can't write to the /usr/share/elasticsearch/data/nodes/ directory. I have researched the issue on Google and it's because of the permissions on that folder. From my understanding, I need to fix the permissions on the host directory mapped to /usr/share/elasticsearch/data/nodes/ (I think) by running sudo chown -R 1000:1000 [directory]. However, my container shuts down, so how am I supposed to update the permissions on that directory? This is my docker-compose file; any help appreciated:

```
version: '3.8'
services:
  elasticsearch01:
    user: $USER
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.1
    #image: 645694603269.dkr.ecr.eu-west-2.amazonaws.com/smpn_ecr:latest
    container_name: es02
    restart: unless-stopped
    environment:
      cluster.name: docker-es-cluster
      discovery.type: single-node
      bootstrap.memory_lock: "true"
      # ES_JAVA_OPTS: "-Xms2g -Xmx2g"
      xpack.security.enabled: "false"
      xpack.monitoring.enabled: "false"
      xpack.watcher.enabled: "false"
      node.name: es01
      network.host: 0.0.0.0
      logger.level: DEBUG
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es_data01:/usr/share/elasticsearch/data:rw
    ports:
      - "9200:9200"
      - "9300:9300"
    healthcheck:
      test: "curl -f http://localhost:9200 || exit 1"
    networks:
      - smpn_network
volumes:
  es_data01:
    driver: local
networks:
  smpn_network:
    driver: bridge
```
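For the local Docker context, one way to fix ownership on a named volume without the Elasticsearch container running is a throwaway container that mounts the same volume; a sketch assuming the volume name from the compose file (on ECS the volume is provisioned differently, so this applies only to local testing):

```
# Run a one-off container as root to chown the volume; note that
# docker compose prefixes the volume name with the project name,
# e.g. myproject_es_data01.
docker run --rm -u 0 \
  -v es_data01:/usr/share/elasticsearch/data \
  alpine chown -R 1000:1000 /usr/share/elasticsearch/data
```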
0
answers
0
votes
12
views
AWS-User-2395837
asked 18 days ago

Elasticsearch (OpenSearch) - FORBIDDEN/10/cluster create-index blocked (api) error when creating ISM policy

I am trying to create an index state management policy on my currently running domain with the request below:

```
PUT _opendistro/_ism/policies/delete_after_2d
{
  "policy": {
    "policy_id": "delete_after_3d",
    "description": "Maintains the indices open for 2 days, then closes them and deletes the indices after 3 days",
    "default_state": "ReadWrite",
    "schema_version": 1,
    "states": [
      {
        "name": "ReadWrite",
        "actions": [ { "read_write": {} } ],
        "transitions": [ { "state_name": "ReadOnly", "conditions": { "min_index_age": "2d" } } ]
      },
      {
        "name": "ReadOnly",
        "actions": [ { "read_only": {} } ],
        "transitions": [ { "state_name": "Delete", "conditions": { "min_index_age": "1d" } } ]
      },
      {
        "name": "Delete",
        "actions": [ { "delete": {} } ]
      }
    ]
  }
}
```

I constantly get this error:

```
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_create_block_exception",
        "reason" : "blocked by: [FORBIDDEN/10/cluster create-index blocked (api)];"
      }
    ],
    "type" : "index_create_block_exception",
    "reason" : "blocked by: [FORBIDDEN/10/cluster create-index blocked (api)];"
  },
  "status" : 403
}
```

On [this AWS page](https://aws.amazon.com/premiumsupport/knowledge-center/opensearch-403-clusterblockexception/), the troubleshooting says the cause is either not enough disk space or high JVM memory pressure. However, I checked both and don't believe my search domain has an issue with either of the root causes stated in that documentation:

* My domain has total free space of 37.00 GiB
* JVM pressure never exceeds 17%

Any suggestion on what is going on?
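One way to double-check the per-node numbers behind those two root causes is the cat APIs; a minimal sketch against the domain endpoint (hostname assumed). The create-index block typically trips on a single node's free storage, so a healthy domain-wide total can still hide one full node:

```
# Free disk per data node.
curl "https://my-domain.es.amazonaws.com/_cat/allocation?v&h=node,disk.used,disk.avail,disk.percent"

# Heap usage per node.
curl "https://my-domain.es.amazonaws.com/_cat/nodes?v&h=name,heap.percent,ram.percent"
```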
0
answers
0
votes
4
views
AWS-User-3610377
asked 24 days ago

Red ElasticSearch 5.5 cluster due to NODE_LEFT with running snapshot

There is a cluster that, due to losing a couple of nodes, has a single shard in UNASSIGNED state.

**TL;DR**: The shard cannot be rerouted due to AWS limitations, the index cannot be deleted due to a running snapshot (for over 18 hours now), the cluster has scaled to double its regular size for no obvious reason, and the snapshot cannot be cancelled because it is one of the automated ones. What could be done to get the cluster back to green health? Data loss of that single index should not be a problem.

## Detailed explanation

### Symptom

Cluster in red health status due to a single unassigned shard. A call to `/_cluster/allocation/explain` returns the following:

```
{
  "index": "REDACTED",
  "shard": 1,
  "primary": true,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "NODE_LEFT",
    "at": "2021-12-01T21:27:04.905Z",
    "details": "node_left[REDACTED]",
    "last_allocation_status": "no_valid_shard_copy"
  },
  "can_allocate": "no_valid_shard_copy",
  "allocate_explanation": "cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster",
  ...
```

### Cluster rerouting

Regular troubleshooting on the matter indicates that one could accept the data loss by reallocating the shard as empty, using something like:

```
$ curl -XPOST '/_cluster/reroute' -d '{"commands": [{ "allocate_empty_primary": { "index": "REDACTED", "shard": 1, "node": "REDACTED", "accept_data_loss": true }}] }'
{"Message":"Your request: '/_cluster/reroute' is not allowed."}
```

But that endpoint is not available on AWS.

### Closing/deleting the index

Other suggestions include closing the index for operations, but that is not supported by AWS:

```
$ curl -X POST '/REDACTED/_close'
{"Message":"Your request: '/REDACTED/_close' is not allowed by Amazon Elasticsearch Service."}
```

Another solution is to delete the index. But, as there is a running snapshot, it cannot be deleted:

```
$ curl -X DELETE '/REDACTED'
{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[REDACTED][indices:admin/delete]"}],"type":"illegal_argument_exception","reason":"Cannot delete indices that are being snapshotted: [[REDACTED]]. Try again after snapshot finishes or cancel the currently running snapshot."},"status":400}
```

### Cancelling the snapshot

As the previous error message states, you can try cancelling the snapshot:

```
$ curl -X DELETE '/_snapshot/cs-automated-enc/REDACTED'
{"Message":"Your request: '/_snapshot/cs-automated-enc/REDACTED' is not allowed."}
```

Apparently that is because the snapshot is one of the automated ones. Had it been a manual snapshot, I would have been able to cancel it. The problem is that the snapshot has been running for over 10 hours and is still initializing:

```
$ curl '/_snapshot/cs-automated-enc/REDACTED/_status'
{
  "snapshots": [
    {
      "snapshot": "2021-12-12t20-38-REDACTED",
      "repository": "cs-automated-enc",
      "uuid": "REDACTED",
      "state": "INIT",
      "shards_stats": { "initializing": 0, "started": 0, "finalizing": 0, "done": 0, "failed": 0, "total": 0 },
      "stats": { "number_of_files": 0, "processed_files": 0, "total_size_in_bytes": 0, "processed_size_in_bytes": 0, "start_time_in_millis": 0, "time_in_millis": 0 },
      "indices": {}
    }
  ]
}
```

As can be seen from the timestamp, it has been that way for almost 20 hours now (for reference, previous snapshots show as having run in a couple of minutes).
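For completeness, everything the automated repository currently holds, with state and timing in one call, can be listed via the cat API (present in 5.x), following the same relative-path curl style as above:

```
$ curl '/_cat/snapshots/cs-automated-enc?v'
```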
1
answers
0
votes
6
views
Matias
asked a month ago

_nodes/http info missing lots of info

The Elasticsearch client for Go uses info derived from the following ES API call: GET /_nodes/http (<https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-info.html>). Normally this call returns a lot of information about the node, including its IP address, but when making the call against the AWS service this information is missing. This causes the Go client to fail to _discover_ any nodes. Here are two examples which highlight the problem.

A regular node I have running locally:

```
curl "http://localhost:9200/_nodes/http?pretty"
{
  "_nodes" : { "total" : 1, "successful" : 1, "failed" : 0 },
  "cluster_name" : "docker-cluster",
  "nodes" : {
    "hnsnurlLTG2-XbBQFyYg9w" : {
      "name" : "16c3f8d15864",
      "transport_address" : "172.25.0.4:9300",
      "host" : "172.25.0.4",
      "ip" : "172.25.0.4",
      "version" : "7.8.0",
      "build_flavor" : "default",
      "build_type" : "docker",
      "build_hash" : "757314695644ea9a1dc2fecd26d1a43856725e65",
      "roles" : [ "data", "ingest", "master", "ml", "remote_cluster_client", "transform" ],
      "attributes" : {
        "ml.machine_memory" : "25225474048",
        "xpack.installed" : "true",
        "transform.node" : "true",
        "ml.max_open_jobs" : "20"
      },
      "http" : {
        "bound_address" : [ "0.0.0.0:9200" ],
        "publish_address" : "172.25.0.4:9200",
        "max_content_length_in_bytes" : 104857600
      }
    }
  }
}
```

The AWS equivalent:

```
curl -XGET https://vpc-address....amazonaws.com/_nodes/http?pretty
{
  "_nodes" : { "total" : 3, "successful" : 3, "failed" : 0 },
  "cluster_name" : "XXX",
  "nodes" : {
    "8oxT_9maSMif5iXoQptpKA" : {
      "name" : "d026e0070020663bdd93028c2c801292",
      "version" : "7.7.0",
      "build_flavor" : "oss",
      "build_type" : "tar",
      "build_hash" : "unknown",
      "roles" : [ "ingest", "master", "data", "remote_cluster_client" ]
    },
    "8joBdfy6Q-aULryA3aP00w" : {
      "name" : "3992f18f3b80c28fbb6d5b5624c4bdfe",
      "version" : "7.7.0",
      "build_flavor" : "oss",
      "build_type" : "tar",
      "build_hash" : "unknown",
      "roles" : [ "ingest", "master", "data", "remote_cluster_client" ]
    },
    "2v572nbWQK6zrXmwGUiRmQ" : {
      "name" : "41d42ffdc2fdaf9c7a425e29c6ba92d6",
      "version" : "7.7.0",
      "build_flavor" : "oss",
      "build_type" : "tar",
      "build_hash" : "unknown",
      "roles" : [ "ingest", "master", "data", "remote_cluster_client" ]
    }
  }
}
```

Obviously there are going to be some differences local vs. AWS with things like plugins, but returning the actual HTTP information seems pretty fundamental, no?
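The field client-side node discovery presumably keys on is `http.publish_address`; a quick sketch that isolates it with `filter_path`, making the difference between the two clusters obvious (the VPC hostname is a placeholder):

```
# On the local node this returns the publish_address for each node;
# against the managed service the response comes back empty.
curl "https://vpc-address....amazonaws.com/_nodes/http?filter_path=nodes.*.http.publish_address&pretty"
```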
1
answers
0
votes
0
views
biffta
asked a year ago

How to automatically apply ILM to a new index created by Firehose

We have a Kinesis Firehose sending logs to AWS Elasticsearch, and it creates a new index each day. I created an index lifecycle policy to delete an index after 2 days, and that works fine if I apply it manually via Kibana Index Management. What I need is to apply the policy to an index when it is created by the Firehose. The Elasticsearch docs show this can be done with the template below, but AWS Kibana does not support the index.lifecycle.name setting; I get an unknown-setting error. Does anyone know how I can automatically apply an ILM policy to an index? I must be able to wildcard it, since the Firehose tacks on a date. Thanks in advance.

This is the template I am trying to PUT:

```
PUT _template/testindex_template
{
  "index_patterns": ["test-index*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index.lifecycle.name": "delete_index_policy"
  }
}
```

This is the error:

```
{
  "error": {
    "root_cause": [
      {
        "type": "remote_transport_exception",
        "reason": "[b81eac543fece630b97c35b7b84acae3][x.x.x.x:9300][indices:admin/template/put]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "unknown setting [index.lifecycle.name] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
  },
  "status": 400
}
```

P.S. I also created a rollover lifecycle policy, but it has the same issue: the delete policy doesn't get applied to the index that is rolled over.
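Worth noting that the managed service ships Open Distro's ISM rather than X-Pack ILM, which is why `index.lifecycle.name` is rejected as an unknown setting. A hedged sketch of the ISM-side equivalent, attaching an `ism_template` with an index pattern to the policy so newly created Firehose indices pick it up automatically (the policy ID is an assumption, and `ism_template` availability depends on the ISM version on the domain):

```
PUT _opendistro/_ism/policies/delete_index_policy
{
  "policy": {
    "description": "Delete daily Firehose indices after 2 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "2d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ]
      }
    ],
    "ism_template": { "index_patterns": ["test-index*"], "priority": 100 }
  }
}
```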
3
answers
0
votes
0
views
KevC
asked 2 years ago

Elasticsearch ignoring, rejecting requests

Hello. Approximately 10 days ago my Elasticsearch cluster stopped accepting new documents to some indices. I am writing log data to Elasticsearch from an ECS task using <https://github.com/internetitem/logback-elasticsearch-appender>, which formats log messages for submission to the Elasticsearch /_bulk REST API. I am using the AWS Java SDK version 1.11.632 to access the default credential provider chain and sign requests. The task is assigned a specific task role in the task definition, and this role has been added explicitly to my ES domain's access policy. The ES cluster is managed in a "shared" AWS account while the ECS task runs in a separate "dev" account. The domain name is "logs". The access policy looks like this (account numbers redacted):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::DEV_ACCT_NUM:role/ecsTaskExecutionRole",
          "arn:aws:iam::DEV_ACCT_NUM:role/MyTaskRole"
          //Plus some more roles for other tasks
        ]
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:ca-central-1:SHARED_ACCT_NUM:domain/logs/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::SHARED_ACCT_NUM:role/Cognito_Kibana_IdentitiesAuth_Role"
      },
      "Action": "*",
      "Resource": "arn:aws:es:ca-central-1:SHARED_ACCT_NUM:*"
    }
  ]
}
```

Logs are written to an index named {environment}-{date}, e.g. "dev-2020-04-23". I expect that if the so-named index doesn't exist, it will be created when I write to it. This has worked so far.

**THE PROBLEM:** When I write to ES, I sometimes receive a 403 response indicating that the request signature is invalid. Nothing has changed about the way I retrieve credentials or sign requests, nor has my access policy changed. More often the response is 200 OK, but the logs do not appear in Kibana and the index does not appear in the AWS management console. I can still write to existing indices, which leads me to believe the problem is in creating new indices and not in writing documents. I was not able to find any documented limit on the number of indices. This behaviour was first observed when the cluster was on ES version 7.1; upgrading to 7.4 did not help. Thanks in advance for any assistance.
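Since the suspicion is index creation rather than document writes, one quick check is whether the cluster is up against its shard capacity as the daily indices accumulate; a minimal sketch against the domain endpoint (hostname is a placeholder):

```
# Total shards currently allocated across the cluster.
curl "https://<domain-endpoint>/_cluster/health?filter_path=active_shards,active_primary_shards&pretty"

# Per-index health, doc counts, and shard counts for the daily indices.
curl "https://<domain-endpoint>/_cat/indices/dev-*?v"
```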
1
answers
0
votes
1
views
jamestwiggmarble
asked 2 years ago

ElasticSearch scaling and multi-tenancy considerations

Hello. A customer is growing their usage of Elastic, adding more customers, types of documents, and indexes. Elastic is a main part of their multi-tenant SaaS offering.

**TL;DR** - They would like to transform their Elastic setup to be multi-tenant in order to create better isolation and accommodate the expected growth, and they have a couple of questions.

They are considering a few points going forward:

**1.** Currently each Elastic index contains the documents of all of their tenants. However, moving forward to new indexes, they are considering creating a separate index per tenant. According to plan, they may have millions of documents per tenant. For example, an index for all emails, which is currently called 'emailMessage', would be split into many 'emailMessage-TENANTID' indices. What does this mean from a system-resources point of view if they expect to have a few thousand tenants? Since each index requires at least one separate shard, and each shard consumes system resources, they are not sure whether they will hit some system limit at some point that prevents them from adding additional tenants.

**Customer wording regarding two additional questions:**

**2.** How well does ES handle modifications? In one of our planned indices we expect to store from hundreds of thousands up to a few million documents per tenant. We also expect that about 50% of them will be changed on a daily basis. Since ES basically deletes a document on each update, we are worried that the ES indices and data will get fragmented, which will cause a performance decrease. The question is whether you have experience with ES indices that take this many updates and how they perform, and/or whether there are any actions we should take when creating the indices, as well as maintaining them, to keep ES performing well as time goes by.

**3.** API: we are currently using ES's RestHighLevelClient and we have trouble keeping up with its latest versions. As it turns out, the developers of this client do not value backward compatibility much, and some upgrades require a few days of development and testing to keep the existing code working as it did before the upgrade. The question is whether you have any recommendations for an alternative (Java) client with which you have good experience.

*The customer uses the Amazon Elasticsearch service.
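On point 1, the per-tenant shard overhead can at least be bounded with an index template so that every tenant index gets a single primary shard; a hedged sketch (the template name and replica count are assumptions, and the pattern follows the naming scheme from the question):

```
PUT _template/tenant_email_template
{
  "index_patterns": ["emailMessage-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
```

With a few thousand tenants this still means a few thousand shards, so the cluster-level shard budget (heap per shard, shards per node) remains the number to watch.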
1
answers
0
votes
1
views
Oren Reuveni
asked 3 years ago