I have mounted an EFS file system as a Kubernetes volume in the OpenSearch instance running in my EKS cluster, but the reported size of the EFS stays at 30 KB.
Initially I thought OpenSearch wasn't using EFS to store the traces, because its size was not increasing. But when I removed the volume and ran it again, I got the error below in the Jaeger UI, which uses OpenSearch as its backend storage. After I restored the EFS, the error stopped appearing. So if OpenSearch is using EFS to store the traces, why is its size not increasing?
I can also see the size of my application's spans in OpenSearch Dashboards, which likewise uses OpenSearch as its backend storage.
{"level":"info","ts":1706710346.7175736,"caller":"spanstore/reader.go:583","msg":"es search services failed","traceQuery":{"ServiceName":"Prior-Auth","OperationName":"HTTP GET","Tags":{},"StartTimeMin":"2024-01-31T13:12:26.252Z","StartTimeMax":"2024-01-31T14:12:26.252Z","DurationMin":0,"DurationMax":0,"NumTraces":20},"error":"elastic: Error 400 (Bad Request): all shards failed [type=search_phase_execution_exception]"}
{"level":"error","ts":1706710346.717812,"caller":"app/http_handler.go:492","msg":"HTTP handler, Internal Server Error","error":"search services failed: elastic: Error 400 (Bad Request): all shards failed [type=search_phase_execution_exception]","stacktrace":"github.com/jaegertracing/jaeger/cmd/query/app.(*APIHandler).handleError\n\tgithub.com/jaegertracing/jaeger/cmd/query/app/http_handler.go:492\ngithub.com/jaegertracing/jaeger/cmd/query/app.(*APIHandler).search\n\tgithub.com/jaegertracing/jaeger/cmd/query/app/http_handler.go:241\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\ngithub.com/jaegertracing/jaeger/cmd/query/app.(*APIHandler).handleFunc.traceResponseHandler.func2\n\tgithub.com/jaegertracing/jaeger/cmd/query/app/http_handler.go:536\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\ngithub.com/jaegertracing/jaeger/cmd/query/app.(*APIHandler).handleFunc.WithRouteTag.func3\n\tgo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp@v0.47.0/handler.go:281\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\ngo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*middleware).serveHTTP\n\tgo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp@v0.47.0/handler.go:225\ngo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.NewMiddleware.func1.1\n\tgo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp@v0.47.0/handler.go:83\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\tgithub.com/gorilla/mux@v1.8.1/mux.go:212\ngithub.com/jaegertracing/jaeger/cmd/query/app.createHTTPServer.additionalHeadersHandler.func4\n\tgithub.com/jaegertracing/jaeger/cmd/query/app/additional_headers_handler.go:28\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\ngithub.com/jaegertracing/jaeger/cmd/query/app.createHTTPServer.CompressHandler.CompressHandlerLevel.func6\n\tgithub.com/gorilla/handlers@v1.5.1/compress.go:141\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\ngithub.com/gorilla/handlers.recoveryHandler.ServeHTTP\n\tgithub.com/gorilla/handlers@v1.5.1/recovery.go:78\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2938\nnet/http.(*conn).serve\n\tnet/http/server.go:2009"}
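One thing worth checking: EFS does not report usage the way a block volume does. df on an NFS mount shows the file system's effectively unlimited size, and the metered size shown in the AWS console is only recalculated periodically, so it can lag behind actual writes. A minimal sketch of how one might verify that OpenSearch is actually writing to the mount, assuming the pod is named opensearch-0 and the data path is /usr/share/opensearch/data (both are placeholders, substitute your own):

```shell
# Assumptions: the pod name "opensearch-0" and the mount path
# "/usr/share/opensearch/data" are examples; adjust for your cluster.

# Confirm the EFS mount is visible inside the pod (look for an nfs4 entry).
kubectl exec opensearch-0 -- mount | grep nfs4

# Measure how much data actually sits on the mount. du walks the files
# directly, so it is not affected by the delayed EFS metered-size report.
kubectl exec opensearch-0 -- du -sh /usr/share/opensearch/data

# List the contents of the data directory to see index files being written.
kubectl exec opensearch-0 -- ls -lh /usr/share/opensearch/data
```

If du shows data growing while the console still reports ~30 KB, the mount is working and the discrepancy is just the delayed metered-size calculation.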
@Debolek, I am facing a similar problem: df -h is not reflecting the current usage correctly, and there is a delay of more than 10 minutes. Could you please guide me on this?
I am using EKS with the Fargate node type, and the EFS volume is successfully mounted in the pod, but I am not sure how to run the df -h command on a Fargate node.
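With Fargate there is no node you can SSH into, but you can run df inside the pod itself with kubectl exec, which works the same on Fargate as on EC2 nodes. A minimal sketch, assuming a pod named my-app in the default namespace (substitute your own pod and namespace names):

```shell
# Find the pod that has the EFS volume mounted.
kubectl get pods -n default

# Run df -h inside the pod; the EFS mount appears as an NFS file
# system (its size is typically reported as 8.0E, i.e. effectively unlimited).
kubectl exec -n default my-app -- df -h

# If the pod has multiple containers, target one explicitly with -c:
kubectl exec -n default my-app -c app-container -- df -h
```

Because the Fargate "node" is just the pod's sandbox, inspecting the mount from inside the pod is the practical equivalent of running df on the node.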