AWS Neptune Vector Sampling OOM


I was trying Gremlin random walk algorithms and hitting OOM while sampling just one node, so I reduced the traversal to g.V().sample(1), and I'm still hitting OOM even on that simple operation. The DB itself is large, with 200M relationships and 45M vectors, but I'm surprised by this performance. Any idea how I could trace the issue?

MarkT
asked 2 months ago · 106 views
1 Answer
Accepted Answer

Hello and thanks for the question. The Apache TinkerPop Gremlin sample step currently uses an implementation that reads the data into memory before taking a sample from it. That may change in the future, but for now, are you able to put a limit in front of the step, so you sample from something like limit(50000)?

Depending upon the instance size being used, a larger instance type may increase the available memory enough for the whole sample to complete, but in the short term, try different limit sizes as a way to get at least some results.

So g.V().limit(50000).sample(1)
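A minimal sketch of how the limit could be tuned, assuming a Gremlin console already connected to the cluster (the limit values are illustrative, not recommendations):

    // Bound the working set before sampling so the sample step does not
    // pull every vertex into memory.
    g.V().limit(50000).sample(1)

    // If that completes comfortably on the instance, raise the bound to
    // improve coverage of the graph, trading memory for randomness.
    g.V().limit(500000).sample(1)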

AWS
AWS-KRL
answered 2 months ago
EXPERT
reviewed 2 months ago
  • Would there be a way to ensure the sample can tend towards capturing all nodes? If I wanted the random walk to be able to occur across all nodes, would the limit restrict the directions the walk could take?

  • For reference as well - this is a single-instance DB, currently running on an r5.2xlarge.

  • If you had something like g.V().limit(100).sample(1).out(), then yes, it would mean the random walk could only ever start from one of the 100 vertices found by the limit step.

  • You could also potentially look at something like the coin step with a very small value, say perhaps coin(0.0001).sample(1) (see the sketch after these comments).

  • Gotcha - good to know, thank you! Might just have to run the Neptune exporter and not sample.
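A rough sketch of the coin-based alternative mentioned in the comments above, assuming the walk should be able to start anywhere in the graph rather than only within the first N vertices (the probability and walk depth are illustrative):

    // Each vertex independently passes the coin filter with probability
    // 0.0001, keeping the candidate set small without limiting it to the
    // first N vertices returned by the storage layer.
    g.V().coin(0.0001).sample(1).
      // From the single sampled start vertex, take one random outgoing
      // edge per iteration for a short walk, then return the path taken.
      repeat(out().sample(1)).times(5).
      path()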
