ERROR: Max frame length of 65536 has been exceeded


I am querying what is not a particularly large data set, but I receive the following error:
[code]ERROR org.apache.tinkerpop.gremlin.driver.Handler$GremlinResponseHandler - Could not process the response
io.netty.handler.codec.http.websocketx.CorruptedWebSocketFrameException: Max frame length of 65536 has been exceeded.
at io.netty.handler.codec.http.websocketx.WebSocket08FrameDecoder.protocolViolation(WebSocket08FrameDecoder.java:426)
at io.netty.handler.codec.http.websocketx.WebSocket08FrameDecoder.decode(WebSocket08FrameDecoder.java:286)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1518)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1267)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:748)
Max frame length of 65536 has been exceeded.[/code]
I can query up to around 20 of them, but with any more than that I get this error (there are only 31 vertices in total).

The vertices are all of one type/label; the query is g.V().hasLabel('DATALABEL').valueMap(true).toList().
The vertices do have a good few properties, including set-cardinality properties whose values are JSON strings.

asked 3 years ago · 1239 views
2 Answers
Accepted Answer

It is likely due to the large number of properties that you're trying to fetch. You will need to increase the maxContentLength() on your client (per: https://stackoverflow.com/a/53188692/10130372) to address this issue.
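For reference, a minimal sketch of raising the limit with the TinkerPop Java driver. The endpoint name is a placeholder and the 4 MB value is illustrative; size it to your actual responses.
[code]
import java.util.List;
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;

public class MaxContentLengthExample {
    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.build()
                .addContactPoint("your-neptune-endpoint")   // placeholder host
                .port(8182)
                .enableSsl(true)
                .maxContentLength(4 * 1024 * 1024)          // driver default is 65536 bytes
                .create();
        Client client = cluster.connect();
        try {
            List<Result> results =
                    client.submit("g.V().hasLabel('DATALABEL').valueMap(true)").all().get();
            results.forEach(r -> System.out.println(r.getObject()));
        } finally {
            cluster.close();
        }
    }
}
[/code]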

Unless you are doing this for simplicity's sake, it is typically of little benefit to store things like JSON strings or extraneous properties in the graph when they are not used as part of your graph traversals. Presently, Gremlin cannot "break open" a JSON string and query the contents within it. Also realize that every vertex, vertex property, edge, and edge property in Neptune is indexed on write, so these extra properties contribute to the overall capacity needed in your cluster. Again, many developers choose to go this route for simplicity. Another approach would be to store such values in a key/value table (DynamoDB) or in S3 and fetch them separately from the graph traversal, as sketched below.
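A hedged sketch of that separate-fetch approach, using the AWS SDK for Java v2: the graph stores only a pointer (here, the vertex id) and the JSON payload lives in S3. The bucket name and key scheme are assumptions.
[code]
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

public class PayloadFetchExample {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            // "my-payload-bucket" and the "payloads/<vertexId>.json" key scheme
            // are hypothetical -- adapt them to however you externalize the data.
            String vertexId = "v-123";  // would normally come from the traversal
            String json = s3.getObjectAsBytes(
                    GetObjectRequest.builder()
                            .bucket("my-payload-bucket")
                            .key("payloads/" + vertexId + ".json")
                            .build())
                    .asUtf8String();
            System.out.println(json);
        }
    }
}
[/code]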

AWS
answered 3 years ago

Thanks. External storage/fetch isn't a great option for the current use case. Simplicity is part of it, but not all of it; I also receive the data for the vertices as objects with those fields already as JSON. I am considering breaking the objects out as separate vertices, though. The dev on that part is still uncertain whether he wants to retain multiple versions or just the latest; if multiple, we will likely move to external storage.
Would breaking the objects out be an improvement, or of no particular benefit for size/efficiency? The JSON fields are never used for searches.

answered 3 years ago
