2 Answers
1
I was unable to get the `$project` or `$addFields` stages to work with more than 50 fields, and could not find any documentation surrounding this limit. However, at least in the case of `$project`, it appears you can work around the limitation by chaining multiple `$addFields` stages followed by a `$replaceRoot` stage:
```javascript
db.mytable.aggregate([
  { $addFields: { out: { /** fields 1-50 */ } } },
  { $addFields: { out: { /** fields 51-100 */ } } },
  { $addFields: { out: { /** fields 101-150 */ } } },
  { $replaceRoot: { newRoot: "$out" } },
]);
```
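If you have many fields, you can generate the chunked stages rather than writing them by hand. This is a sketch of that idea: the `buildChunkedPipeline` helper name and the chunk size of 50 are my own assumptions, and it only builds the pipeline array in the same shape as the workaround above; it does not talk to the database.

```javascript
// Sketch (assumed helper, not a DocumentDB API): split a large field spec
// into $addFields stages of at most `chunkSize` fields each, writing into a
// temporary "out" subdocument, then promote "out" to the root via $replaceRoot.
function buildChunkedPipeline(fields, chunkSize = 50) {
  const entries = Object.entries(fields);
  const stages = [];
  for (let i = 0; i < entries.length; i += chunkSize) {
    // One $addFields stage per chunk of up to chunkSize fields.
    stages.push({
      $addFields: { out: Object.fromEntries(entries.slice(i, i + chunkSize)) },
    });
  }
  // Replace each document with the assembled "out" subdocument.
  stages.push({ $replaceRoot: { newRoot: "$out" } });
  return stages;
}

// Usage: db.mytable.aggregate(buildChunkedPipeline({ a: "$a", b: "$b", /* ... */ }))
```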
Answered 1 year ago
-1
Hi. You'll need to identify which stage in the aggregation pipeline is triggering the error. I've seen this error before on the `$group` stage when there are too many fields to group by; there is a limit of 50 group fields.
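One way to find the offending stage is to run successively longer prefixes of the pipeline until the error appears. A minimal sketch (the `pipelinePrefixes` helper is hypothetical, not a DocumentDB API; running each prefix against the database is left to the caller):

```javascript
// Sketch (assumed helper): return every prefix of an aggregation pipeline.
// Running db.collection.aggregate(prefix) for each one, in order, until the
// "Too many fields" error appears pinpoints which stage triggers it.
function pipelinePrefixes(pipeline) {
  return pipeline.map((_, i) => pipeline.slice(0, i + 1));
}

// Usage:
// for (const prefix of pipelinePrefixes(myPipeline)) {
//   db.mytable.aggregate(prefix); // the first prefix that errors ends in the bad stage
// }
```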
Answered 2 years ago
Our team has hit the same issue when using an aggregate call with a stage that contains about 90 items. Is the 50-field limit you suggest mentioned anywhere in the DocumentDB documentation, for example in the "Amazon DocumentDB Quotas and Limits" section? I am not able to find any information about that limit there.
We are experiencing this exact same error trying to do an aggregate() with a single `$project` stage. In testing an arbitrary list of null-valued `$project` fields, any field count over 50 produces a "Too many fields." error from the server. I currently need to transform documents with over 250 fields, and because of this limitation I am not sure how to do it. I'm hoping this is configurable, or that I can find a workaround.