Owen,
Thank you for taking the time to post about your experience with the S3 File Gateway. I work on the Storage Gateway product team and would be interested in learning more about how you use Storage Gateway. Would you be interested in connecting?
Jesse
I've found the answer, and I still think it's a terrible limitation.
We were applying this filter to the EventBridge rule:
```json
{
  "detail": {
    "object-size": [{ "numeric": [">", 0] }]
  },
  "detail-type": ["Storage Gateway Object Upload Event"],
  "source": ["aws.storagegateway"]
}
```
It turns out the numeric test can only handle values in the range -5.0e9 to +5.0e9 inclusive [1], so objects over 5 GB were being filtered out because the comparison couldn't even do a simple "is the object greater than 0 bytes in size" test. The point of the test is to stop our step function triggering when folders are created on the storage gateway, and to trigger only for actual file objects.
Is there a better way to achieve what we want within the limits of the JSON filtering, i.e. trigger on the upload event, but not when the event is for a folder object being created (a 0-byte object)?
Can I also ask why such a numeric limitation exists at all?
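One idea we haven't been able to verify is inverting the test with `anything-but` exact matching instead of a `numeric` range comparison. If `anything-but` isn't subject to the same operator limit (an assumption, not something the docs confirmed for us), it would drop only the 0-byte folder objects:

```json
{
  "source": ["aws.storagegateway"],
  "detail-type": ["Storage Gateway Object Upload Event"],
  "detail": {
    "object-size": [{ "anything-but": [0] }]
  }
}
```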
Owen.
EDIT: I have found a workaround by adding a Choice state in the step function, where I can use the NumericGreaterThan and NumericEquals comparisons. These don't seem to have the odd limitations placed on the EventBridge filters, but it means I'm now running the step function on every object upload instead of only when I know the lambda is actually needed, so it does have cost implications.
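For anyone hitting the same thing, the Choice guard from the EDIT looks roughly like this in ASL (the `$.detail.object-size` path is assumed from the EventBridge event shape, and the state names, wait time, and function name are placeholders):

```json
{
  "StartAt": "IsRealObject",
  "States": {
    "IsRealObject": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.detail.object-size",
          "NumericGreaterThan": 0,
          "Next": "WaitBeforeHold"
        }
      ],
      "Default": "SkipFolderObject"
    },
    "WaitBeforeHold": {
      "Type": "Wait",
      "Seconds": 300,
      "Next": "ApplyLegalHold"
    },
    "ApplyLegalHold": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": { "FunctionName": "apply-legal-hold" },
      "End": true
    },
    "SkipFolderObject": { "Type": "Succeed" }
  }
}
```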
TL;DR: we want to store long-term data that doesn't yet have a known "end date" in terms of retention. Some data could be removed within a few months, other data after years, but we don't know which at the point of upload.
To aid in malware protection, bucket versioning is enabled, and we want to place a legal hold on the objects as soon as we can after upload. Again, we cannot use the default retention rules because we don't have an end date; a legal hold gives us that "infinite" hold.
So we want to trigger a lambda, after a short-ish delay, to apply the hold to the objects. Fully understandably, the Storage Gateway EventBridge events also fire when folder objects are created. However, we didn't want to trigger the step function we use to create the time delay for every single event: at the time it was just a simple start -> wait -> trigger lambda -> end process, so every execution also invoked the lambda. Each run costs very little, but thousands of very small charges become very big very quickly, and with many tens of thousands of files, and maybe as many folder objects, it gets expensive.
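A minimal sketch of the lambda side, assuming the event carries the bucket and key (the `bucket-name` and `object-key` field names here are assumptions about the upload event payload) and that versioning supplies a version ID for `put_object_legal_hold` to lock:

```python
def build_legal_hold_kwargs(bucket, key, version_id):
    """Arguments for s3.put_object_legal_hold: an indefinite ON hold."""
    return {
        "Bucket": bucket,
        "Key": key,
        "VersionId": version_id,
        "LegalHold": {"Status": "ON"},
    }


def lambda_handler(event, context):
    # boto3 is imported here so this file loads cleanly outside the
    # Lambda runtime (where the SDK is preinstalled).
    import boto3

    s3 = boto3.client("s3")
    detail = event["detail"]  # field names assumed from the upload event
    bucket, key = detail["bucket-name"], detail["object-key"]
    # Resolve the version that was just written so the hold lands on it.
    version_id = s3.head_object(Bucket=bucket, Key=key)["VersionId"]
    s3.put_object_legal_hold(**build_legal_hold_kwargs(bucket, key, version_id))
```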
Putting a filter on the EventBridge rule that simply asks "is the uploaded object larger than 0 bytes?" is a no-brainer, bar the very odd and quite bizarre limitation of the numeric comparison.
The range given (-5.0e9 to +5.0e9 inclusive) looks like it could be some fixed-width integer, but it doesn't actually match a 32-bit signed int (which tops out around ±2.1e9), so I'm not sure where the limit comes from.
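For what it's worth, a quick check shows the documented bound doesn't line up with a signed 32-bit integer:

```python
# Signed 32-bit integers cover -2**31 .. 2**31 - 1, i.e. about +/-2.1e9.
INT32_MAX = 2**31 - 1                # 2147483647
EVENTBRIDGE_LIMIT = 5_000_000_000    # +5.0e9, per the numeric-matching docs

print(INT32_MAX)                     # 2147483647
print(EVENTBRIDGE_LIMIT > INT32_MAX) # True: the limit is over twice int32 max
```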