1 Answer
It sounds like you don't want to keep millions of object versions, but you also can't disable object versioning because of possible ransomware attacks. I'd suggest using S3 Lifecycle rules together with the Intelligent-Tiering storage class.
- You can create a lifecycle rule that keeps only a certain number of versions. For instance, keep the current version plus at most five noncurrent versions, and expire the rest once they are more than 5 days old. In the S3 console, under Lifecycle configuration, look for the "Number of newer versions to retain - Optional" setting. (A code sketch of this rule appears after this list.)
- Although Glacier Deep Archive storage is cheap, its PUT request charges are quite high, and it has a minimum storage duration of 180 days. Unless the data is truly archival, it is often not the best option; the request fees can end up costing more than the storage itself.
- You may instead consider the Intelligent-Tiering storage class. You can write objects directly into this class through the Storage Gateway file share settings, and it automatically moves objects to the most cost-effective access tier. You can also enable the optional Archive Access tiers, which move objects to the Deep Archive Access tier at the same storage price as Glacier Deep Archive. Intelligent-Tiering does charge a per-object monitoring fee, but compared with the PUT request fees of the regular Glacier Deep Archive class it can be much more cost effective, and transitions between Intelligent-Tiering access tiers are free of charge. (See the archive-tier configuration sketch below.)
- You can enable default retention for Object Lock in Governance mode. In that case, only users with the governance-bypass permission can remove an object version within the retention period; once retention expires, any user with delete access to the bucket can remove it. (A sketch of this configuration also follows below.)
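Here is a minimal boto3 sketch of the lifecycle rule described in the first bullet. The bucket name is a placeholder, and the retention count and age are just example values to adapt:

```python
import boto3

s3 = boto3.client("s3")

# Keep the current version plus up to five noncurrent versions; noncurrent
# versions beyond that are expired once they are more than 5 days old.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-versioned-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "limit-noncurrent-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": 5,
                    "NewerNoncurrentVersions": 5,
                },
            }
        ]
    },
)
```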
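And a sketch of enabling the optional Archive Access tiers for Intelligent-Tiering, again with a placeholder bucket name. The 90- and 180-day thresholds shown are the minimums allowed; tune them to your access patterns:

```python
import boto3

s3 = boto3.client("s3")

# Opt objects stored in INTELLIGENT_TIERING into the optional archive tiers.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="example-versioned-bucket",  # placeholder bucket name
    Id="archive-tiers",
    IntelligentTieringConfiguration={
        "Id": "archive-tiers",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```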
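Finally, a sketch of setting default Governance-mode retention, assuming the bucket already has Object Lock enabled (typically at creation time); the 30-day period is only an example:

```python
import boto3

s3 = boto3.client("s3")

# Apply a default Governance-mode retention of 30 days to new object versions.
s3.put_object_lock_configuration(
    Bucket="example-versioned-bucket",  # placeholder bucket name
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {
                "Mode": "GOVERNANCE",
                "Days": 30,
            }
        },
    },
)
```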
answered 2 years ago
Thank you for taking the time to respond...
I'm still confused about the best way to deal with the 2 versions that are created immediately when a file is copied to the File Gateway NFS share, and about why AWS doesn't already have a solution to mitigate this, as I see NO use-case where that behaviour is actually desirable...
I'm going to assume that if there were a way to delay the File Gateway 'upload' until after the file was completely written, that would stop the 2-version thing? A user-configurable option to delay upload from the on-prem cache by a set time (either 0 for immediate, or any integer number of minutes/hours) would basically solve most issues...
Object locking is an important option for archive data, but it's also kind of annoying that the whole versioning thing makes lifecycle management harder.
Sorry to comment again; I was just wondering if it was possible to have 'some' level of feedback on my previous comment re: dealing with the 2 object versions that are immediately created upon upload. It seems like I may need some form of Lambda that triggers on each PUT into the bucket from the gateway, checks whether the object is the newest, and automatically removes the 2nd version... it does seem a bit of a 'hacky' way to deal with a problem that (in my opinion) shouldn't really exist in the first place...
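For what it's worth, here is a rough sketch of that Lambda idea, triggered by s3:ObjectCreated:* notifications on the bucket. The handler name and permissions are assumptions, it removes every noncurrent version of the affected key (which partly undermines the versioning protection), and if Object Lock retention applies, deleting a version would require governance-bypass permission or would simply fail, so treat this as a starting point only:

```python
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Sketch: remove noncurrent versions left behind when File Gateway
    writes an object twice in quick succession."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = unquote_plus(record["s3"]["object"]["key"])

        versions = s3.list_object_versions(Bucket=bucket, Prefix=key).get("Versions", [])
        for v in versions:
            # Only touch this exact key, and never the current (latest) version.
            if v["Key"] == key and not v["IsLatest"]:
                s3.delete_object(Bucket=bucket, Key=key, VersionId=v["VersionId"])
```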