
S3 Intelligent-Tiering on existing Glacier Deep Archive data


Hi,

We have a bucket with Glacier Deep Archive data. It was put there by a lifecycle rule: "at 180 days, move from S3 Standard to Glacier Deep Archive".

I want to edit the rule to further optimize costs:

Option 1: at 0 days => Intelligent-Tiering; at 180 days => Glacier Deep Archive.

Option 2: at 0 days => Intelligent-Tiering, with the opt-in asynchronous Deep Archive Access tier at 180 days.

For either option, my main concern is: will applying this lifecycle rule bring existing data out of Glacier Deep Archive into Intelligent-Tiering? Or will the data in any "deeper" tier remain there?

Thank you !

3 Answers
Accepted Answer

Applying a new lifecycle rule to your bucket with existing Glacier Deep Archive data will not automatically move that data out of Glacier Deep Archive into the Intelligent-Tiering storage class. The existing data will remain in its current storage class.

For your proposed options:

Option 1: This rule would apply to new objects or objects that haven't yet reached Glacier Deep Archive. New objects would start in Intelligent-Tiering and then move to Glacier Deep Archive 180 days after creation; note that lifecycle transitions are based on object age, not access patterns.
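
As a rough sketch of what the option 1 rule could look like in boto3 (the bucket name, rule ID, and empty prefix filter are placeholder assumptions, not taken from the question):

```python
import boto3

s3 = boto3.client("s3")

# Option 1 as a lifecycle configuration: objects enter Intelligent-Tiering
# immediately and transition to Glacier Deep Archive 180 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "int-then-deep-archive",  # placeholder rule name
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
                    {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```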

Option 2: This is a good approach for cost optimization. New objects would be placed in Intelligent-Tiering immediately, and the opt-in asynchronous Deep Archive Access tier (equivalent to Glacier Deep Archive) would automatically archive objects after 180 consecutive days without access, all within the Intelligent-Tiering storage class.
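
For comparison, opting in to the Deep Archive Access tier is a bucket-level Intelligent-Tiering configuration rather than a lifecycle transition. A minimal sketch, again with a placeholder bucket name and configuration ID:

```python
import boto3

s3 = boto3.client("s3")

# Option 2: opt in to the asynchronous Deep Archive Access tier so that
# objects in Intelligent-Tiering that go 180 consecutive days without
# access are archived automatically, with no lifecycle transition needed.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="example-bucket",  # placeholder
    Id="deep-archive-after-180-days",  # placeholder configuration ID
    IntelligentTieringConfiguration={
        "Id": "deep-archive-after-180-days",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```

Note that this configuration only governs objects already in the Intelligent-Tiering storage class; you would still use a lifecycle rule (or upload with StorageClass=INTELLIGENT_TIERING) to get objects into that class in the first place.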

In both cases, your existing Glacier Deep Archive data will stay where it is. Lifecycle rules are forward-looking and do not retroactively change the storage class of objects that have already transitioned to a "deeper" archival tier.

If you want to move your existing Glacier Deep Archive data to Intelligent-Tiering, you would need to restore the objects first and then explicitly copy them into the Intelligent-Tiering storage class. This process would incur retrieval costs and should be carefully planned.
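
As an illustrative sketch only (bucket and key names are placeholders, and a Bulk retrieval is assumed to keep costs down), that two-step process could look like:

```python
import boto3

s3 = boto3.client("s3")

# Step 1: request a temporary restored copy of the archived object.
# Bulk is the cheapest retrieval tier; Deep Archive restores take hours.
s3.restore_object(
    Bucket="example-bucket",   # placeholder
    Key="archive/data.bin",    # placeholder
    RestoreRequest={
        "Days": 7,  # how long the temporary restored copy stays available
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
)

# Step 2 (only after the restore has completed): copy the object over
# itself with a new storage class, which replaces the Deep Archive object
# with an Intelligent-Tiering one. Objects over 5 GB need multipart copy.
s3.copy_object(
    Bucket="example-bucket",
    Key="archive/data.bin",
    CopySource={"Bucket": "example-bucket", "Key": "archive/data.bin"},
    StorageClass="INTELLIGENT_TIERING",
    MetadataDirective="COPY",
)
```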

Remember that S3 Intelligent-Tiering is designed to optimize costs by automatically moving data between access tiers based on usage patterns. It includes a small monthly monitoring and automation charge per object, but it can lead to significant savings for data with changing access patterns.
Sources
Save on storage costs using Amazon S3
How S3 Intelligent-Tiering works - Amazon Simple Storage Service
Using S3 Intelligent-Tiering - Amazon Simple Storage Service
Managing S3 Intelligent-Tiering - Amazon Simple Storage Service

answered a month ago

Thank you for the quick reply. I have a small follow-up question:

Option 2: This is a good approach for cost optimization. New objects would be placed in Intelligent-Tiering immediately, and the opt-in asynchronous Deep Archive Access tier (equivalent to Glacier Deep Archive) would automatically archive objects after 180 consecutive days without access, all within the Intelligent-Tiering storage class.

Do you mean that using option 2 (the asynchronous tiers in Intelligent-Tiering) entirely removes the $0.05 per 1,000 objects transition cost INTO Glacier Deep Archive, AND the $0.0025/GB (Bulk) retrieval cost out of it?

So to calculate the cost benefit, I need to weigh the monitoring charge for the number of monitored objects against the cost of archiving and occasional retrieval using lifecycle rules and manual restores?

answered a month ago

That's right, @Geert_VDW. Intelligent-Tiering (INT) adds the monitoring charge, but all transitions and restore operations within the Intelligent-Tiering class are included in that fixed fee, except for separately requested expedited restores from the archive (offline) tiers.
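
To make the trade-off concrete, here is a back-of-the-envelope sketch. The per-request and per-GB prices are the ones quoted in this thread, the monitoring charge is the commonly quoted $0.0025 per 1,000 objects per month, and the object counts and restore volumes are pure assumptions; all of these vary by region and over time:

```python
# Illustrative break-even sketch; figures are assumptions, not quotes.
objects = 1_000_000      # assumption: number of objects in the bucket
restored_gb = 50         # assumption: GB restored per year (Bulk)
months = 12              # comparison window

# Lifecycle route: one-time transition requests plus per-GB Bulk restores.
transition_cost = objects / 1_000 * 0.05   # $0.05 per 1,000 requests
retrieval_cost = restored_gb * 0.0025      # $0.0025 per GB (Bulk)

# Intelligent-Tiering route: monitoring charge on every monitored object;
# transitions and standard/bulk restores inside INT are included.
monitoring_cost = objects / 1_000 * 0.0025 * months

print(f"Lifecycle route: ${transition_cost + retrieval_cost:,.2f} "
      f"(transitions + restores)")
print(f"INT monitoring over {months} months: ${monitoring_cost:,.2f}")
```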

However, note that the two options are not functionally equivalent. When you rely on INT to auto-tier objects into the archive tiers, that only happens after an object hasn't been accessed even once in 180 consecutive days. If you use a lifecycle rule to transition objects from INT to S3 Glacier Deep Archive, that happens on the day you set, regardless of access patterns.

On the other hand, restore operations from the INT Deep Archive Access tier are more streamlined than with the regular S3 Glacier Deep Archive. When you execute a RestoreObject operation in INT, the object will be automatically lifted back up to the Frequent Access tier, where it starts the regular automatic tiering process from the beginning.

By comparison, when you restore an object from the regular S3 Glacier Deep Archive storage class, the archived object remains in Deep Archive, and a temporary copy is made available for the number of days you configured in the restore request. If you want to move the object back to online storage permanently, you have to make that copy yourself, make sure it doesn't get transitioned back to Deep Archive unless you want it to, and delete the original from Deep Archive (since a copy now exists in INT or Standard) to avoid paying for two copies of the same object.

EXPERT
answered a month ago
