permissions to create but not delete or update

0

What I want/need:

Let an IAM user add new files to a S3 bucket - but not overwrite or delete them.

It seems like S3 Object Lock goes somewhat in that direction. I assume versioning is required for underlying infrastructure reasons, to avoid central coordination. But this still lets the user delete - it just means the original version remains.

When another user then downloads from the bucket, the file appears deleted (the delete marker becomes the current version). That's not desired. The user should get the original/first version.

Can I ensure that always the first/oldest version is delivered on a GET? Or is there any other way than Object Lock to disallow PUT/DELETE for existing objects?

tcurdt
asked 4 months ago · 244 views
2 Answers
1

Hello,

  • Use S3 object versioning and IAM policies to control access. You can create a policy that allows the IAM user to PUT new object versions but denies them permission to delete any versions. When other users GET objects, they will retrieve the oldest/first version by default since that is the versioning behaviour.
  • Use S3 object locking with the "Governance" mode to prevent any overwrite or deletion of objects that have been locked. You would need to write a process to lock objects after they are initially uploaded. Other users would still retrieve the first version.
  • Maintain a separate "immutable" bucket or folder only for initial uploads, then copy objects to a second "mutable" bucket after they are no longer meant to be changed. Control access separately between the two locations.
  • Create a Lambda function triggered by S3 PUT events that validates the object being uploaded is new (by checking the key) before allowing the write to succeed. For GET, no additional logic is needed since the first version is returned.
Let me know if any of these approaches could work for your use case or if you have any other questions! Versioning combined with IAM is likely the simplest option to implement.
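A policy along the lines of the first bullet might look like this (a sketch; `foo-bucket` is a placeholder, and note that allowing `s3:PutObject` still permits a PUT to an existing key, which creates a new version in a versioned bucket):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploads",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::foo-bucket/*"
        },
        {
            "Sid": "DenyDeletes",
            "Effect": "Deny",
            "Action": [
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": "arn:aws:s3:::foo-bucket/*"
        }
    ]
}
```

The explicit Deny on `s3:DeleteObjectVersion` matters in a versioned bucket, since deleting a specific version is a separate permission from placing a delete marker.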

Thanks

Abhinav

answered 4 months ago
  • The first bullet point is incorrect. If an object is overwritten, Amazon S3 adds a new object version to the bucket. The previous version remains in the bucket and becomes a noncurrent version, which you can restore. The default GET returns the current/latest version.

  • The 4th point is also incorrect. You can't run S3 event notifications before an object is written, only after it is created. The rest sound like good options.
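Since events only fire after the write, the closest client-side approximation is to check for the key before uploading. A minimal sketch (the helper `upload_if_new` is hypothetical, `$BUCKET` must be set, and the check-then-put is racy - it narrows the overwrite window but does not close it, which is why Object Lock is the robust option):

```shell
# Refuse to upload if the key already exists in the bucket.
upload_if_new() {
    local key="$1" file="$2"
    if aws s3api head-object --bucket "$BUCKET" --key "$key" >/dev/null 2>&1; then
        echo "refusing to overwrite existing key: $key" >&2
        return 1
    fi
    aws s3api put-object --bucket "$BUCKET" --key "$key" --body "$file"
}
```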

0

Just like Gary said: the default behaviour is to get the most recent version, which isn't particularly great for a write-once scenario.

The best I could come up with is this setup:

# Generate a random, all-lowercase bucket name (S3 names must be lowercase, max 63 chars)
BUCKET=foo-$(uuidgen | tr -d '-' | tr '[:upper:]' '[:lower:]' | cut -c 1-60)
REGION=eu-central-1

# Object Lock must be enabled when the bucket is created
aws s3api create-bucket --bucket $BUCKET --region $REGION --create-bucket-configuration LocationConstraint=$REGION --object-lock-enabled-for-bucket

# Versioning is required for Object Lock (enabled automatically by the flag above, but made explicit here)
aws s3api put-bucket-versioning --bucket $BUCKET --versioning-configuration Status=Enabled

# Expire objects after 7 days
aws s3api put-bucket-lifecycle-configuration --bucket $BUCKET --lifecycle-configuration '{
    "Rules": [
        {
            "Expiration": {
                "Days": 7
            },
            "ID": "ExpireDataRule",
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled"
        }
    ]
}'

# Lock every new object version in governance mode for 7 days
aws s3api put-object-lock-configuration --bucket $BUCKET --object-lock-configuration '{
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "GOVERNANCE",
            "Days": 7
        }
    }
}'

# Cleanup: remove the bucket again (only succeeds once it is empty)
aws s3 rb s3://$BUCKET

But that's not really ideal.

tcurdt
answered 4 months ago
