
Storage

AWS offers a complete range of services for you to store, access, govern, and analyze your data to reduce costs, increase agility, and accelerate innovation. Select from object storage, file storage, and block storage services, backup, and data migration options to build the foundation of your cloud IT environment.

Recent questions


Problem uploading media to AWS S3 with Django Storages / Boto3 (from a website on Lambda)

Hi all! I have a Django website deployed on AWS Lambda. All static and media files are stored in an S3 bucket. I managed to serve static files from S3 and it works fine; however, when I try to upload media through the admin (adding an article with a picture attached), I get the message "Endpoint request timed out". Here is my AWS and storage configuration:

**ukraine101.aws.utils.py**

```
from storages.backends.s3boto3 import S3Boto3Storage

StaticRootS3BotoStorage = lambda: S3Boto3Storage(location='static')
MediaRootS3BotoStorage = lambda: S3Boto3Storage(location='media')
```

**settings.py**

```
STATICFILES_DIRS = [BASE_DIR / "static"]
STATIC_URL = 'https://<my-bucket-name>.s3.amazonaws.com/'
MEDIA_URL = 'https://<my-bucket-name>.s3.amazonaws.com/media/'
MEDIA_ROOT = MEDIA_URL
DEFAULT_FILE_STORAGE = 'ukraine101.aws.utils.MediaRootS3BotoStorage'
STATICFILES_STORAGE = 'ukraine101.aws.utils.StaticRootS3BotoStorage'
AWS_STORAGE_BUCKET_NAME = '<my-bucket-name>'
AWS_S3_REGION_NAME = 'us-east-1'
AWS_ACCESS_KEY_ID = '<my-key-i-dont-show>'
AWS_SECRET_ACCESS_KEY = '<my-secret-key-i-dont-show>'
AWS_S3_SIGNATURE_VERSION = 's3v4'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
AWS_S3_VERIFY = True
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATICFILES_LOCATION = 'static'
```

**My Article model:**

```
class Article(models.Model):
    title = models.CharField(max_length=250)
    summary = models.TextField(blank=False, null=False)
    image = models.ImageField(blank=False, null=False, upload_to='articles/')
    text = RichTextField(blank=False, null=False)
    category = models.ForeignKey(Category, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
    featured = models.BooleanField(default=False)
    date_created = models.DateField(auto_now_add=True)
    slug = AutoSlugField(populate_from='title')
    related_book = models.ForeignKey(Book, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)

    def get_absolute_url(self):
        return reverse("articles:article-detail", kwargs={"slug": self.slug})

    def get_comments(self):
        return Comment.objects.filter(article=self.id)

    author = models.ForeignKey(User, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
```

**AWS bucket policy:**

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:GetObjectAcl"
            ],
            "Resource": "arn:aws:s3:::<my-bucket-name>/*"
        }
    ]
}
```

**CORS:**

```
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "POST", "PUT", "HEAD"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
```

**User permissions policies (there are two attached):**

Policy 1:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets"],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads",
                "s3:ListBucketVersions"
            ],
            "Resource": "arn:aws:s3:::<my-bucket-name>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:*Object*",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::<my-bucket-name>/*"
        }
    ]
}
```

Policy 2:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "s3-object-lambda:*"
            ],
            "Resource": [
                "arn:aws:s3:::<my-bucket-name>",
                "arn:aws:s3:::<my-bucket-name>/*"
            ]
        }
    ]
}
```

Please, if someone knows what might be wrong and why this timeout is happening, help me.
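A minimal diagnostic sketch, not from the original post, that can help narrow this down: it uploads a small test object with boto3 directly from the same Lambda environment, bypassing Django admin entirely. The bucket name, key, and handler name are placeholders (not values from the post), and the short timeouts are there so that a missing network path to S3, a common cause of hangs for VPC-attached Lambdas, fails fast instead of waiting out the gateway timeout.

```
# Diagnostic sketch (assumptions: placeholder bucket/key, standalone test handler).
import io

import boto3
from botocore.config import Config

BUCKET = "<my-bucket-name>"  # placeholder, same convention as the question

s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    # Fail fast instead of hanging if the function has no route to S3
    # (e.g. a VPC-attached Lambda without a NAT gateway or S3 VPC endpoint).
    config=Config(connect_timeout=5, read_timeout=10, retries={"max_attempts": 1}),
)


def handler(event, context):
    # Upload a tiny test object under the same "media/" prefix django-storages uses.
    s3.upload_fileobj(io.BytesIO(b"hello"), BUCKET, "media/upload-test.txt")
    return {"status": "ok"}
```

If this direct upload succeeds while the admin upload still times out, the cause is more likely the API Gateway integration timeout or the function's networking than the bucket policy or IAM policies shown above.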
0 answers · 0 votes · 5 views · asked 13 hours ago

Enabling S3 Encryption-at-rest on a go-forward basis with s3fs

Hi,

We have some buckets that have been around for a while (approx. 200 GB+ of data) and we want to **turn on** encryption-at-rest using SSE-S3, the most "transparent" option: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html

The S3 buckets are mounted on our Linux VMs using s3fs (https://github.com/s3fs-fuse/s3fs-fuse), which has support for this and seems fairly transparent.

As far as we can tell, enabling this only applies on a go-forward basis, so the older files that already exist will not be encrypted at rest (which is OK, we can backfill them later).

Has anybody tried this before with this combination? If we mount the bucket using s3fs with the `-o use_sse` option, what will happen while the files are half-and-half? Will it "just work", i.e. s3fs mounted with `-o use_sse` can handle files that are BOTH the old way (not encrypted at rest) and the new way (encrypted at rest), and we can backfill the older files as we have time? Or will this fail catastrophically the minute we mount the S3 bucket?

Or is the solution to start a new bucket with SSE-S3 enabled and then start moving the files over? (We have done this before, by having code in our application check for a file in multiple buckets before giving up.)

Of course, we will test all of this; we just wanted to ask a quick question in case we are worrying about this too much, and whether this is "no big deal" or "be very careful".

Thanks!
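For reference, a boto3 sketch (not from the original question) of what the go-forward-plus-backfill approach could look like: enable SSE-S3 default encryption on the existing bucket, then re-encrypt older objects by copying them onto themselves. The bucket name is a placeholder, and objects larger than 5 GB would need a multipart copy instead of `copy_object`.

```
# Sketch only (assumptions: placeholder bucket name, SSE-S3/AES256, objects <= 5 GB).
import boto3

BUCKET = "my-existing-bucket"  # placeholder
s3 = boto3.client("s3")

# 1. Turn on default encryption: new objects (including ones written via s3fs)
#    are encrypted with SSE-S3 from this point on.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# 2. Backfill: re-encrypt pre-existing objects by copying each one in place.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        head = s3.head_object(Bucket=BUCKET, Key=key)
        if "ServerSideEncryption" not in head:  # only touch unencrypted objects
            s3.copy_object(
                Bucket=BUCKET,
                Key=key,
                CopySource={"Bucket": BUCKET, "Key": key},
                ServerSideEncryption="AES256",
                MetadataDirective="COPY",
            )
```

One general point that may ease the half-and-half worry: SSE-S3 decryption happens server-side on GET, so a client reading a mix of encrypted and unencrypted objects does not need to send anything extra to read either kind.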
0 answers · 0 votes · 7 views · asked 2 days ago
