
Questions tagged with Encryption


AWS instance and Credentials

Good afternoon. I apologize in advance if the question comes out wrong; I am not a native English speaker and my question may be misunderstood, but I will try to ask it as clearly as possible. I have a client-server application. The client runs on an EC2 instance and the server runs in a Nitro Enclave. For the client to connect to the server, it has to request credentials. A script on the client temporarily creates a file holding the credentials for this connection: the credentials are copied out of the server's memory into this file, and the file is deleted afterwards. I would like to protect this step somehow, either by encrypting the file or by finding an alternative **SAFE** solution that avoids this process using other AWS tools. Is it possible to automate the transfer of the credentials that the client takes from the server and inserts into its application? Because the credentials are temporarily stored unencrypted, I think this is a serious vulnerability in my application. It is enough to give me an idea for solving the problem; I will try to work out the details myself. AWS has a fairly large amount of material and it is very difficult to find the right topic, but I am sure its tools can offer a solution to my problem. Thanks.
2 answers · 0 votes · 80 views · asked 4 months ago
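
One idea that may fit the credentials question above, sketched under assumptions: instead of parking the plaintext credentials in a temporary file, encrypt them with AWS KMS so that only ciphertext ever touches disk and the plaintext exists only in memory. The snippet below is a minimal illustration in Python with boto3; the key alias `alias/client-creds` and the file-handling helpers are hypothetical placeholders, not part of the original setup.

```python
import json
import boto3

kms = boto3.client("kms")
KEY_ID = "alias/client-creds"  # hypothetical KMS key alias -- substitute your own key

def protect_credentials(credentials: dict, path: str) -> None:
    """Encrypt the credential blob with KMS so only ciphertext is written to disk."""
    ciphertext = kms.encrypt(
        KeyId=KEY_ID,
        Plaintext=json.dumps(credentials).encode("utf-8"),
    )["CiphertextBlob"]
    with open(path, "wb") as f:
        f.write(ciphertext)

def load_credentials(path: str) -> dict:
    """Read the ciphertext back and decrypt it in memory only."""
    with open(path, "rb") as f:
        ciphertext = f.read()
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
    return json.loads(plaintext)
```

Since Nitro Enclaves can call KMS with attestation-based key policies, another direction worth researching is having the enclave itself produce a ciphertext that only an attested consumer can decrypt, which removes the plaintext hand-off entirely; the AWS Nitro Enclaves documentation covers that integration.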

Enabling S3 Encryption-at-rest on a go-forward basis with s3fs

Hi, we have some buckets that have been around for a while (approx. 200 GB+ of data) and we want to **turn on** encryption-at-rest using SSE-S3, the most "transparent" option: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html

The buckets are mounted on our Linux VMs using s3fs (https://github.com/s3fs-fuse/s3fs-fuse), which has support for SSE and seems fairly transparent about it.

As far as we can tell, enabling this only applies to objects written on a go-forward basis, so the older files that already exist will not be encrypted at rest (which is OK, we can backfill them later).

Has anybody tried this combination before? If we mount the bucket using s3fs with the `-o use_sse` option, what will happen while the files are half-and-half? Will it "just work", i.e. will s3fs mounted with `-o use_sse` handle both the old files (not encrypted at rest) and the newer files (encrypted at rest), so we can backfill the older files as we have time? Or will this fail catastrophically the minute we mount the S3 bucket? :(

Or is the solution to start a new bucket with SSE-S3 enabled and begin moving the files over? (We have done this before, in the sense of having code in our application check multiple buckets for a file before giving up.)

Of course we will test all of this; we just wanted to ask a quick question in case we are worrying about it too much, and whether this is "no big deal" or "be very careful". Thanks!
1 answer · 0 votes · 49 views · asked 5 months ago
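
For the s3fs question, a pattern that is often used (independent of s3fs) is to enable SSE-S3 as the bucket default and then backfill the existing objects by copying each one onto itself, which rewrites it under the new encryption setting. A rough boto3 sketch, assuming a hypothetical bucket name of `example-bucket`:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # hypothetical bucket name

# Enable SSE-S3 as the bucket default; new objects are encrypted from this point on.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Backfill: re-copy any object that is not yet encrypted so S3 rewrites it with SSE-S3.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        head = s3.head_object(Bucket=BUCKET, Key=key)
        if "ServerSideEncryption" not in head:  # skip objects already encrypted at rest
            s3.copy_object(
                Bucket=BUCKET,
                Key=key,
                CopySource={"Bucket": BUCKET, "Key": key},
                ServerSideEncryption="AES256",
                MetadataDirective="COPY",
            )
```

Two caveats: objects larger than 5 GB cannot be copied in a single `copy_object` call and need a multipart copy, and since SSE-S3 decryption happens server-side, reading a mix of encrypted and unencrypted objects through a client such as s3fs should generally be transparent; testing against a scratch bucket, as planned, is the safe way to confirm the `-o use_sse` behavior.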