Questions tagged with Encryption
JDBC and RDS PostgreSQL TLS Encryption connection problem
We use an AWS EC2 instance and RDS PostgreSQL, and we deployed a Java program on the EC2 instance that uses JDBC for queries. According to the AWS and JDBC documentation, AWS RDS PostgreSQL supports TLS-encrypted connections by default, and JDBC also uses an encrypted connection by default (we did not set the sslmode parameter). But when I capture packets on the EC2 instance, I see the traffic in plaintext. Why? Command on the EC2 instance: `sudo tcpdump -i any port 5432 -w pgtest.pcap`. I was expecting to see TLS 1.3 or TLS 1.2, but the packet contents are all PGSQL.
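For reference, a minimal sketch of what explicitly forcing TLS would look like with the PostgreSQL JDBC driver (pgJDBC), rather than relying on the default; the endpoint, credentials, and CA bundle path are placeholders, and `pg_stat_ssl` reports whether the current session actually negotiated TLS:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class PgTlsCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical RDS endpoint; sslrootcert points at the RDS CA bundle
        // (global-bundle.pem) downloaded separately from AWS.
        String url = "jdbc:postgresql://mydb.xxxxxxxx.us-east-1.rds.amazonaws.com:5432/postgres"
                + "?sslmode=verify-full"
                + "&sslrootcert=/path/to/global-bundle.pem";

        Properties props = new Properties();
        props.setProperty("user", "myuser");
        props.setProperty("password", "mypassword");

        try (Connection conn = DriverManager.getConnection(url, props);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT ssl FROM pg_stat_ssl WHERE pid = pg_backend_pid()")) {
            rs.next();
            // true means this session negotiated TLS; a tcpdump capture on port 5432
            // should then show a TLS handshake instead of cleartext PGSQL messages.
            System.out.println("TLS in use: " + rs.getBoolean(1));
        }
    }
}
```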
Evaluating Timestream DB for a multi-tenant SaaS application - per-database encryption quota limit
We are evaluating Timestream DB for handling metrics for a SaaS application we are developing on AWS. To maintain data isolation, we are looking at encrypting each customer's data separately. For Timestream this appears to mean we need a separate database per customer, since encryption is applied at the database level. In short, this limits us to 500 databases per account (the current quota limit), and therefore 500 customers, if we stick to a single AWS account. I've checked the Service Quotas console and it appears none of these limits can be increased. Do you have a recommended pattern for overcoming this limitation, or a different approach? Thanks,
AWS instance and credentials
Good afternoon. I want to apologize in advance if the question is phrased poorly; I am not a native English speaker, so my question may be misunderstood, but I will try to ask it as clearly as possible. There is a client-server application. The client runs on an instance, and the server runs inside an enclave (AWS Nitro Enclaves). For the client to connect to the server, the client must send a request for credentials. On the client, a script temporarily creates a file containing the credentials for this connection: the credentials are copied from the server's memory into this file, and the file is then deleted. I would like to protect this somehow, either by encrypting the file or by finding an alternative **SAFE** solution that avoids this process using other tools AWS offers. Is it possible to automate the transfer of the credentials that the client takes from the server and inserts into its application? Because the credentials are temporarily stored unencrypted, I think this is a serious vulnerability in my application. It is enough to give me an idea for solving the problem; I will try to figure out the rest myself. AWS has a fairly large amount of material and it is very difficult to find the right topic, but I am sure that among its tools there is a solution to my problem. Thanks.
Will the master key in KMS get rotated? What happens when the master key gets rotated?
We are planning to encrypt data in our service using a data key. The AWS encryption library takes a master key ARN as an input parameter for encryption. My understanding is that the data key is created by the KMS service, and the plaintext data key is returned along with a copy of the data key encrypted under the master key. The encrypted data key is then stored alongside the encrypted data. During decryption, the encrypted data key is decrypted using KMS and the master key. Now my questions are: 1. If someone gets access to the master key ARN, can they use it to recover the plaintext? In that case, how does KMS ensure protection? 2. I remember that KMS rotates the master key (I hope I am correct here). If the key gets rotated, what happens to all the data keys that were encrypted using the old master key?
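For reference, here is roughly the flow I am describing, sketched with the AWS Encryption SDK for Java (2.x API); the key ARN and payload are placeholders:

```java
import com.amazonaws.encryptionsdk.AwsCrypto;
import com.amazonaws.encryptionsdk.CryptoResult;
import com.amazonaws.encryptionsdk.kms.KmsMasterKey;
import com.amazonaws.encryptionsdk.kms.KmsMasterKeyProvider;

import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.Map;

public class EnvelopeEncryptionSketch {
    public static void main(String[] args) {
        // Placeholder ARN of the KMS key ("master key") used to protect data keys.
        String keyArn = "arn:aws:kms:us-east-1:111122223333:key/example-key-id";

        AwsCrypto crypto = AwsCrypto.standard();
        KmsMasterKeyProvider keyProvider = KmsMasterKeyProvider.builder().buildStrict(keyArn);

        // Optional encryption context, bound to the ciphertext and checked on decrypt.
        Map<String, String> context = Collections.singletonMap("purpose", "demo");

        byte[] plaintext = "sensitive payload".getBytes(StandardCharsets.UTF_8);

        // Encrypt: the SDK requests a fresh data key from KMS under keyArn, encrypts
        // locally, and embeds the encrypted data key inside the returned message.
        CryptoResult<byte[], KmsMasterKey> encrypted =
                crypto.encryptData(keyProvider, plaintext, context);
        byte[] ciphertext = encrypted.getResult();

        // Decrypt: the SDK extracts the encrypted data key from the message and asks
        // KMS to decrypt it; only callers authorized to use keyArn can do this.
        CryptoResult<byte[], KmsMasterKey> decrypted =
                crypto.decryptData(keyProvider, ciphertext);
        System.out.println(new String(decrypted.getResult(), StandardCharsets.UTF_8));
    }
}
```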
Deploy new CDK Aurora cluster
Hi team, I have an Aurora DB cluster that was already created via CDK. For compliance reasons, I changed my CDK code to enable encryption on my Aurora DB. When deploying the CDK code, a new cluster was created (with the encryption option enabled this time), but I got this error:

```
Myst1Stack | 2/9 | 10:15:20 a.m. | UPDATE_ROLLBACK_IN_P | AWS::CloudFormation::Stack | Myst1Stack Export Myst1Stack:ExportsOutputFnGetAttdb123D8BCEndpointAddressABC1244 cannot be updated as it is in use by Myst2Stack, Myst3Stack and Myst4Stack
```

Any idea how I can resolve the dependency issue so I can deploy my new CDK code? Thank you
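One pattern I have seen suggested for this error is to keep the old export alive explicitly with `Stack.exportValue()` while the consuming stacks are updated, then drop it in a later deployment. A rough CDK-for-Java sketch, assuming CDK v2 and that the blocked export is the cluster endpoint address (the helper and names are hypothetical):

```java
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.services.rds.DatabaseCluster;

public final class EndpointExportHelper {
    private EndpointExportHelper() {
    }

    // Call this from the stack that owns the cluster (e.g. Myst1Stack) while the
    // consuming stacks (Myst2/3/4Stack) still import the endpoint address export.
    // Stack.exportValue() re-creates the same CloudFormation export explicitly,
    // so CloudFormation does not try to remove it while it is still in use.
    // Remove this call once the consumers point at the new cluster.
    public static void keepEndpointExport(Stack owningStack, DatabaseCluster cluster) {
        owningStack.exportValue(cluster.getClusterEndpoint().getHostname());
    }
}
```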
Migrate data to a new Aurora cluster
Hi team, I modified my CDK code to add the encryption option to my existing Aurora DB cluster, which created a brand new cluster. Now I have 2 clusters:

- the old cluster (non-encrypted)
- the new cluster created after the CDK changes (the encrypted one)

How can I take the existing data from my old cluster (the non-encrypted one) and load it into the newly created cluster (the encrypted one created following the CDK changes)? Thank you :)
Restore encrypted Snapshot
Hi Team, I have a few questions about encryption in Amazon Aurora:

1. I have an RDS Aurora MySQL Multi-AZ cluster with 1 reader and 1 writer, and this cluster is unencrypted. I now want to make it encrypted. What are the steps to follow to encrypt the Aurora cluster? I saw that I can take a snapshot and encrypt it, but the exact steps are not listed.
2. My cluster is created via CDK. If I update the cluster via CDK to add the encryption option, will this create a new cluster or update the existing one? How can I restore (encrypted) data to the cluster once it's updated?
3. I tried to create the snapshot at the cluster level, but there is no option to take the snapshot there; the option is only available at the reader or the writer level. Must the snapshot be taken at the writer level? If yes, when I restore that snapshot, will that create the whole Multi-AZ cluster (cluster + reader + writer)?
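For question 1, what I understand the snapshot route to look like, sketched with the AWS SDK for Java v2; the identifiers are placeholders and the builder method names are from memory and worth double-checking against the SDK docs:

```java
import software.amazon.awssdk.services.rds.RdsClient;
import software.amazon.awssdk.services.rds.model.CopyDbClusterSnapshotRequest;
import software.amazon.awssdk.services.rds.model.RestoreDbClusterFromSnapshotRequest;

public class EncryptAuroraViaSnapshot {
    public static void main(String[] args) {
        try (RdsClient rds = RdsClient.create()) {
            // 1) Copy the manual (unencrypted) cluster snapshot, specifying a KMS key
            //    so the copy is encrypted at rest. In practice, wait for the copy to
            //    become "available" before restoring from it.
            rds.copyDBClusterSnapshot(CopyDbClusterSnapshotRequest.builder()
                    .sourceDBClusterSnapshotIdentifier("my-unencrypted-cluster-snap") // placeholder
                    .targetDBClusterSnapshotIdentifier("my-encrypted-cluster-snap")
                    .kmsKeyId("alias/aws/rds") // or a customer-managed key ARN
                    .build());

            // 2) Restore a new, encrypted cluster from the encrypted snapshot copy.
            //    The restore creates only the cluster; reader/writer instances are
            //    added afterwards.
            rds.restoreDBClusterFromSnapshot(RestoreDbClusterFromSnapshotRequest.builder()
                    .dbClusterIdentifier("my-encrypted-cluster")
                    .snapshotIdentifier("my-encrypted-cluster-snap")
                    .engine("aurora-mysql")
                    .build());
        }
    }
}
```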
DataSync with EFS Source fails when policy requires encryption in transit.
We have been using DataSync with no issues with an EFS drive that does not require encryption in transit. For compliance reasons, we have moved to a drive that requires encryption in transit, and DataSync to the new drive fails. When I remove the policy, the task completes. When I restore the policy, the task fails. Now what?
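For context, the setting that seems relevant here is the DataSync EFS location's in-transit encryption option. A rough sketch of creating such a location with the AWS SDK for Java v2, assuming placeholder ARNs (the access-point and file-system access role parameters also exist on this API but are omitted here):

```java
import software.amazon.awssdk.services.datasync.DataSyncClient;
import software.amazon.awssdk.services.datasync.model.CreateLocationEfsRequest;
import software.amazon.awssdk.services.datasync.model.Ec2Config;
import software.amazon.awssdk.services.datasync.model.EfsInTransitEncryption;

public class EfsLocationWithTls {
    public static void main(String[] args) {
        try (DataSyncClient dataSync = DataSyncClient.create()) {
            // Placeholder ARNs; the key part is InTransitEncryption = TLS1_2, so that
            // DataSync mounts the file system over TLS and a file system policy that
            // denies unencrypted transport no longer blocks the task.
            dataSync.createLocationEfs(CreateLocationEfsRequest.builder()
                    .efsFilesystemArn("arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0")
                    .ec2Config(Ec2Config.builder()
                            .subnetArn("arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0")
                            .securityGroupArns("arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0")
                            .build())
                    .inTransitEncryption(EfsInTransitEncryption.TLS1_2)
                    .build());
        }
    }
}
```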
Route 53 weighted routing to multiple CloudFront distributions in the same hosted zone doesn't work
I have 2 CloudFront distributions with alternate domain names `a.example.com` and `b.example.com`, and they're set up in Route 53 so that when I access either one, it reaches the correct CloudFront distribution. Now I want to create a new subdomain `c.example.com` in Route 53 that uses weighted routing (125-125) so the traffic is evenly split between `a.example.com` and `b.example.com`. I've set up the routing, but I get the following error when I access `c.example.com`:

```
Secure Connection Failed

An error occurred during a connection to c.example.com. Cannot communicate securely with peer: no common encryption algorithm(s). Error code: SSL_ERROR_NO_CYPHER_OVERLAP

The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. Please contact the website owners to inform them of this problem. Learn more…
```

My cert is managed by ACM, and it has both `*.example.com` and `example.com` listed. My goal is to be able to create a new subdomain that can route traffic to the existing CloudFront distributions. Any ideas if this is possible?
Enabling S3 Encryption-at-rest on a go-forward basis with s3fs
Hi,

We have some buckets that have been around for a while (approx. 200 GB+ of data) and we want to **turn on** encryption-at-rest using SSE-S3, the most "transparent" way: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html

The S3 buckets are mounted on our Linux VMs using s3fs (https://github.com/s3fs-fuse/s3fs-fuse), which has support for this and seems fairly transparent.

As I understand it, you can only enable this on a go-forward basis, so the older files that already exist will not be encrypted at rest (which is OK, we can backfill them later).

Has anybody tried this combination before? If we mount the bucket using s3fs with the `-o use_sse` option, what will happen while the files are half-and-half? Will it "just work", i.e. s3fs mounted with `-o use_sse` can handle BOTH the old files (not encrypted at rest) and the newer files (encrypted at rest), so we can backfill the older files as we have time? Or will this fail catastrophically the minute we mount the S3 bucket? :(

Or is the solution to just start a new bucket with SSE-S3 enabled and begin moving the files over? (We have done this before, in the sense of having code in our application check for a file in multiple buckets before giving up.)

Of course, we will test all of this; we just wanted to ask a quick question in case we are worrying about this too much, and whether this is "no big deal" or "be very careful".

Thanks!
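For reference, turning on default SSE-S3 encryption for the existing bucket is a single bucket-level call; a rough sketch with the AWS SDK for Java v2, with a placeholder bucket name. New uploads (including those written through s3fs) are then encrypted at rest, while objects that already exist keep their current state until re-copied:

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutBucketEncryptionRequest;
import software.amazon.awssdk.services.s3.model.ServerSideEncryption;
import software.amazon.awssdk.services.s3.model.ServerSideEncryptionByDefault;
import software.amazon.awssdk.services.s3.model.ServerSideEncryptionConfiguration;
import software.amazon.awssdk.services.s3.model.ServerSideEncryptionRule;

public class EnableDefaultSseS3 {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            // Enable default encryption (SSE-S3 / AES-256) for new objects in the bucket.
            // Objects that already exist are NOT re-encrypted by this call; they keep
            // whatever state they had, which is why the bucket can hold a mix for a while.
            s3.putBucketEncryption(PutBucketEncryptionRequest.builder()
                    .bucket("my-example-bucket") // placeholder bucket name
                    .serverSideEncryptionConfiguration(ServerSideEncryptionConfiguration.builder()
                            .rules(ServerSideEncryptionRule.builder()
                                    .applyServerSideEncryptionByDefault(ServerSideEncryptionByDefault.builder()
                                            .sseAlgorithm(ServerSideEncryption.AES256)
                                            .build())
                                    .build())
                            .build())
                    .build());
        }
    }
}
```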