
Questions tagged with AWS Key Management Service



1
answers
0
votes
9
views
asked a month ago

Cannot access encrypted files from RDS in S3 bucket

I export data from an Aurora Postgres instance to S3 via the `aws_s3.query_export_to_s3` function. The destination bucket does not have default encryption enabled. When I try to download one of the files I get the following error:

> The ciphertext refers to a customer mast3r key that does not exist, does not exist in this region, or you are not allowed to access.

Note: I had to change the word mast3r because this forum doesn't allow me to post it as it is a "non-inclusive" word... The reason seems to be that the files got encrypted with the AWS managed RDS key, which has the following policy:

```
{
  "Version": "2012-10-17",
  "Id": "auto-rds-2",
  "Statement": [
    {
      "Sid": "Allow access through RDS for all principals in the account that are authorized to use RDS",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:CallerAccount": "123456789",
          "kms:ViaService": "rds.eu-central-1.amazonaws.com"
        }
      }
    },
    {
      "Sid": "Allow direct access to key metadata to the account",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789:root" },
      "Action": [
        "kms:Describe*",
        "kms:Get*",
        "kms:List*",
        "kms:RevokeGrant"
      ],
      "Resource": "*"
    }
  ]
}
```

I assume that the access doesn't work because of the `ViaService` condition when trying to decrypt the file via S3. I tried to access the files with the root user instead of an IAM user, and it works. Is there any way to get access with an IAM user? As far as I know, you cannot modify the policy of an AWS managed key. I also don't understand why the root user can decrypt the file, as the policy doesn't explicitly grant it decrypt permissions other than those that apply when KMS is called via RDS.
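The `ViaService` reading above can be illustrated with a toy evaluator for the `StringEquals` condition in the first policy statement. This is not how AWS actually evaluates policies; the request-context values are illustrative assumptions, showing only why an RDS export passes the condition while a direct S3 download of the same ciphertext does not:

```python
# Toy check of the StringEquals condition from the key policy above.
# NOT real IAM/KMS evaluation; purely illustrative.

def string_equals_condition_matches(condition: dict, request_context: dict) -> bool:
    """True only if every key in the StringEquals block matches the request context."""
    return all(request_context.get(key) == expected
               for key, expected in condition.items())

condition = {
    "kms:CallerAccount": "123456789",
    "kms:ViaService": "rds.eu-central-1.amazonaws.com",
}

# RDS export calling KMS on the caller's behalf: condition satisfied.
rds_request = {"kms:CallerAccount": "123456789",
               "kms:ViaService": "rds.eu-central-1.amazonaws.com"}

# S3 GetObject decrypting the same file: ViaService is now S3, so the
# Allow statement does not apply and the IAM user is denied.
s3_request = {"kms:CallerAccount": "123456789",
              "kms:ViaService": "s3.eu-central-1.amazonaws.com"}

print(string_equals_condition_matches(condition, rds_request))  # True
print(string_equals_condition_matches(condition, s3_request))   # False
```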
1
answers
0
votes
4
views
asked a month ago

Application-side data protection with FIPS 140-2 Level 3: what to use out of the Encryption SDK, KMS, or CloudHSM?

Hello there, I have a requirement in my application to encrypt and decrypt data using a symmetric key algorithm (mostly AES/CBC/PKCS5Padding). Constraints and requirements:

1. I need to use a FIPS 140-2 Level 3 compliant key storage solution.
2. This is existing encrypted data, so I should be able to import my existing (plaintext) keys into whatever solution I use.
3. Keys should remain exportable in the future, so that data encrypted with the new solution WILL NOT require re-encryption with new keys.

Keeping the above points in mind, I came across the solutions below, and I need guidance if someone finds one of them unsuitable or in conflict with the requirements I listed.

1. Use the AWS Encryption SDK with AWS KMS, with a custom key store backed by my own CloudHSM cluster.
2. Use CloudHSM directly, via the standard CloudHSM JCE provider and client SDK.
3. Use AWS KMS via the KMS API, with a custom key store backed by my own CloudHSM cluster.

I know #2 will work without breaking any of my requirements and compliance list, but I want to see if I can use the Encryption SDK and/or KMS for my use case, since the SDK encapsulates industry best practices for cryptography code instead of my writing all of it myself (as with direct CloudHSM integration). However, the following points stop me:

1. Custom key stores cannot work with imported keys, so that breaks requirement #2.
2. I can use the AWS Encryption SDK with KMS, but as import does not work with custom key stores, that is not usable either. Can I use the AWS Encryption SDK somehow to encrypt data directly with CloudHSM?
3. Envelope encryption (as done by the AWS Encryption SDK) is really the more secure approach for symmetric encryption. If I use it today and later want to move to CloudHSM, will that break the decryption flow?

Any suggestions, lessons learned, insights, or architectural direction are greatly appreciated.
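For context on point 3: envelope encryption wraps a fresh data key per message, so migrating to a different wrapping-key holder only requires re-wrapping the small data key, never re-encrypting the payload. A toy sketch of the pattern using only the standard library — the XOR keystream is a stand-in for AES, and the `kms_wrap`/`hsm_wrap` callables are stand-ins for KMS and an HSM; every name here is an illustrative assumption, not real Encryption SDK API:

```python
import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 counter mode). A stand-in for AES; never use for real data."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def envelope_encrypt(wrap, plaintext: bytes):
    """Encrypt under a fresh data key; return (wrapped_data_key, ciphertext)."""
    data_key = os.urandom(32)
    return wrap(data_key), _keystream_xor(data_key, plaintext)

def envelope_decrypt(unwrap, wrapped_key: bytes, ciphertext: bytes) -> bytes:
    return _keystream_xor(unwrap(wrapped_key), ciphertext)

# Two interchangeable "key stores": moving from one wrapper to the other
# (e.g. KMS -> CloudHSM) re-wraps the 32-byte data key, not the ciphertext.
kms_master = os.urandom(32)
hsm_master = os.urandom(32)
kms_wrap = lambda dk: _keystream_xor(kms_master, dk)  # XOR: wrap == unwrap
hsm_wrap = lambda dk: _keystream_xor(hsm_master, dk)

wrapped, ct = envelope_encrypt(kms_wrap, b"existing application data")
rewrapped = hsm_wrap(kms_wrap(wrapped))  # migrate: unwrap with "KMS", re-wrap with "HSM"
assert envelope_decrypt(hsm_wrap, rewrapped, ct) == b"existing application data"
```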
1
answers
0
votes
7
views
asked 2 months ago

S3 bucket creation with encryption is failing because of AWSSamples::S3BucketEncrypt::Hook

Hi, I have activated **AWSSamples::S3BucketEncrypt::Hook** with the following configuration but S3 bucket creation with encryption enabled seems to be failing because of the hook. It works when I disable the hook. Could this be an issue?

```
{
  "CloudFormationConfiguration": {
    "HookConfiguration": {
      "TargetStacks": "ALL",
      "FailureMode": "FAIL",
      "Properties": {
        "minBuckets": "1",
        "encryptionAlgorithm": "AES256"
      }
    }
  }
}
```

```
{
  "CloudFormationConfiguration": {
    "HookConfiguration": {
      "TargetStacks": "ALL",
      "FailureMode": "FAIL",
      "Properties": {
        "minBuckets": "1",
        "encryptionAlgorithm": "aws:kms"
      }
    }
  }
}
```

[AWSSamples::S3BucketEncrypt::Hook configuration](https://imgur.com/w9NnjEP)
[AWSSamples::S3BucketEncrypt::Hook](https://imgur.com/OsETMvV)

**CloudFormation for S3 bucket with AES256 encryption** - Expected to Pass

```
AWSTemplateFormatVersion: 2010-09-09
Description: S3 bucket with default encryption
Resources:
  EncryptedS3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Sub 'encryptedbucket-${AWS::Region}-${AWS::AccountId}'
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: 'AES256'
    DeletionPolicy: Delete
```

**CloudFormation for S3 bucket with KMS encryption** - Expected to Pass

```
AWSTemplateFormatVersion: "2010-09-09"
Description: This CloudFormation template provisions an encrypted S3 Bucket
Resources:
  EncryptedS3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Sub 'encryptedbucket-${AWS::Region}-${AWS::AccountId}'
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: 'aws:kms'
              KMSMasterKeyID: !Ref EncryptionKey
            BucketKeyEnabled: true
      Tags:
        - Key: "keyname1"
          Value: "value1"
  EncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      Description: KMS key used to encrypt the resource type artifacts
      EnableKeyRotation: true
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
          - Sid: Enable full access for owning account
            Effect: Allow
            Principal:
              AWS: !Ref "AWS::AccountId"
            Action: kms:*
            Resource: "*"
Outputs:
  EncryptedBucketName:
    Value: !Ref EncryptedS3Bucket
```
0
answers
0
votes
9
views
asked 3 months ago

How to properly use KMS in Step Functions?

I'm working on SAML identification workflows in Step Functions where SAML messages have to be signed and the returned Assertion is also encrypted. I will use KMS to store two different asymmetric keys (one for sign/verify, the other for encrypt/decrypt) and tried to call, for example, `kms:Sign` and `kms:Decrypt` from Step Functions through the SDK integrations, i.e. task ARNs `arn:aws:states:::aws-sdk:kms:sign` and `arn:aws:states:::aws-sdk:kms:decrypt`, but I can only retrieve binary data in the responses, which is not Base64-encoded. That's correct per the documentation: "When you use the HTTP API or the AWS CLI, the value is Base64-encoded. Otherwise, it is not Base64-encoded." Can I somehow always get a Base64-encoded response, or use the binary response in the context of Step Functions JSON payloads? I can't figure out either. Am I correct that Step Functions can't decode/encode Base64? I also tried proxying through API Gateway (which I think uses the HTTP API), but KMS always responds with 400 because CiphertextBlob can't be null. It isn't null: the value is clearly visible in the "request body payload after transformations" step, and I also can't figure out what prevents calling KMS through API Gateway. If I use a Lambda function to Base64-decode the request body, call the KMS operation, and Base64-encode the response body, everything works nicely, except that importing the SDK into the Lambda code increases total latency by several hundred milliseconds, because cold starts are much slower with the SDK imported. Can I somehow avoid the Lambda overhead and use KMS straight from Step Functions or through API Gateway?
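The Lambda shim described in the question can stay very small. A minimal sketch of that handler, with the KMS call injected as a callable so the Base64 plumbing can be shown (and run) on its own — in a real function `decrypt` would be a boto3 `kms.decrypt(...)["Plaintext"]` call; `fake_decrypt` and the event shape here are illustrative assumptions:

```python
import base64
import json

def make_handler(decrypt):
    """Build a handler that base64-decodes the incoming CiphertextBlob,
    calls `decrypt` on the raw bytes, and base64-encodes the plaintext back
    so the result is safe to carry in a Step Functions JSON payload."""
    def handler(event, context=None):
        ciphertext = base64.b64decode(event["CiphertextBlob"])
        plaintext = decrypt(ciphertext)  # real code: kms.decrypt(...)["Plaintext"]
        return {"Plaintext": base64.b64encode(plaintext).decode("ascii")}
    return handler

# Stand-in for KMS so the JSON/Base64 round trip is testable locally.
fake_decrypt = lambda ct: ct[::-1]

handler = make_handler(fake_decrypt)
event = {"CiphertextBlob": base64.b64encode(b"terces").decode("ascii")}
print(json.dumps(handler(event)))  # {"Plaintext": "c2VjcmV0"}
```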
0
answers
0
votes
6
views
asked 3 months ago

Cross Account Copy S3 Objects From Account B to AWS KMS-encrypted bucket in Account A

My Amazon Simple Storage Service (Amazon S3) bucket in Account A is encrypted with an AWS managed AWS Key Management Service (AWS KMS) key. I have created a Lambda function to copy objects from Account B to Account A, whose S3 bucket uses the AWS managed KMS key for server-side encryption. When the function executes and tries to copy objects from Account B to the Account A S3 bucket, I get an Access Denied error. I came across a Knowledge Center article that covers the same scenario **except for one difference**: it talks about a **customer managed key** as the server-side encryption mechanism. Because they are using a customer managed encryption key, they are able to modify the KMS key policy to grant the Lambda function's role ARN permission for the **kms:Decrypt** action. As mentioned earlier, our S3 bucket is encrypted with an AWS managed key, and we can't modify the key policy because it is managed by AWS. So, my question is: how do we copy objects from S3 buckets in Account B to Account A (with AWS managed KMS encryption enabled)?

Reference links:

* https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-denied-error-s3/
* **Changing a key policy documentation**: https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html#key-policy-modifying-how-to-console-policy-view

Thanks in advance.
2
answers
0
votes
31
views
asked 4 months ago

Running a Glue crawler on encrypted S3 objects in a different account

Hi All, we have an S3 bucket in Account A with SSE-KMS encryption enabled. We want to give a Glue crawler in Account B access to the objects in the bucket. For this we have applied the following steps:

1. Added a bucket policy in Account A to give Account B access to the S3 objects:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::AccountB:root" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::AccountA_Bucket/test/*"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::AccountB:root" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::AccountA_Bucket",
      "Condition": {
        "StringLike": { "s3:prefix": "test/*" }
      }
    }
  ]
}
```

2. Added a KMS key policy statement to grant the kms:Decrypt action to Account B:

```
{
  "Sid": "Allow use of the key",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::AccountB:root" },
  "Action": "kms:Decrypt",
  "Resource": "*"
}
```

3. In Account B, created an IAM role for the Glue crawler, which has access to get objects from S3 in Account A and kms:Decrypt access for the KMS key in Account A:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::AccountA_Bucket/test/*"
      ]
    },
    {
      "Action": [
        "kms:Decrypt"
      ],
      "Effect": "Allow",
      "Resource": "KMSKeyARNOfAccountA"
    }
  ]
}
```

After making the above changes, the Glue crawler is able to run successfully and create a table, but the schema is not as expected, and when we try to run Athena queries on the created table, we receive the following error:

> HIVE_UNKNOWN_ERROR: serDe should not be accessed from a null StorageFormat.

I think this is happening because the table created by the Glue crawler is based on the encrypted object, i.e. it does not decrypt the object before creating the table schema; earlier, when we didn't have any encryption on the S3 bucket, the table schema was created as expected and Athena queries ran on it.

My question is: what changes need to be made so that the Glue crawler first decrypts the objects it receives from the S3 bucket in Account A, before creating the table schema?
1
answers
0
votes
5
views
asked 4 months ago

Understanding usage of S3

Hello, I'm building a small web application for study purposes, using EC2, RDS, and S3 on AWS. I am a free-tier user, and I want to understand my S3 usage, because it is increasing too fast. I just created an S3 bucket and put 81 objects in it, and I accessed my web application 3 times. My web application is public, but no one knows about it but me.

I found that the 'Put, Copy, Post or List Requests' part of my S3 usage is increasing by about 200, and the 'Get Requests' part by about 500. I don't get why the requests increased that much. If I access S3 in the AWS console and just click my bucket (not any single file), would that make the S3 request count increase? If so, is the number of requests equal to the number of objects (in my case, 81 objects)? Also, is there any possibility that my S3 usage is increasing because of EC2 or RDS?

(My Key Management Service usage is also increasing too fast all of a sudden. I've been running my EC2 instance for 2 days; could that be the cause? Or, as I asked above, if I open Key Management Service in the AWS console and click on a key, does the Key Management Service usage increase?)

I had this problem a few weeks ago and couldn't solve it. I'm worried about costs; if usage increases too much like this, I cannot afford to pay that much. At that time the requests were increasing by about 2,000-3,000 a day, even when I didn't put anything into my bucket or get anything from it. I changed my bucket from public to private after 2 days, but it still happened. I then deleted my bucket and the SDK (at that time I used the SDK for S3 as well, but I have removed all of it now), and nothing happened until I made my bucket again. I'm very confused and worried about the costs. I would appreciate it if anyone could help me understand this.
2
answers
0
votes
13
views
asked 4 months ago

Cognito - CustomSMSSender InvalidCiphertextException: null on Code Decrypt (Golang)

Hi, I followed this document to customize the Cognito SMS delivery flow: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-custom-sms-sender.html I'm not working in a JavaScript environment, so I wrote this Go snippet:

```
package main

import (
	"context"
	golog "log"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
)

// USING THIS TYPES BECAUSE AWS-SDK-GO DOES NOT SUPPORTS THIS

// CognitoEventUserPoolsCustomSmsSender is sent by AWS Cognito User Pools before each mail to send.
type CognitoEventUserPoolsCustomSmsSender struct {
	events.CognitoEventUserPoolsHeader
	Request CognitoEventUserPoolsCustomSmsSenderRequest `json:"request"`
}

// CognitoEventUserPoolsCustomSmsSenderRequest contains the request portion of a CustomSmsSender event
type CognitoEventUserPoolsCustomSmsSenderRequest struct {
	UserAttributes map[string]interface{} `json:"userAttributes"`
	Code           string                 `json:"code"`
	ClientMetadata map[string]string      `json:"clientMetadata"`
	Type           string                 `json:"type"`
}

func main() {
	lambda.Start(sendCustomSms)
}

func sendCustomSms(ctx context.Context, event *CognitoEventUserPoolsCustomSmsSender) error {
	golog.Printf("received event=%+v", event)
	golog.Printf("received ctx=%+v", ctx)
	config := aws.NewConfig().WithRegion(os.Getenv("AWS_REGION"))
	session, err := session.NewSession(config)
	if err != nil {
		return err
	}
	kmsProvider := kms.New(session)
	smsCode, err := kmsProvider.Decrypt(&kms.DecryptInput{
		KeyId:          aws.String("a8a566c5-796a-4ba1-8715-c9c17c6f0cb5"),
		CiphertextBlob: []byte(event.Request.Code),
	})
	if err != nil {
		return err
	}
	golog.Printf("decrypted code %v", smsCode.Plaintext)
	return nil
}
```

I'm always getting `InvalidCiphertextException: : InvalidCiphertextException null`. Can someone help?

This is how the Lambda config looks on my user pool:

```
"LambdaConfig": {
  "CustomSMSSender": {
    "LambdaVersion": "V1_0",
    "LambdaArn": "arn:aws:lambda:eu-west-1:...:function:cognito-custom-auth-sms-sender-dev"
  },
  "KMSKeyID": "arn:aws:kms:eu-west-1:...:key/a8a566c5-796a-4ba1-8715-c9c17c6f0cb5"
},
```
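One thing worth checking in the snippet above (an assumption, not confirmed against this exact event): the referenced docs' JavaScript example Base64-decodes `event.request.code` before calling Decrypt, which suggests the code arrives as a Base64-encoded string. If so, `[]byte(event.Request.Code)` passes the Base64 text itself to Decrypt rather than the raw ciphertext, which is the kind of input that yields `InvalidCiphertextException`. The distinction, shown with Python's standard library on made-up bytes:

```python
import base64

# Raw ciphertext bytes as a KMS-style API would produce them (illustrative).
raw_ciphertext = bytes(range(8))

# What a Base64-carrying event field would contain: the Base64 TEXT of those bytes.
code_field = base64.b64encode(raw_ciphertext).decode("ascii")

# []byte(event.Request.Code) in Go corresponds to taking the text's bytes:
wrong_blob = code_field.encode("ascii")

# Decoding first recovers the bytes the Decrypt call actually needs:
right_blob = base64.b64decode(code_field)

assert right_blob == raw_ciphertext
assert wrong_blob != raw_ciphertext  # Base64 text is not the ciphertext itself
```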
1
answers
0
votes
16
views
asked 4 months ago

Unable to delete KMS customer-managed key (CMK) using `AdministratorAccess` Role or root login credentials

A user in one of our accounts accidentally created a KMS managed key with an incorrect policy. Neither the assumed role `AdministratorAccess` nor the root account can delete this key, nor update the policy to enable key deletion.

Using the AWS CLI:

```shell
➜ aws --region us-east-1 kms schedule-key-deletion --key-id <REDACTED> --pending-window-in-days 7

An error occurred (AccessDeniedException) when calling the ScheduleKeyDeletion operation: User: arn:aws:sts::<REDACTED>:assumed-role/AdministratorAccess/<REDACTED> is not authorized to perform: kms:ScheduleKeyDeletion on resource: arn:aws:kms:us-east-1:<REDACTED>:key/<REDACTED> because no resource-based policy allows the kms:ScheduleKeyDeletion action

➜ aws --region us-east-1 kms put-key-policy --policy-name default --policy file://key_policy.json --key-id <REDACTED>

An error occurred (AccessDeniedException) when calling the PutKeyPolicy operation: User: arn:aws:sts::<REDACTED>:assumed-role/AdministratorAccess/<REDACTED> is not authorized to perform: kms:PutKeyPolicy on resource: arn:aws:kms:us-east-1:<REDACTED>:key/<REDACTED> because no resource-based policy allows the kms:PutKeyPolicy action
```

As root on the web console, I get `root is not authorized to perform: kms:DescribeKey on resource` and am unable to view details or change the deletion schedule (which never succeeds). If I try to issue a new key deletion request using the console, I get `Select only keys that aren't already scheduled for deletion.`
3
answers
0
votes
84
views
asked 4 months ago

KMS key policy to allow access to the key only to the role used to create the key

Looking for a KMS key policy that satisfies the following requirement: the role that a user/program assumes to create a KMS key is specified in the key's policy as the only role/user which, when assumed, may access the key in the future. If such a policy is possible, what is it? If not, which specific feature of KMS prevents it?

The approach described below has been unsuccessful. Through the AWS console in account 444444444444, I can create a KMS key with a policy (see below). KMSCreateCustomerKeyRole in account 444444444444 is a cross-account role for account 3333333333333. In the latter there is an encryption_key_manager user in a user group that can assume KMSCreateCustomerKeyRole. Logging into the AWS console as the encryption_key_manager user and assuming KMSCreateCustomerKeyRole, a key with the same policy also gets created successfully.

```
{
  "Version": "2012-10-17",
  "Id": "key-default-1",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::444444444444:root" },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Deny for everyone except the specified role and user",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::444444444444:root",
          "arn:aws:sts::444444444444:assumed-role/KMSCreateCustomerKeyRole/encryption_key_manager",
          "arn:aws:iam::444444444444:role/KMSCreateCustomerKeyRole"
        ]
      },
      "Action": "kms:*",
      "Resource": "*"
    }
  ]
}
```

So far, it works in the sense that other users/roles can't access the key through the console. However, key creation should be done programmatically, so to test this policy I try to create a key via the CLI. The CLI uses the KMSCreateCustomerKeyRole profile, which points to the default encryption_key_manager profile, with an AWS credentials file like:

```
[default]
role_arn = arn:aws:iam::444444444444:role/KMSCreateCustomerKeyRole
source_profile = encryption_key_manager

[encryption_key_manager]
aws_access_key_id = .....
```

get-caller-identity returns

```
{
  "UserId": "AAAAAAAAAAAAAAA:botocore-session-1640070193",
  "Account": "444444444444",
  "Arn": "arn:aws:sts::444444444444:assumed-role/KMSCreateCustomerKeyRole/botocore-session-1640070193"
}
```

The key generation request

```
aws kms create-key --description another_key --policy file://policy.json --region us-east-2
```

results in

```
An error occurred (MalformedPolicyDocumentException) when calling the CreateKey operation: The new key policy will not allow you to update the key policy in the future.
```

The same error is produced for the encryption_key_manager profile:

```
aws kms create-key --description another_key --policy file://policy.json --profile encryption_key_manager --region us-east-2
```

I thought this may happen because, when using the command line, we pass to AWS KMS an ARN with a session ID, and KMS can't match this ARN to any of the ARNs in the NotPrincipal array in the Deny part of the policy. I added the ARN with the session ID (arn:aws:sts::444444444444:assumed-role/KMSCreateCustomerKeyRole/botocore-session-1640070193) to the ARN array, but the error stays the same.
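The session-ARN hypothesis at the end can at least be illustrated: each assumed-role session gets a fresh session name (botocore generates `botocore-session-<timestamp>`), so a NotPrincipal list of exact ARNs can never enumerate future sessions. A small sketch of that mismatch — plain string comparison, not real IAM evaluation, and the second session name is made up:

```python
# NotPrincipal lists concrete ARNs; an assumed-role session ARN embeds a
# session name that changes on every assumption, so a session added to the
# list today will not match the session created tomorrow.

not_principal = [
    "arn:aws:iam::444444444444:root",
    "arn:aws:iam::444444444444:role/KMSCreateCustomerKeyRole",
    "arn:aws:sts::444444444444:assumed-role/KMSCreateCustomerKeyRole/botocore-session-1640070193",
]

def exempt_from_deny(caller_arn: str) -> bool:
    """Exact-match check, mirroring how the NotPrincipal array is written."""
    return caller_arn in not_principal

# The session that was explicitly added to the policy matches...
assert exempt_from_deny(
    "arn:aws:sts::444444444444:assumed-role/KMSCreateCustomerKeyRole/botocore-session-1640070193")

# ...but the very next CLI invocation gets a new session name and no longer does.
assert not exempt_from_deny(
    "arn:aws:sts::444444444444:assumed-role/KMSCreateCustomerKeyRole/botocore-session-1640099999")
```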
5
answers
0
votes
15
views
asked 2 years ago

KMS Key policy ignored over IAM Role

I have a "PowerUserAccess" policy applied to a "Developer" role in my account that is used by multiple users. This role allows access to AWS resources, so anyone with this role can encrypt/decrypt using keys in KMS. I want to restrict encryption/decryption actions on a particular KMS key. For this, I added a deny section to the default KMS policy on this specific key, as below. It denies encrypt/decrypt actions to any principal except those whose userid is the root (12345), the specific roles AROAADMINROLE (account admins) and AROALAMBDAROLE (captures AssumeRole), or the IAM user AIDAMYIAMUSER. In spite of this explicit deny section, users with the Developer role are still able to encrypt/decrypt with the key. Can someone please help me figure out the issue? Similar policies work for restricting our S3 bucket access. I followed this article for building the policies: https://aws.amazon.com/premiumsupport/knowledge-center/explicit-deny-principal-elements-s3/ . It's the same principle for the policy below, using wildcards and StringNotLike in conditions.
**KMS policy**

```
{
  "Id": "my-key-consolepolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::12345:root" },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Allow access for Key Administrators",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::12345:user/my_iam_user" },
      "Action": [
        "kms:Create*",
        "kms:Describe*",
        "kms:Enable*",
        "kms:List*",
        "kms:Put*",
        "kms:Update*",
        "kms:Revoke*",
        "kms:Disable*",
        "kms:Get*",
        "kms:Delete*",
        "kms:TagResource",
        "kms:UntagResource",
        "kms:ScheduleKeyDeletion",
        "kms:CancelKeyDeletion"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ExplicitDenyEncryptDecryptAccess",
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userid": [
            "12345",
            "AROAADMINROLE",
            "AROAADMINROLE:*",
            "AIDALAMBDAROLE:*",
            "AIDALAMBDAROLE",
            "AIDAMYIAMUSER:*",
            "AIDAMYIAMUSER"
          ]
        }
      }
    },
    {
      "Sid": "Allow attachment of persistent resources",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::12345:user/my_iam_user",
          "arn:aws:iam::12345:role/my_lambda_role"
        ]
      },
      "Action": [
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:RevokeGrant"
      ],
      "Resource": "*",
      "Condition": {
        "Bool": {
          "kms:GrantIsForAWSResource": "true"
        }
      }
    }
  ]
}
```

Edited by: swatic on Aug 20, 2019 10:49 AM
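The StringNotLike matching in the deny statement can be checked locally with shell-style pattern matching. This is only an illustration of the string matching, not real IAM evaluation, and the Developer role ID below is made up; by this reading, a Developer session matches none of the listed patterns, so the deny should fire for it:

```python
from fnmatch import fnmatchcase

# Patterns from the StringNotLike condition in the deny statement above.
allowed_patterns = [
    "12345",
    "AROAADMINROLE", "AROAADMINROLE:*",
    "AIDALAMBDAROLE:*", "AIDALAMBDAROLE",
    "AIDAMYIAMUSER:*", "AIDAMYIAMUSER",
]

def deny_applies(userid: str) -> bool:
    """StringNotLike: the deny fires when the userid matches NONE of the patterns."""
    return not any(fnmatchcase(userid, p) for p in allowed_patterns)

# A session under the admin role is exempt from the deny...
assert not deny_applies("AROAADMINROLE:alice")
# ...while a session under a hypothetical Developer role (made-up role ID)
# matches no pattern, so by this reading the deny should block it.
assert deny_applies("AROADEVELOPERROLE:bob")
```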
1
answers
0
votes
5
views
asked 3 years ago

Documentation on encryption context in contradiction with the behaviour?

The AWS documentation on encryption context ( <https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context> ) states that: _"When an encryption context is provided in an encryption request, it is cryptographically bound to the ciphertext such that the same encryption context is required to decrypt (or decrypt and re-encrypt) the data. If the encryption context provided in the decryption request is not an exact, case-sensitive match, the decrypt request fails."_ In our case we have an SQS queue with encryption enabled and a Lambda function triggered by messages arriving on the queue. In CloudTrail events for GenerateDataKey the encryption context contains the key "aws:sqs:arn" as expected: ``` "encryptionContext": { "aws:sqs:arn": "arn:aws:sqs:eu-west-1:accountnr:queuename" }, ``` However, Decrypt events in CloudTrail contain a very different encryption context: ``` "encryptionContext": { "aws:lambda:FunctionArn": "arn:aws:lambda:eu-west-1:accountnr:function:functionname" } ``` So the contexts do not contain the same key in both cases, even though you would expect so based on the sentence quoted above from the AWS documentation. You would also expect decryption to have failed due to the differing context, but it seems to work just fine. Presumably I cannot use the same KMS key policy condition for readers and writers in this case to verify that "aws:sqs:arn" contains a specific value, since the Decrypt context does not contain such a key (?). Did I misunderstand the documentation sentence, or why does it seem to work differently?
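The "cryptographically bound" phrase from the quoted documentation can be illustrated with a standard-library toy: an HMAC tag computed over the ciphertext plus a canonicalized context, so that verification with any other context fails. Real KMS binds the context as additional authenticated data in AES-GCM; everything below (key, ciphertext, function names) is an illustrative assumption:

```python
import hashlib
import hmac
import json

def seal(key: bytes, ciphertext: bytes, context: dict) -> bytes:
    """Bind an encryption context to a ciphertext: MAC over both together."""
    canonical = json.dumps(context, sort_keys=True).encode()
    return hmac.new(key, ciphertext + canonical, hashlib.sha256).digest()

def check(key: bytes, ciphertext: bytes, context: dict, tag: bytes) -> bool:
    """Succeeds only with the exact, case-sensitive original context."""
    return hmac.compare_digest(seal(key, ciphertext, context), tag)

key = b"0" * 32
ct = b"opaque-ciphertext"
sqs_ctx = {"aws:sqs:arn": "arn:aws:sqs:eu-west-1:accountnr:queuename"}
other_ctx = {"aws:lambda:FunctionArn": "arn:aws:lambda:eu-west-1:accountnr:function:functionname"}

tag = seal(key, ct, sqs_ctx)
assert check(key, ct, sqs_ctx, tag)        # same context: verification passes
assert not check(key, ct, other_ctx, tag)  # different context: verification fails
```

Under this model a decrypt under a different context must fail, which is exactly why the differing CloudTrail contexts in the question look surprising.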
2
answers
0
votes
1
views
asked 3 years ago