Questions tagged with AWS Command Line Interface


What is the suggested method to track a user's actions after assuming a cross-account role?

I need to be able to guarantee that a user's actions can always be traced back to their account, regardless of which role they have assumed in another account. What methods are required to guarantee this for the following cases?

* Assuming a cross-account role in the console
* Assuming a cross-account role via the CLI

I have run tests and can see that when a user assumes a role in the CLI, temporary credentials are generated. These credentials appear in CloudTrail logs under `responseElements.credentials` for the `assumeRole` event. All future events generated by actions taken in the session include the `accessKeyId`, so I can track all of the actions in this case.

Using the web console, the same `assumeRole` event is generated, also including an `accessKeyId`. Unfortunately, subsequent actions taken by the user don't include the same `accessKeyId`. At some point a different access key is generated and the session makes use of this new key. I can't find any way to link the two, and therefore am not sure how to attribute actions taken by the role to the user that assumed it.

I can see that when assuming a role in the console, the user can't change the `sts:sessionName`, and this is always set to their username. Is this the suggested method for tracking actions? While this seems appropriate for roles within the same account, usernames are not globally unique, so I am concerned about using this for cross-account attribution. It seems that placing restrictions on the value of `sts:sourceIdentity` is not supported when assuming roles in the web console.
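For the CLI path, one option consistent with what's described here is to stamp a source identity on the session at assume-role time; CloudTrail then records it for every action taken with the session credentials. A minimal boto3 sketch, with a hypothetical role ARN and identity value:

```python
import boto3

sts = boto3.client("sts")

# SourceIdentity persists across role chaining and appears in CloudTrail,
# unlike the session name, which the caller can often choose freely.
# Note: the role's trust policy must also allow sts:SetSourceIdentity.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/ReadOnlyAnalyst",  # hypothetical
    RoleSessionName="jdoe-session",
    SourceIdentity="jdoe@example.com",  # hypothetical identity to attribute
)

creds = resp["Credentials"]
print(creds["AccessKeyId"])  # shows up in subsequent CloudTrail events
```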
0 answers · 1 vote · 33 views · asked 5 days ago

ClientError: An error occurred (UnknownOperationException) when calling the CreateHyperParameterTuningJob operation: The requested operation is not supported in the called region.

Hi Dears, I am building an ML model using the DeepAR algorithm. I hit this error when I reached the hyperparameter tuning step:

```
ClientError: An error occurred (UnknownOperationException) when calling the CreateHyperParameterTuningJob operation: The requested operation is not supported in the called region.
```

Code:

```python
from sagemaker.tuner import (
    IntegerParameter,
    CategoricalParameter,
    ContinuousParameter,
    HyperparameterTuner,
)
from sagemaker import image_uris

container = image_uris.retrieve(region='af-south-1', framework="forecasting-deepar")

deepar = sagemaker.estimator.Estimator(
    container,
    role,
    instance_count=1,
    instance_type="ml.m5.2xlarge",
    use_spot_instances=True,  # use spot instances
    max_run=1800,             # max training time in seconds
    max_wait=1800,            # seconds to wait for spot instance
    output_path="s3://{}/{}".format(bucket, output_path),
    sagemaker_session=sess,
)

freq = "D"
context_length = 300

deepar.set_hyperparameters(
    time_freq=freq,
    context_length=str(context_length),
    prediction_length=str(prediction_length)
)

hyperparameter_ranges = {
    "mini_batch_size": IntegerParameter(100, 400),
    "epochs": IntegerParameter(200, 400),
    "num_cells": IntegerParameter(30, 100),
    "likelihood": CategoricalParameter(["negative-binomial", "student-T"]),
    "learning_rate": ContinuousParameter(0.0001, 0.1),
}

objective_metric_name = "test:RMSE"

tuner = HyperparameterTuner(
    deepar,
    objective_metric_name,
    hyperparameter_ranges,
    max_jobs=10,
    strategy="Bayesian",
    objective_type="Minimize",
    max_parallel_jobs=10,
    early_stopping_type="Auto",
)

s3_input_train = sagemaker.inputs.TrainingInput(
    s3_data="s3://{}/{}/train/".format(bucket, prefix), content_type="json"
)
s3_input_test = sagemaker.inputs.TrainingInput(
    s3_data="s3://{}/{}/test/".format(bucket, prefix), content_type="json"
)

tuner.fit({"train": s3_input_train, "test": s3_input_test}, include_cls_metadata=False)
tuner.wait()
```

Can you please help in solving the error? I have to do this in the af-south-1 region. Thanks, Basem
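As a quick probe of whether the tuning APIs are available in the region at all, a hedged sketch: if the service doesn't support hyperparameter tuning in af-south-1, a harmless List call should fail much like the Create call does.

```python
import boto3
from botocore.exceptions import ClientError

# Probe the hyperparameter tuning API in the target region; an
# UnknownOperationException here would confirm the feature isn't
# offered in that region rather than anything being wrong in the code.
sm = boto3.client("sagemaker", region_name="af-south-1")
try:
    sm.list_hyper_parameter_tuning_jobs(MaxResults=1)
    print("Hyperparameter tuning API reachable in af-south-1")
except ClientError as err:
    print("Not supported here:", err.response["Error"]["Code"])
```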
1 answer · 0 votes · 6 views · asked 5 days ago

Possible CLI bug: DynamoDB endpoint URL does not work locally with active and correct credentials set

**Summary**: DynamoDB commands from the CLI do not work when real credentials are set up. The `--endpoint-url` flag should work around this and recognize that localhost endpoints can be hit with no credentials, given the default setup of the AWS DynamoDB Docker image. Output of the command after setting credentials:

`An error occurred (ResourceNotFoundException) when calling the DescribeTable operation: Cannot do operations on a non-existent table`

Is there a fix or workaround for this?

**System**: macOS Monterey version 12.0.1, MacBook Pro - M1 - 2020

```
$ aws --version
aws-cli/2.4.11 Python/3.9.10 Darwin/21.1.0 source/arm64 prompt/off
```

**To reproduce**:

Start from a terminal that does NOT have AWS credentials set up via environment variables or anything else.

Start up a local DynamoDB instance on Docker:

```
docker pull amazon/dynamodb-local
docker run -p 8000:8000 --name=ddblocal -d amazon/dynamodb-local
```

Create a table:

```
aws dynamodb create-table --attribute-definitions "[{ \"AttributeName\": \"key\", \"AttributeType\": \"S\"}, { \"AttributeName\": \"valueA\", \"AttributeType\": \"S\"}]" --table-name test_table --key-schema "[{\"AttributeName\": \"key\", \"KeyType\": \"HASH\"}, {\"AttributeName\": \"valueA\", \"KeyType\": \"RANGE\"}]" --endpoint-url "http://localhost:8000" --provisioned-throughput "{\"ReadCapacityUnits\": 100, \"WriteCapacityUnits\": 100}" --region local
```

Query the table (to prove it works):

```
aws dynamodb describe-table --table-name test_table --region local --endpoint-url "http://localhost:8000"
```

Set your real AWS credentials:

```
export AWS_ACCESS_KEY_ID="<REAL KEY ID HERE>"
export AWS_SECRET_ACCESS_KEY="<REAL SECRET KEY HERE>"
export AWS_SESSION_TOKEN="<REAL TOKEN HERE>"
```

Query the table again (this one fails for me; see output above):

```
aws dynamodb describe-table --table-name test_table --region local --endpoint-url "http://localhost:8000"
```
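One detail that may explain this, offered as a hedge: DynamoDB Local, when started without its `-sharedDb` option, keeps a separate database per access key ID and region, so tables created before exporting real credentials live in a different namespace afterwards. A boto3 sketch that pins fixed dummy credentials so the namespace never changes:

```python
import boto3

# DynamoDB Local (without -sharedDb) namespaces data by access key + region.
# Pinning dummy credentials on the client keeps exported real credentials
# from silently switching to a different (empty) local database.
ddb = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="local",
    aws_access_key_id="local",        # any fixed dummy value
    aws_secret_access_key="local",
)
print(ddb.describe_table(TableName="test_table")["Table"]["TableStatus"])
```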
1 answer · 0 votes · 16 views · asked 9 days ago

Help with copying an S3 bucket to another location: missing objects

Hello All, Today I was trying to copy a directory from one location to another, using the following command to execute my copy:

```
aws s3 cp s3://bucketname/directory/ s3://bucketname/directory/subdirectory --recursive
```

The copy took overnight to complete because it was 16.4 TB in size, but when I got into work the next day it was done, or at least it had completed. But when I compare the two locations I get the following:

```
bucketname/directory/              103,690 objects - 16.4TB
bucketname/directory/subdirectory/ 103,650 objects - 16.4TB
```

So there is a 40-object difference between the source location and the destination location. I tried using the following command to copy over the missing files:

```
aws s3 sync s3://bucketname/directory/ s3://bucket/directory/subdirectory/
```

which returned no results. It sat for a while, maybe two minutes or so, and then just returned to the next line. I am at my wits' end trying to copy over the missing objects, and my boss thinks that I lost the data, so I need to figure out a way to get the difference between the source and destination copied over. If anyone could help me with this, I would REALLY appreciate it. I am a newbie with AWS, so I may not understand everything that I am told, but I will try anything to get this resolved. I am running all the commands through an EC2 instance that I SSH into, then using AWS CLI commands. Thanks to anyone who might be able to help me. Take care, -Tired & Frustrated :)
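One way to find the 40 missing objects is to list both prefixes and diff the key sets. A boto3 sketch using the bucket and prefixes from the question (note the destination is nested inside the source, so it must be excluded from the source listing):

```python
import boto3

s3 = boto3.client("s3")

def list_keys(bucket, prefix):
    """Return every key under a prefix, relative to that prefix."""
    keys = set()
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            keys.add(obj["Key"][len(prefix):])
    return keys

src = {k for k in list_keys("bucketname", "directory/")
       if not k.startswith("subdirectory/")}   # exclude the copy itself
dst = list_keys("bucketname", "directory/subdirectory/")

for relative_key in sorted(src - dst):
    print("missing:", relative_key)
```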
1 answer · 0 votes · 4 views · asked 10 days ago

Manual remediation config works, automatic remediation config fails

SOLVED! There was a syntax problem in the runbook that is not detected when remediating manually. In the content of the remediation doc (which was created using CloudFormation), I used this parameter declaration:

```
parameters:
  InstanceID:
    type: 'AWS::EC2::Instance::Id'
```

It should be:

```
parameters:
  InstanceID:
    type: String
```

=====================================================================================

I have a remediation runbook that creates CloudWatch alarms for the metric `CPUUtilization` for any EC2 instances that have none defined. The runbook is configured as a remediation document for a Config rule that checks for the absence of such alarms. When I configure the remediation on the rule as manual, all goes well. When I configure the remediation with the exact same runbook as automatic, the remediation fails with this error (snippet):

```
"StepDetails": [
    {
        "Name": "Initialization",
        "State": "FAILED",
        "ErrorMessage": "Invalid Automation document content for Create-CloudWatch-Alarm-EC2-CPUUtilization",
        "StartTime": "2022-05-09T17:30:02.361000+02:00",
        "StopTime": "2022-05-09T17:30:02.361000+02:00"
    }
],
```

This is the remediation configuration for the automatic remediation. The only difference from the manual remediation configuration is, obviously, that there the value for the key "Automatic" is "false":

```
{
    "RemediationConfigurations": [
        {
            "ConfigRuleName": "rul-ensure-cloudwatch-alarm-ec2-cpuutilization-exists",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "Create-CloudWatch-Alarm-EC2-CPUUtilization",
            "TargetVersion": "$DEFAULT",
            "Parameters": {
                "AutomationAssumeRole": {
                    "StaticValue": {
                        "Values": [
                            "arn:aws:iam::123456789012:role/rol_ssm_full_access_to_cloudwatch"
                        ]
                    }
                },
                "ComparisonOperator": {
                    "StaticValue": {
                        "Values": ["GreaterThanThreshold"]
                    }
                },
                "InstanceID": {
                    "ResourceValue": {
                        "Value": "RESOURCE_ID"
                    }
                },
                "Period": {
                    "StaticValue": {
                        "Values": ["300"]
                    }
                },
                "Statistic": {
                    "StaticValue": {
                        "Values": ["Average"]
                    }
                },
                "Threshold": {
                    "StaticValue": {
                        "Values": ["10"]
                    }
                }
            },
            "Automatic": true,
            "MaximumAutomaticAttempts": 5,
            "RetryAttemptSeconds": 60,
            "Arn": "arn:aws:config:eu-west-2:123456789012:remediation-configuration/rul-ensure-cloudwatch-alarm-ec2-cpuutilization-exists/5e3a81a7-fc55-4cbe-ad75-6b27be8da79a"
        }
    ]
}
```

The error message is rather cryptic, and I can't find documentation on possible root causes. Any suggestions would be very welcome! Thanks!
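For anyone hitting the same cryptic error, a boto3 sketch for pulling the exact runbook content that Config executes, so the declared parameter types can be checked against plain SSM types (String, StringList, ...) as described in the fix above:

```python
import boto3

# Fetch the default version of the Automation document and inspect its
# parameter block; types like 'AWS::EC2::Instance::Id' are what the
# automatic remediation path rejected in this question.
ssm = boto3.client("ssm", region_name="eu-west-2")
doc = ssm.get_document(Name="Create-CloudWatch-Alarm-EC2-CPUUtilization")
print(doc["Content"])
```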
1 answer · 0 votes · 5 views · asked 12 days ago

S3: create presigned multipart upload URL using the API

I'm trying to use the AWS S3 API to perform a multipart upload with signed URLs. This will allow us to send a request to the server (which is configured with the correct credentials), and then return a pre-signed URL to the client (which will not have credentials configured). The client should then be able to complete the request, computing subsequent signatures as appropriate.

This appears to be possible as per the AWS S3 documentation on [Signature Calculations for the Authorization Header: Transferring Payload in Multiple Chunks (Chunked Upload) (AWS Signature Version 4)](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html):

> As described in the Overview, when authenticating requests using the Authorization header, you have an option of uploading the payload in chunks. You can send data in fixed size or variable size chunks. This section describes the signature calculation process in chunked upload, how you create the chunk body, and how the delayed signing works where you first upload the chunk, and send its ...

The main caveat here is that it seems to need the `Content-Length` up front, but we won't know the value of that as we'll be streaming the value. Is there a way for us to use signed URLs to do a multipart upload without knowing the length of the blob to be uploaded beforehand?
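One approach that sidesteps the chunked-signing Content-Length problem entirely, sketched with boto3: presign each UploadPart call individually, so the client streams parts of whatever size it chooses without the total size being known up front (bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "big-object.bin"   # placeholders

# The server starts the upload and hands out per-part URLs.
upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
upload_id = upload["UploadId"]

# One URL per part number; only the final part may be smaller than 5 MiB.
url = s3.generate_presigned_url(
    "upload_part",
    Params={"Bucket": bucket, "Key": key,
            "UploadId": upload_id, "PartNumber": 1},
    ExpiresIn=3600,
)
print(url)

# After the client PUTs every part, finish with the ETags it collected:
# s3.complete_multipart_upload(
#     Bucket=bucket, Key=key, UploadId=upload_id,
#     MultipartUpload={"Parts": [{"ETag": etag, "PartNumber": 1}]})
```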
0 answers · 0 votes · 0 views · asked 15 days ago

Error when running the vsock_sample AWS Nitro tutorial

I have configured and built the enclave instance as per https://docs.aws.amazon.com/enclaves/latest/user/enclaves-user.pdf. But when I try to run it, it throws the following error:

```
$ nitro-cli run-enclave --eif-path vsock_sample.eif --cpu-count 2 --enclave-cid 6 --memory 512 --debug-mode
Start allocating memory...
Started enclave with enclave-cid: 6, memory: 512 MiB, cpu-ids: [1, 5]
[ E36 ] Enclave boot failure. Such error appears when attempting to receive the `ready` signal from a freshly booted enclave. It arises in several contexts, for instance, when the enclave is booted from an invalid EIF file and the enclave process immediately exits, failing to submit the `ready` signal. In this case, the error backtrace provides detailed information on what specifically failed during the enclave boot process.

For more details, please visit https://docs.aws.amazon.com/enclaves/latest/user/cli-errors.html#E36

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2022-04-27T03:41:39.495653281+00:00.log"

Failed connections: 1
[ E39 ] Enclave process connection failure. Such error appears when the enclave manager fails to connect to at least one enclave process for retrieving the description information.

For more details, please visit https://docs.aws.amazon.com/enclaves/latest/user/cli-errors.html#E39

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2022-04-27T03:41:39.495889864+00:00.log"

Action: Run Enclave
Subactions:
    Failed to handle all enclave process replies
    Failed to connect to 1 enclave processes
Root error file: src/enclave_proc_comm.rs
Root error line: 349
Build commit: not available
```

How can I fix this error?
0 answers · 0 votes · 1 view · asked 24 days ago

"aws cli cp" command gives inconsistent results

I am using the following command to download files from S3 to my local server:

```
aws s3 cp s3://bucket-name/dir-name/ . --recursive --debug
```

Sometimes the files download successfully. If I run the same command a few times, sometimes I get an error. With the `--debug` flag, this is the output:

```
GET
/
encoding-type=url&list-type=2&prefix=2022-04-18%2F
host:glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20220423T041824Z

host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2022-04-22 22:18:24,445 - MainThread - botocore.auth - DEBUG - StringToSign:
AWS4-HMAC-SHA256
20220423T041824Z
20220423/ca-central-1/s3/aws4_request
184d4f7de08e4ea90234c5717ce78cfd7c31c01cfe854a3d11fa94381f9ab1c3
2022-04-22 22:18:24,445 - MainThread - botocore.auth - DEBUG - Signature: 87d083e547678eebd416afb6691a541b05fa930456c5bb57bb3f9a650cf8c276
2022-04-22 22:18:24,445 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=GET, url=https://glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com/?list-type=2&prefix=2022-04-18%2F&encoding-type=url, headers={'User-Agent': b'aws-cli/2.5.4 Python/3.9.11 Linux/3.10.0-1160.45.1.el7.x86_64 exe/x86_64.rhel.7 prompt/off command/s3.cp', 'X-Amz-Date': b'20220423T041824Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=AKIAVLZENK7ICR7W3PXG/20220423/ca-central-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=87d083e547678eebd416afb6691a541b05fa930456c5bb57bb3f9a650cf8c276'}>
2022-04-22 22:18:24,445 - MainThread - botocore.httpsession - DEBUG - Certificate path: /usr/local/aws-cli/v2/2.5.4/dist/awscli/botocore/cacert.pem
2022-04-22 22:18:24,446 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (3): glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com:443
2022-04-22 22:18:24,448 - MainThread - botocore.endpoint - DEBUG - Exception received when sending HTTP request.
Traceback (most recent call last):
  File "urllib3/connection.py", line 174, in _new_conn
  File "urllib3/util/connection.py", line 95, in create_connection
  File "urllib3/util/connection.py", line 85, in create_connection
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "awscli/botocore/httpsession.py", line 358, in send
  File "urllib3/connectionpool.py", line 785, in urlopen
  File "urllib3/util/retry.py", line 525, in increment
  File "urllib3/packages/six.py", line 770, in reraise
  File "urllib3/connectionpool.py", line 703, in urlopen
  File "urllib3/connectionpool.py", line 386, in _make_request
  File "urllib3/connectionpool.py", line 1040, in _validate_conn
  File "urllib3/connection.py", line 358, in connect
  File "urllib3/connection.py", line 186, in _new_conn
urllib3.exceptions.NewConnectionError: <botocore.awsrequest.AWSHTTPSConnection object at 0x7fad81a9d7c0>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "awscli/botocore/endpoint.py", line 199, in _do_get_response
  File "awscli/botocore/endpoint.py", line 271, in _send
  File "awscli/botocore/httpsession.py", line 387, in send
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com/?list-type=2&prefix=2022-04-18%2F&encoding-type=url"
2022-04-22 22:18:24,448 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.ListObjectsV2: calling handler <bound method RetryHandler.needs_retry of <botocore.retries.standard.RetryHandler object at 0x7fad81af65e0>>
2022-04-22 22:18:24,448 - MainThread - botocore.retries.standard - DEBUG - Max attempts of 3 reached.
2022-04-22 22:18:24,448 - MainThread - botocore.retries.standard - DEBUG - Not retrying request.
2022-04-22 22:18:24,449 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.ListObjectsV2: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7fad81af6670>>
2022-04-22 22:18:24,449 - MainThread - awscli.customizations.s3.results - DEBUG - Exception caught during command execution: Could not connect to the endpoint URL: "https://glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com/?list-type=2&prefix=2022-04-18%2F&encoding-type=url"
Traceback (most recent call last):
  File "urllib3/connection.py", line 174, in _new_conn
  File "urllib3/util/connection.py", line 95, in create_connection
  File "urllib3/util/connection.py", line 85, in create_connection
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "awscli/botocore/httpsession.py", line 358, in send
  File "urllib3/connectionpool.py", line 785, in urlopen
  File "urllib3/util/retry.py", line 525, in increment
  File "urllib3/packages/six.py", line 770, in reraise
  File "urllib3/connectionpool.py", line 703, in urlopen
  File "urllib3/connectionpool.py", line 386, in _make_request
  File "urllib3/connectionpool.py", line 1040, in _validate_conn
  File "urllib3/connection.py", line 358, in connect
  File "urllib3/connection.py", line 186, in _new_conn
urllib3.exceptions.NewConnectionError: <botocore.awsrequest.AWSHTTPSConnection object at 0x7fad81a9d7c0>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "awscli/customizations/s3/s3handler.py", line 149, in call
  File "awscli/customizations/s3/fileinfobuilder.py", line 31, in call
  File "awscli/customizations/s3/filegenerator.py", line 142, in call
  File "awscli/customizations/s3/filegenerator.py", line 322, in list_objects
  File "awscli/customizations/s3/utils.py", line 412, in list_objects
  File "awscli/botocore/paginate.py", line 252, in __iter__
  File "awscli/botocore/paginate.py", line 329, in _make_request
  File "awscli/botocore/client.py", line 304, in _api_call
  File "awscli/botocore/client.py", line 620, in _make_api_call
  File "awscli/botocore/client.py", line 640, in _make_request
  File "awscli/botocore/endpoint.py", line 101, in make_request
  File "awscli/botocore/endpoint.py", line 155, in _send_request
  File "awscli/botocore/endpoint.py", line 199, in _do_get_response
  File "awscli/botocore/endpoint.py", line 271, in _send
  File "awscli/botocore/httpsession.py", line 387, in send
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com/?list-type=2&prefix=2022-04-18%2F&encoding-type=url"
fatal error: Could not connect to the endpoint URL: "https://glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com/?list-type=2&prefix=2022-04-18%2F&encoding-type=url"
2022-04-22 22:18:24,450 - Thread-1 - awscli.customizations.s3.results - DEBUG - Shutdown request received in result processing thread, shutting down result thread.
```
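Since the traceback dies at the TCP connect stage (`[Errno 111] Connection refused`), the intermittency is likely between this host and the endpoint (DNS, proxy, firewall) rather than inside the CLI. A small probe sketch against the hostname taken from the debug output:

```python
import socket
import time

# Repeatedly open a raw TCP connection to the S3 endpoint; if some attempts
# are refused here too, the problem reproduces without the AWS CLI at all.
host = "glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com"
for attempt in range(20):
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(attempt, "connected")
    except OSError as err:
        print(attempt, "failed:", err)
    time.sleep(1)
```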
1 answer · 0 votes · 3 views · asked a month ago

Can't see EBS Snapshot tags from other accounts

Hi, I have private snapshots in one account (source) that I have shared with another account (target). I am able to see the snapshots themselves from the target account, but the tags are not available, neither in the console nor via the CLI. This makes it impossible to filter for a desired snapshot from the target account. For background, the user in the target account has the following policy in effect:

```
"Effect": "Allow",
"Action": "ec2:*",
"Resource": "*"
```

Here's an example of what I'm seeing. From the source account:

```
$ aws --region us-east-2 ec2 describe-snapshots --snapshot-ids snap-XXXXX
{
    "Snapshots": [
        {
            "Description": "snapshot for testing",
            "VolumeSize": 50,
            "Tags": [
                {
                    "Value": "test-snapshot",
                    "Key": "Name"
                }
            ],
            "Encrypted": true,
            "VolumeId": "vol-XXXXX",
            "State": "completed",
            "KmsKeyId": "arn:aws:kms:us-east-2:XXXXX:key/mrk-XXXXX",
            "StartTime": "2022-04-19T18:29:36.069Z",
            "Progress": "100%",
            "OwnerId": "XXXXX",
            "SnapshotId": "snap-XXXXX"
        }
    ]
}
```

But from the target account:

```
$ aws --region us-east-2 ec2 describe-snapshots --owner-ids 012345678900 --snapshot-ids snap-11111111111111111
{
    "Snapshots": [
        {
            "Description": "snapshot for testing",
            "VolumeSize": 50,
            "Encrypted": true,
            "VolumeId": "vol-22222222222222222",
            "State": "completed",
            "KmsKeyId": "arn:aws:kms:us-east-2:012345678900:key/mrk-00000000000000000000000000000000",
            "StartTime": "2022-04-19T18:29:36.069Z",
            "Progress": "100%",
            "OwnerId": "012345678900",
            "SnapshotId": "snap-11111111111111111"
        }
    ]
}
```

Any ideas on what's going on here? Cheers!
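Tags on EC2 resources are visible only to the account that created them, so shared snapshots losing their tags across accounts is expected behavior rather than a policy problem. A hedged workaround sketch, assuming you can also run with source-account credentials (the profile name is hypothetical): read the tags where they are visible and index them by snapshot ID for use from the target side.

```python
import boto3

# Fetch tags with source-account credentials, where they are visible,
# and build a lookup keyed by snapshot id.
source = boto3.Session(profile_name="source-account")  # hypothetical profile
ec2 = source.client("ec2", region_name="us-east-2")

pages = ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"])
tags_by_snapshot = {
    snap["SnapshotId"]: {t["Key"]: t["Value"] for t in snap.get("Tags", [])}
    for page in pages
    for snap in page["Snapshots"]
}
print(tags_by_snapshot.get("snap-11111111111111111"))
```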
1 answer · 0 votes · 4 views · asked a month ago

Using aws s3api put-object --sse-customer-key-md5 fails with CLI

I'm trying to use `aws s3api put-object`/`get-object` with server-side encryption with customer keys. I'm using PowerShell, but I don't believe that is the source of my issue. On the surface, `--sse-customer-key-md5` appears to be a pretty simple input (https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html):

> Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

`put-object` works when I don't use `--sse-customer-key-md5`:

```
> aws s3api put-object `
    --bucket abc `
    --sse-customer-algorithm AES256 `
    --sse-customer-key "testaes256testaes256testaes25612" `
    --region us-east-1 `
    --key test.pdf `
    --body C:\test.pdf

{
    "SSECustomerKeyMD5": "ezatpv/Yg0KkjX+5ZcsxdQ==",
    "SSECustomerAlgorithm": "AES256",
    "ETag": "\"0d44c3df058c4e190bd7b2e6d227be73\""
}
```

I agree with the `SSECustomerKeyMD5` result:

```
> $key = "testaes256testaes256testaes25612"
> $md5 = new-object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider
> $utf8 = new-object -TypeName System.Text.UTF8Encoding
> $hash = $md5.ComputeHash($utf8.GetBytes($key))
> $EncodedString = [Convert]::ToBase64String($hash)
> Write-Host "Base64 Encoded String: " $EncodedString
Base64 Encoded String:  ezatpv/Yg0KkjX+5ZcsxdQ==
```

Now I resubmit my put request with the `--sse-customer-key-md5` option. Before anyone jumps on the base64 encoding, I've tried submitting the MD5 hash in base64, hexadecimal (with and without delimiters), JSON of the MD5 hash result, and upper-case and lower-case versions of the aforementioned. None work. Has anyone gotten this to work and, if so, what format did you use?

```
> aws s3api put-object `
    --bucket abc `
    --sse-customer-algorithm AES256 `
    --sse-customer-key "testaes256testaes256testaes25612" `
    --sse-customer-key-md5 "ezatpv/Yg0KkjX+5ZcsxdQ==" `
    --region us-east-1 `
    --key test.pdf `
    --body C:\test.pdf

aws : At line:1 char:1
+ aws s3api put-object `
+ ~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError

An error occurred (InvalidArgument) when calling the PutObject operation: The calculated MD5 hash of the key did not match the hash that was provided.
```

Thanks
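For comparison, the SDKs derive this header themselves. A boto3 sketch of the same put (botocore's `sse_md5` handler base64-encodes the key and computes the MD5 header automatically, so no hand-rolled digest is involved); bucket and path are taken from the question:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Supply only the raw key; botocore attaches both the encoded key and its
# MD5 digest to the request headers on our behalf.
with open(r"C:\test.pdf", "rb") as body:
    resp = s3.put_object(
        Bucket="abc",
        Key="test.pdf",
        Body=body,
        SSECustomerAlgorithm="AES256",
        SSECustomerKey="testaes256testaes256testaes25612",
    )
print(resp["SSECustomerKeyMD5"])
```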
2 answers · 0 votes · 3 views · asked a month ago

AWS CLI Updating Network Firewall Rules

I've been trying to determine a method to streamline/automate the updating of AWS Network Firewall rules. The AWS CLI looked promising, but I've consistently seen failures when trying to push a new ruleset. For example, running the command:

```
aws network-firewall describe-rule-group --rule-group-arn <arn>
```

returns the JSON as expected, with the content as a flat string:

```
"RuleGroup": {
    "RulesSource": {
        "RulesString": "pass http $HOME_NET any -> $EXTERNAL_NET 80 (http.host; dotprefix; content:\".example.com\"; endswith; msg:\"Allowed HTTP domain\"; sid:1; rev:1;)\npass tls $HOME_NET any -> $EXTERNAL_NET 443 (tls.sni; content:\"example.com\"; startswith; nocase; endswith; msg:\"matching TLS allowlisted FQDNs\"; sid:2; rev:1;)\npass http $HOME_NET any -> $EXTERNAL_NET 80 (http.host; dotprefix; content:\".google.com\"; endswith; msg:\"Allowed HTTP domain\"; sid:3; rev:1;)\npass tls $HOME_NET any -> $EXTERNAL_NET 443 (tls.sni; content:\"www.google.com\"; startswith; nocase; endswith; msg:\"matching TLS allowlisted FQDNs\"; sid:4; rev:1;)\npass http $HOME_NET any -> $EXTERNAL_NET 80 (http.host; dotprefix; content:\".ubuntu.com\"; endswith; msg:\"Allowed HTTP domain\"; sid:5; rev:1;)\npass tls $HOME_NET any -> $EXTERNAL_NET 443 (tls.sni; content:\"ipinfo.io\"; startswith; nocase; endswith; msg:\"matching TLS allowlisted FQDNs\"; sid:6; rev:1;)\npass tcp $HOME_NET any <> $EXTERNAL_NET 80 (flow:not_established; sid:899998; rev:1;)\npass tcp $HOME_NET any <> $EXTERNAL_NET 443 (flow:not_established; sid:899999; rev:1;)"
```

When trying to update the flat string with a new string including more entries, though, I receive an error:

```
aws network-firewall update-rule-group --cli-input-yaml file://example.yaml
```

Error received:

```
An error occurred (InvalidRequestException) when calling the UpdateRuleGroup operation: parameter is invalid
```

I've tried the JSON/YAML/CLI methods and I encounter the issue with any of them. I've also tried using the `--rule-group` vs `--rules` options to update. I suspected there was an issue with string formatting, but I've failed to find a resolution. Updating the rules via the console works without issue. Could anyone provide a pointer to where I'm going wrong, or even a working method they are using? I'm not too bothered whether it is via the CLI, an SDK, etc., as I may revert to Python since it is the language I know best.
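One thing worth ruling out, offered as a guess from the API shape: `UpdateRuleGroup` requires the current `UpdateToken` returned by `DescribeRuleGroup` (optimistic locking), and a missing or stale token is an easy way to get a rejected request. A boto3 sketch with a placeholder ARN and an illustrative extra rule:

```python
import boto3

nfw = boto3.client("network-firewall")
arn = "arn:aws:network-firewall:eu-west-1:123456789012:stateful-rulegroup/example"  # placeholder

desc = nfw.describe_rule_group(RuleGroupArn=arn)
new_rules = desc["RuleGroup"]["RulesSource"]["RulesString"] + (
    '\npass tls $HOME_NET any -> $EXTERNAL_NET 443 '
    '(tls.sni; content:"new.example.org"; startswith; nocase; endswith; '
    'msg:"added rule"; sid:7; rev:1;)'  # hypothetical new entry
)

# Reuse the token from the describe call; the service rejects updates
# whose token doesn't match the rule group's current version.
nfw.update_rule_group(
    UpdateToken=desc["UpdateToken"],
    RuleGroupArn=arn,
    RuleGroup={"RulesSource": {"RulesString": new_rules}},
)
```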
0 answers · 0 votes · 3 views · asked a month ago

AWS IoT Device Client setup not working

Hello, I have been trying to set up an RPi using the tutorials [here](https://docs.aws.amazon.com/iot/latest/developerguide/iot-dc-install-configure.html), and when I run the command `./aws-iot-device-client --config-file ~/dc-configs/dc-testconn-config.json` I get errors on the terminal saying that the AWS CRT SDK is not found, with a fatal error like this:

```
2022-04-11T07:38:13.850Z [WARN]  {Config.cpp}: Key {template-name} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.850Z [WARN]  {Config.cpp}: Key {csr-file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.850Z [WARN]  {Config.cpp}: Key {device-key} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.850Z [WARN]  {Config.cpp}: Key {file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.850Z [WARN]  {Config.cpp}: Key {publish-file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.851Z [WARN]  {Config.cpp}: Key {subscribe-file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.851Z [WARN]  {Config.cpp}: Shadow Name {shadow-name} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.851Z [WARN]  {Config.cpp}: Input file {shadow-input-file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.851Z [WARN]  {Config.cpp}: Output file {shadow-output-file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.851Z [INFO]  {Config.cpp}: Successfully fetched JSON config file: {
  "endpoint": "a32vqcn021ykiy-ats.iot.ap-south-1.amazonaws.com",
  "cert": "~/certs/testconn/device.pem.crt",
  "key": "~/certs/testconn/private.pem.key",
  "root-ca": "~/certs/AmazonRootCA1.pem",
  "thing-name": "Triton_Dp_Office",
  "logging": {
    "enable-sdk-logging": true,
    "level": "DEBUG",
    "type": "STDOUT",
    "file": ""
  },
  "jobs": {
    "enabled": false,
    "handler-directory": ""
  },
  "tunneling": {
    "enabled": false
  },
  "device-defender": {
    "enabled": false,
    "interval": 300
  },
  "fleet-provisioning": {
    "enabled": false,
    "template-name": "",
    "template-parameters": "",
    "csr-file": "",
    "device-key": ""
  },
  "samples": {
    "pub-sub": {
      "enabled": true,
      "publish-topic": "test/dc/pubtopic",
      "publish-file": "",
      "subscribe-topic": "test/dc/subtopic",
      "subscribe-file": ""
    }
  },
  "config-shadow": {
    "enabled": false
  },
  "sample-shadow": {
    "enabled": false,
    "shadow-name": "",
    "shadow-input-file": "",
    "shadow-output-file": ""
  }
}
2022-04-11T07:38:13.851Z [DEBUG] {Config.cpp}: Did not find a runtime configuration file, assuming Fleet Provisioning has not run for this device
2022-04-11T07:38:13.852Z [DEBUG] {EnvUtils.cpp}: Updated PATH environment variable to: /home/pi/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games:/snap/bin:/home/pi/.aws-iot-device-client:/home/pi/.aws-iot-device-client/jobs:/home/pi/aws-iot-device-client/build:/home/pi/aws-iot-device-client/build/jobs
2022-04-11T07:38:13.852Z [INFO]  {Main.cpp}: Now running AWS IoT Device Client version v1.5.19-868465b
2022-04-11T07:38:13.860Z [ERROR] {FileUtils.cpp}: Failed to create empty file: /var/log/aws-iot-device-client/sdk.log errno: 17 msg: File exists
2022-04-11T07:38:13.860Z [ERROR] {Main.cpp}: *** AWS IOT DEVICE CLIENT FATAL ERROR: Failed to initialize AWS CRT SDK.
AWS IoT Device Client must abort execution, reason: Failed to initialize AWS CRT SDK
Please check the AWS IoT Device Client logs for more information
Aborted
```

I need this setup ASAP to work on deploying a fleet and testing out AWS IoT Jobs. Any help is appreciated.
1 answer · 0 votes · 15 views · asked a month ago

Creating a DynamicFrame using a MongoDB connection; successfully able to crawl data into the Glue Data Catalog

Hi All, I created a MongoDB connection successfully: the connection tests successfully and I was able to use a Crawler to create metadata in the Glue Data Catalog. However, when I use the following, where I am adding my MongoDB database name and collection name in the `additional_options` parameter, I get an error:

```python
data_catalog_database = 'tinkerbell'
data_catalog_table = 'tinkerbell_funds'

glueContext.create_dynamic_frame_from_catalog(
    database = data_catalog_database,
    table_name = data_catalog_table,
    additional_options = {"database": "tinkerbell", "collection": "funds"})
```

The following is the error:

```
An error was encountered:
An error occurred while calling o177.getDynamicFrame.
: java.lang.NoSuchMethodError: com.mongodb.internal.connection.DefaultClusterableServerFactory.<init>(Lcom/mongodb/connection/ClusterId;Lcom/mongodb/connection/ClusterSettings;Lcom/mongodb/connection/ServerSettings;Lcom/mongodb/connection/ConnectionPoolSettings;Lcom/mongodb/connection/StreamFactory;Lcom/mongodb/connection/StreamFactory;Lcom/mongodb/MongoCredential;Lcom/mongodb/event/CommandListener;Ljava/lang/String;Lcom/mongodb/MongoDriverInformation;Ljava/util/List;)V
```

When I use it without the additional options:

```python
glueContext.create_dynamic_frame_from_catalog(
    database = data_catalog_database,
    table_name = data_catalog_table)
```

I get the following error:

```
An error was encountered:
Missing collection name. Set via the 'spark.mongodb.input.uri' or 'spark.mongodb.input.collection' property
Traceback (most recent call last):
  File "/home/glue_user/aws-glue-libs/PyGlue.zip/awsglue/context.py", line 179, in create_dynamic_frame_from_catalog
    return source.getFrame(**kwargs)
  File "/home/glue_user/aws-glue-libs/PyGlue.zip/awsglue/data_source.py", line 36, in getFrame
    jframe = self._jsource.getDynamicFrame()
  File "/home/glue_user/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/home/glue_user/spark/python/pyspark/sql/utils.py", line 117, in deco
    raise converted from None
pyspark.sql.utils.IllegalArgumentException: Missing collection name. Set via the 'spark.mongodb.input.uri' or 'spark.mongodb.input.collection' property
```

Can someone please help me pass these parameters correctly?
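As a cross-check that the connector itself works, a hedged sketch that bypasses the catalog and hands the MongoDB options to the connector directly (the host and credentials are placeholders; database and collection names are from the question):

```python
# Inside a Glue job or notebook where glueContext already exists.
dyf = glueContext.create_dynamic_frame_from_options(
    connection_type="mongodb",
    connection_options={
        "uri": "mongodb://mongo-host:27017",  # placeholder host
        "database": "tinkerbell",
        "collection": "funds",
        "username": "user",                   # placeholder
        "password": "secret",                 # placeholder
    },
)
print(dyf.count())
```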
0 answers · 0 votes · 1 view · asked 2 months ago

Required capabilities for a CloudFormation template

I am getting an exception when deploying a CloudFormation template regarding `Requires capabilities : [CAPABILITY_IAM]`. I have done some research and found out that when using IAM resources in the template we have to explicitly tell AWS that we are aware of IAM resources in the template. I have done that. Below are my commands:

```
$ ./update.sh ScalableAppCore AppServers.yml AppParameterCore.json --capabilities CAPABILITY_IAM
$ ./update.sh ScalableAppCore AppServers.yml AppParameterCore.json --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM
$ ./create.sh ScalableAppCore AppServers.yml AppParameterCore.json --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND
```

I tried all three commands, but the output still shows:

```
An error occurred (InsufficientCapabilitiesException) when calling the UpdateStack operation: Requires capabilities : [CAPABILITY_IAM]
```

Here is the actual code. This is the role I have created for S3:

```
IamS3Role:
  Type: AWS::IAM::Role
  Properties:
    ManagedPolicyArns:
      - "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - ec2.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    Path: /
```

Instance profile attachment:

```
ProfileWithRolesForApp:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: "/"
    Roles:
      - !Ref IamS3Role
```

Please let me know where I am wrong. Thanks in advance.
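Since `--capabilities` is being passed to wrapper scripts, it's worth confirming that `update.sh`/`create.sh` actually forward their extra arguments (e.g. with `"$@"`) to the underlying CloudFormation call. For reference, a boto3 sketch of the same update with the capability stated explicitly (template parameters omitted for brevity):

```python
import boto3

cfn = boto3.client("cloudformation")

with open("AppServers.yml") as f:
    template = f.read()

# Without Capabilities=['CAPABILITY_IAM'] this raises exactly the
# InsufficientCapabilitiesException from the question.
cfn.update_stack(
    StackName="ScalableAppCore",
    TemplateBody=template,
    Capabilities=["CAPABILITY_IAM"],
)
```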
2 answers · 0 votes · 23 views · asked 2 months ago

mongodb-org-4.0.repo: No such file or directory when installing the mongo shell on my AWS Cloud9

I am trying to connect to my DocumentDB cluster on AWS from AWS Cloud9 with [this tutorial][1]. But every time I try to connect, I get a connection failure after several attempts:

```
(scr_env) me:~/environment/sephora $ mongo --ssl --host xxxxxxxxxxxxx:xxxxx --sslCAFile rds-combined-ca-bundle.pem --username username --password mypassword
MongoDB shell version v3.6.3
connecting to: mongodb://xxxxxxxxxxxxx:xxxxx/
2022-03-22T23:12:38.725+0000 W NETWORK  [thread1] Failed to connect to xxx.xx.xx.xxx:xxxxx after 5000ms milliseconds, giving up.
2022-03-22T23:12:38.726+0000 E QUERY    [thread1] Error: couldn't connect to server xxxxxxxxxxxxx:xxxxx, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
```

Indeed, it seems to be missing the VPC configuration, so I tried to follow [this documentation][2]. But I do not know how to install the mongo shell on my AWS Cloud9. It seems that I cannot create the repository file with `echo -e "[mongodb-org-4.0] \name=MongoDB repository baseurl=...`; it returns `mongodb-org-4.0.repo: No such file or directory`. Also, when I tried to install the mongo shell with `sudo yum install -y mongodb-org-shell` (which I did not have, and which I installed), it returns `repolist 0`.

[1]: https://www.youtube.com/watch?v=Ild9ay9U_vY
[2]: https://stackoverflow.com/a/17793856/4764604
2 answers · 0 votes · 2 views · asked 2 months ago

aws s3 sync syncstrategy shows incorrect timestamp

My typical use is to sync a series of directories/subdirectories from S3 to a CIFS-mounted SMB share on a local Linux machine. After a recent local server reboot and remount of the network storage, the sync command now re-transfers EVERYTHING in the directories every time I run it. My belief is that the sync command is pulling the CURRENT time as the local timestamp instead of the modify time of the files.

I ran it with `--dryrun` and `--debug`, and got a series of `syncstrategy` statements that appear to show a comparison between the S3 and local file. If I'm reading this correctly, the file size is the same and the S3 timestamps show correctly, but the local file timestamp is showing the immediate current time. The local Linux environment's `ls` shows the correct modified timestamp, which matches the `s3 ls` of the same file.

Here is example output from the debug. Note the "modified time:" section. I believe this shows the correct modify time for the S3 files, but the time the command was run for the local files. (`modified time: 2022-03-18 16:52:02-07:00 -> 2022-03-24 12:48:39.973111-07:00` <-- the command was run at this datetime, and it seems to climb with each file as the seconds tick by.)

```
2022-03-24 12:48:40,066 - MainThread - awscli.customizations.s3.syncstrategy.base - DEBUG - syncing: com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-10.mp3 -> /mnt/qnapprd_integration/AOD/FZAOD/album-VariousArtists_TimelessHits-10.mp3, size: 2827911 -> 2827911, modified time: 2022-03-18 16:52:02-07:00 -> 2022-03-24 12:48:39.973111-07:00
(dryrun) download: s3://com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-10.mp3 to FZAOD/album-VariousArtists_TimelessHits-10.mp3
2022-03-24 12:48:40,066 - MainThread - awscli.customizations.s3.syncstrategy.base - DEBUG - syncing: com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-11.mp3 -> /mnt/qnapprd_integration/AOD/FZAOD/album-VariousArtists_TimelessHits-11.mp3, size: 3248378 -> 3248378, modified time: 2022-03-18 16:52:12-07:00 -> 2022-03-24 12:48:39.945111-07:00
(dryrun) download: s3://com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-11.mp3 to FZAOD/album-VariousArtists_TimelessHits-11.mp3
2022-03-24 12:48:40,067 - MainThread - awscli.customizations.s3.syncstrategy.base - DEBUG - syncing: com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-12.mp3 -> /mnt/qnapprd_integration/AOD/FZAOD/album-VariousArtists_TimelessHits-12.mp3, size: 4518138 -> 4518138, modified time: 2022-03-18 16:52:12-07:00 -> 2022-03-24 12:48:39.981111-07:00
(dryrun) download: s3://com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-12.mp3 to FZAOD/album-VariousArtists_TimelessHits-12.mp3
2022-03-24 12:48:40,067 - MainThread - awscli.customizations.s3.syncstrategy.base - DEBUG - syncing: com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-13.mp3 -> /mnt/qnapprd_integration/AOD/FZAOD/album-VariousArtists_TimelessHits-13.mp3, size: 8270994 -> 8270994, modified time: 2022-03-18 16:53:03-07:00 -> 2022-03-24 12:48:40.001111-07:00
(dryrun) download: s3://com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-13.mp3 to FZAOD/album-VariousArtists_TimelessHits-13.mp3
2022-03-24 12:48:40,068 - MainThread - awscli.customizations.s3.syncstrategy.base - DEBUG - syncing: com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-14.mp3 -> /mnt/qnapprd_integration/AOD/FZAOD/album-VariousArtists_TimelessHits-14.mp3, size: 5135882 -> 5135882, modified time: 2022-03-18 16:52:33-07:00 -> 2022-03-24 12:48:39.941111-07:00
```

Does anyone have any insight into how this timestamp is pulled, or what might stop `s3 sync` from retrieving the correct modify time of local files?
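A quick way to reproduce the comparison the CLI makes, sketched with the names from the debug output: ask Python for the local mtime on the CIFS mount and compare it to the object's LastModified. If `os.stat` already reports the current time, the mount options, not the CLI, are the likely culprit.

```python
import datetime
import os

import boto3

# Local file and S3 object names are taken from the debug output above.
local_path = "/mnt/qnapprd_integration/AOD/FZAOD/album-VariousArtists_TimelessHits-10.mp3"
mtime = datetime.datetime.fromtimestamp(os.stat(local_path).st_mtime)
print("local mtime:", mtime)

s3 = boto3.client("s3")
head = s3.head_object(
    Bucket="com.my.bucket.share",
    Key="FZAOD/album-VariousArtists_TimelessHits-10.mp3",
)
print("s3 LastModified:", head["LastModified"])
```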
1 answer · 0 votes · 2 views · asked 2 months ago

AWS Go SDK not finding the credentials file at C:/###/.aws/credentials

I am using Amazon Kinesis and the [Go SDK for AWS](https://github.com/aws/aws-sdk-go), but I'm getting an error. This is my code:

```go
package main

import (
	"math/rand"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	_kinesis "github.com/aws/aws-sdk-go/service/kinesis"
)

func main() {
	session, err := session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"),
	})
	handleErr(err)

	kinesis := _kinesis.New(session)

	laugh := strings.Builder{}
	laughingSounds := []string{"haha", "hoho", "hehe", "hehehe", "*snicker*"}
	for i := 0; i < 10; i++ {
		laugh.WriteString(laughingSounds[rand.Intn(len(laughingSounds))])
	}

	_, err = kinesis.PutRecord(&_kinesis.PutRecordInput{
		Data:         []byte(laugh.String()),
		PartitionKey: aws.String("laughs"),
		StreamName:   aws.String("laughs"),
	})
	handleErr(err)
}

func handleErr(err error) {
	if err != nil {
		panic(err)
	}
}
```

However, when I run this I get an error:

```
panic: UnrecognizedClientException: The security token included in the request is invalid.
	status code: 400, request id: dc139793-cd38-fb30-86a3-f92b6410e1c7

goroutine 1 [running]:
main.handleErr(...)
	C:/Users/####/----/main.go:5
main.main()
	C:/Users/####/----/main.go:34 +0x3ac
exit status 2
```

I have run `aws configure`:

```
$ aws configure
AWS Access Key ID [None]: ####
AWS Secret Access Key [None]: ####
Default region name [None]: us-east-1
Default output format [None]:
```

and the `C:/users/####/.aws/credentials` file is created with the correct configuration. But my program still wouldn't execute successfully. When that didn't work, I also set an environment variable like this:

```
$ $env:aws_access_key_id="####"
```

It still doesn't work.

> Version info:

```
$ pwsh -v
PowerShell 7.2.2
$ aws -v
aws-cli/2.4.27 Python/3.8.8 Windows/10 exe/AMD64 prompt/off
```

OS: Windows 11 (version 21H2). Thanks in advance!
0 answers · 0 votes · 1 view · asked 2 months ago

Error log when I try to authenticate my SMTP

I get the below error when I try to authenticate my WordPress SMTP using my SES credentials. This is the error log; how do I fix this?

```
Versions:
WordPress: 5.9.2
WordPress MS: No
PHP: 7.4.27
WP Mail SMTP: 3.3.0
Params:
Mailer: smtp
Constants: No
ErrorInfo: SMTP Error: data not accepted.SMTP server error: DATA END command failed Detail: Message rejected: Email address is not verified. The following identities failed the check in region US-EAST-1: SMTP code: 554
Host: email-smtp.us-east-1.amazonaws.com
Port: 587
SMTPSecure: tls
SMTPAutoTLS: bool(true)
SMTPAuth: bool(true)
Server:
OpenSSL: OpenSSL 1.1.1d 10 Sep 2019
Debug:
Email Source: WP Mail SMTP
Mailer: Other SMTP
SMTP Error: data not accepted.SMTP server error: DATA END command failed Detail: Message rejected: Email address is not verified. The following identities failed the check in region US-EAST-1: SMTP code: 554
SMTP Debug:
2022-03-17 22:48:33 Connection: opening to email-smtp.us-east-1.amazonaws.com:587, timeout=300, options=array()
2022-03-17 22:48:33 Connection: opened
2022-03-17 22:48:33 SERVER -> CLIENT: 220 email-smtp.amazonaws.com ESMTP SimpleEmailService-d-BCF0QJ2IG JBrz7mJEs78kGQwGHZFv
2022-03-17 22:48:33 CLIENT -> SERVER: EHLO
2022-03-17 22:48:33 SERVER -> CLIENT: 250-email-smtp.amazonaws.com250-8BITMIME250-STARTTLS250-AUTH PLAIN LOGIN250 Ok
2022-03-17 22:48:33 CLIENT -> SERVER: STARTTLS
2022-03-17 22:48:33 SERVER -> CLIENT: 220 Ready to start TLS
```
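The root-cause line in the log is the 554 "Email address is not verified" response, which SES returns when the sending identity isn't verified in that region (and, while the account is still in the SES sandbox, recipient addresses must be verified too). A boto3 sketch for checking and re-triggering verification; the address is a placeholder:

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

# See whether the From address is verified in this region.
attrs = ses.get_identity_verification_attributes(
    Identities=["sender@example.com"]  # placeholder address
)
print(attrs["VerificationAttributes"])

# Sends the verification email if the identity isn't verified yet.
ses.verify_email_identity(EmailAddress="sender@example.com")
```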
1 answer · 0 votes · 14 views · asked 2 months ago

cdk destroy deletes the stacks from cdk.out/manifest.json, not my stage stacks

Hello AWS Community, in my team we have just one AWS account, so we deploy the **PreProd** and **Prod** stages in the same account. The problem: I want to delete all stacks produced by the **PreProd** stage before moving to the **Prod** stage, because of duplicate names, etc. I tried using the command

```
cdk destroy --app 'npx ts-node ./bin/AutBusBackend.ts' --all --force
```

but the destroy here deletes the stacks without the ***Pre*** prefix, and at this point we don't have those stacks anyway. Do you know how I can get around this problem and delete the stacks produced in the current stage, rather than the default stack names in cdk.out?

Here is my pipeline code:

```
const repo = codecommit.Repository.fromRepositoryName(this, 'AutbusBackendRepo', 'AutbusBackend');

const pipeline = new CodePipeline(this, 'AutBusPipeline', {
  pipelineName: 'AutBusPipeline',
  synth: new ShellStep('Synth', {
    input: CodePipelineSource.codeCommit(repo, 'master'),
    commands: [
      'npm install -g npm',
      'npm install',
      'npm ci',
      'npm run build',
      'npm run cdk -- synth'
    ]
  })
});

const preProd = pipeline.addStage(new AppStage(this, 'PreProd', {
  env: { account: account, region: region }
}));

const step1 = new ShellStep('IntegrationTesting', {
  commands: [
    'npm install',
    'npm test'
  ]
});

const step2 = new ManualApprovalStep('Manual approval before Prod');

const step3 = new ShellStep('Delete deployed Stacks', {
  commands: [
    'npm install',
    'npm install -g aws-cdk',
    "cdk destroy --app 'npx ts-node ./bin/AutBusBackend.ts' --all --force"
  ]
});

// step2.addStepDependency(step1);
// step3.addStepDependency(step2);
// preProd.addPost(step3);
// preProd.addPost(step2);
// preProd.addPost(step1);

const prodStage = pipeline.addStage(new AppStage(this, 'Prod', {
  env: { account: account, region: region }
}));
```

Thanks in advance for any inspiring ideas.
0 answers · 0 votes · 1 view · asked 2 months ago

MSK custom configuration using CloudFormation

Hi AWS Users, I am trying to spin up an MSK cluster with a custom MSK configuration using my serverless app. I wrote the CloudFormation template for the generation of the MSK cluster and was able to bring it up successfully. I recently saw that AWS added a CloudFormation template for `AWS::MSK::Configuration` [1], and I was trying it out to create a custom configuration. The configuration requires a `ServerProperties` key that is usually plain text in the AWS console. An example of server properties:

```
auto.create.topics.enable=true
default.replication.factor=2
min.insync.replicas=2
num.io.threads=8
num.network.threads=5
num.partitions=10
num.replica.fetchers=2
replica.lag.time.max.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
unclean.leader.election.enable=true
zookeeper.session.timeout.ms=18000
```

`AWS::MSK::Configuration` accepts base64 (API functionality), and I have been trying to implement this using the CloudFormation `Fn::Base64` function, e.g.:

```
Resources:
  ServerlessMSKConfiguration:
    Type: AWS::MSK::Configuration
    Properties:
      ServerProperties:
        Fn::Base64: auto.create.topics.enable=true
```

This gives me back a 400 error during deploy:

```
Resource handler returned message: "[ClientRequestToken: xxxxx] Invalid request body (Service: Kafka, Status Code: 400, Request ID: 1139d840-c02d-4fdb-b68c-cee93673d89d, Extended Request ID: null)" (RequestToken: xxxx HandlerErrorCode: InvalidRequest)
```

Can someone please help me format this `ServerProperties` properly? I'm not sure how to give the proper base64 string in the template. Any help is much appreciated.

[1] [AWS::MSK::Configuration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-msk-configuration.html)
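As a cross-check outside CloudFormation, a boto3 sketch: the Kafka API takes `ServerProperties` as raw bytes and the SDK handles the wire encoding, which helps confirm the properties text itself is valid before debugging `Fn::Base64` (the configuration name is hypothetical):

```python
import boto3

kafka = boto3.client("kafka")

server_properties = b"""auto.create.topics.enable=true
default.replication.factor=2
min.insync.replicas=2
"""

# boto3 transmits the blob for you; no manual base64 step is needed here.
resp = kafka.create_configuration(
    Name="serverless-msk-config",           # hypothetical name
    Description="Custom MSK configuration",
    ServerProperties=server_properties,
)
print(resp["Arn"])
```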
0 answers · 0 votes · 5 views · asked 2 months ago

Getting AccessDenied Error Trying to Get Wildcard SSL with Certbot and Route53 Plugin

I have been tasked with setting up wildcard SSL for some domains. These domains are hosted through AWS Route53. I am using **Certbot** on an **Ubuntu 20.04** machine (we're using Lightsail), where the apps are hosted. I have also installed the Route53 DNS plugin for Certbot.

I run this command:

```
sudo certbot certonly --dns-route53 --email 'me@derp.com' --domain 'mywebsite.rocks' --domain '*.mywebsite.rocks' --agree-tos --non-interactive
```

*Real domains removed for security reasons.*

I get this error:

```
An error occurred (AccessDenied) when calling the ListHostedZones operation: User: arn:aws:sts::789148085273:assumed-role/AmazonLightsailInstanceRole/i-0871f2572906140c4 is not authorized to perform: route53:ListHostedZones because no identity-based policy allows the route53:ListHostedZones action
```

Let me explain first how I set up the IAM user in the AWS console.

1. I created a new Policy with this config:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:GetHostedZone",
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets"
            ],
            "Resource": "arn:aws:route53:::hostedzone/WHAT-EVER-MY-ID-IS-HERE"
        },
        {
            "Effect": "Allow",
            "Action": "route53:ListHostedZones",
            "Resource": "*"
        }
    ]
}
```

*Replacing `WHAT-EVER-MY-ID-IS-HERE` with my actual domain's hosted zone ID.*

2. I then created a new **IAM User** and, during set-up, attached the above policy to the user.
3. I then created an **Access Key** for my new user and took note of the `AccessKeyId` and `SecretAccessKey`. This has access to be used programmatically.
4. On the server, I created a config file at `/root/.aws/config` as instructed in the documentation. *I also tried `~/.aws/config`, but as I am using `sudo`, the former seemed the preferred location (I could be wrong, though, and during my tests neither worked anyway).*

And as previously mentioned, I run the command and get the error. I've searched the web high and low for a solution but cannot find one. I'd appreciate any help I can get from folk.
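One detail in the error worth noticing: the denied principal is the Lightsail instance role (`assumed-role/AmazonLightsailInstanceRole/...`), not the new IAM user, which suggests the credential chain never found the access key and fell back to instance metadata. A one-line sketch to confirm which identity the process actually resolves (run it the same way certbot runs, i.e. under sudo):

```python
import boto3

# Prints the ARN of whatever identity the default credential chain resolves.
# If this shows the Lightsail instance role instead of the IAM user, the
# credentials file isn't where the root user's SDK is looking for it.
print(boto3.client("sts").get_caller_identity()["Arn"])
```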
0 answers · 0 votes · 3 views · asked 2 months ago

Accessing S3 across accounts: works when logged in to the origin account, but not when assuming a role from another account

When I log in directly to the origin account, I have access to the target account's S3:

```
[cloudshell-user@ip-10-0-91-7 ~]$ aws sts get-caller-identity
{
    "UserId": "AIDAxxxxxxxxJBLJ34",
    "Account": "178xxxxxx057",
    "Arn": "arn:aws:iam::178xxxxxx057:user/adminCustomer"
}
[cloudshell-user@ip-10-0-91-7 ~]$ aws s3 ls s3://target-account-bucket
2022-03-10 01:28:05        432 foobar.txx
```

However, if I do it after assuming a role in that account, I can't access the target account:

```
[cloudshell-user@ip-10-1-12-136 ~]$ aws sts get-caller-identity
{
    "UserId": "AROAxxxxxxF5HI7BI:test",
    "Account": "178xxxxxx057",
    "Arn": "arn:aws:sts::178xxxxxx4057:assumed-role/ReadAnalysis/test"
}
[cloudshell-user@ip-10-1-12-136 ~]$ aws s3 ls s3://targer-account-bucket

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
```

However, I do have access to buckets in the origin account:

```
[cloudshell-user@ip-10-1-12-136 ~]$ aws s3 ls s3://origin-account
2022-03-09 21:19:36        432 cli_script.txt
```

The policy on the target-account bucket is as follows:

```
{
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::178xxxxxx057:root"
    },
    "Action": [
        "s3:*"
    ],
    "Resource": [
        "arn:aws:s3:::targer-account-bucket/*",
        "arn:aws:s3:::targer-account-bucket"
    ]
},
```

There are no explicit Deny policies that may apply. Thank you for any advice you can provide.
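A hedged sketch of the usual missing piece: a bucket policy whose principal is the account root only delegates access to that account; each identity there, including the assumed role, still needs its own identity-based policy allowing the S3 actions. Attaching one inline to the role (the policy name is hypothetical; the role name is from the question):

```python
import json

import boto3

iam = boto3.client("iam")

# The bucket policy trusting the account root delegates access to the
# account, but the assumed role still needs its own allow statement.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": [
            "arn:aws:s3:::target-account-bucket",
            "arn:aws:s3:::target-account-bucket/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName="ReadAnalysis",
    PolicyName="AllowTargetBucketRead",  # hypothetical name
    PolicyDocument=json.dumps(policy),
)
```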
1 answer · 0 votes · 4 views · asked 2 months ago

S3 bucket creation with encryption is failing because of AWSSamples::S3BucketEncrypt::Hook

Hi, I have activated **AWSSamples::S3BucketEncrypt::Hook** with the following configuration, but S3 bucket creation with encryption enabled seems to be failing because of the hook. It works when I disable the hook. Could this be an issue?

```
{
    "CloudFormationConfiguration": {
        "HookConfiguration": {
            "TargetStacks": "ALL",
            "FailureMode": "FAIL",
            "Properties": {
                "minBuckets": "1",
                "encryptionAlgorithm": "AES256"
            }
        }
    }
}
```

```
{
    "CloudFormationConfiguration": {
        "HookConfiguration": {
            "TargetStacks": "ALL",
            "FailureMode": "FAIL",
            "Properties": {
                "minBuckets": "1",
                "encryptionAlgorithm": "aws:kms"
            }
        }
    }
}
```

[AWSSamples::S3BucketEncrypt::Hook configuration](https://imgur.com/w9NnjEP)
[AWSSamples::S3BucketEncrypt::Hook](https://imgur.com/OsETMvV)

**CloudFormation for S3 bucket with AES256 encryption** (expected to pass):

```
AWSTemplateFormatVersion: 2010-09-09
Description: S3 bucket with default encryption
Resources:
  EncryptedS3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Sub 'encryptedbucket-${AWS::Region}-${AWS::AccountId}'
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: 'AES256'
    DeletionPolicy: Delete
```

**CloudFormation for S3 bucket with KMS encryption** (expected to pass):

```
AWSTemplateFormatVersion: "2010-09-09"
Description: This CloudFormation template provisions an encrypted S3 Bucket
Resources:
  EncryptedS3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Sub 'encryptedbucket-${AWS::Region}-${AWS::AccountId}'
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: 'aws:kms'
              KMSMasterKeyID: !Ref EncryptionKey
            BucketKeyEnabled: true
      Tags:
        - Key: "keyname1"
          Value: "value1"
  EncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      Description: KMS key used to encrypt the resource type artifacts
      EnableKeyRotation: true
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
          - Sid: Enable full access for owning account
            Effect: Allow
            Principal:
              AWS: !Ref "AWS::AccountId"
            Action: kms:*
            Resource: "*"
Outputs:
  EncryptedBucketName:
    Value: !Ref EncryptedS3Bucket
```
0 answers · 0 votes · 9 views · asked 3 months ago

Unable to resolve "Learn Python on AWS Workshop Python" error involving looping over JSON

I haven't been able to resolve the following error that keeps showing up at the end of the [Looping over JSON](https://catalog.us-east-1.prod.workshops.aws/workshops/3d705026-9edc-40e8-b353-bdabb116c89c/en-US/loops/lab-6/step-2#looping-over-dictionaries-and-json) portion of Lab 6 in the "Learn Python on AWS Workshop" module:

```
error: the following arguments are required: --file
```

Following the directions throughout the lab, I created the JSON file called `translate_input.json` and copied in the list of dictionaries. Then I created a new Python file called `lab_6_step_2_loops.py`, typed in the text as directed, and ran the program with the terminal command `python lab_6_step_2_loops.py --file translate_input.json`, after which the above error appears.

I reached out to some coworkers who are also working on this, but none of them have gotten back to me yet. Additionally, I've gone over all of the previous labs of this Python workshop to see what (if anything) I missed, read numerous explanations, and watched several tutorials on YouTube regarding argparse and json. All of this was helpful but danced around the general issue without actually helping me resolve it, which leads me to think the issue is related to the first section of the [code](https://catalog.us-east-1.prod.workshops.aws/workshops/3d705026-9edc-40e8-b353-bdabb116c89c/en-US/loops/lab-6/step-2#looping-over-dictionaries-and-json):

```
parser = argparse.ArgumentParser(description="Provides translation between one source language and another of the same set of languages.")
parser.add_argument(
    '--file',
    dest='filename',
    help="The path to the input file. The file should be valid json",
    required=True)
```

Leaving this code as is, inputting the file name, file path, destination, or some combination keeps bringing up the same error as above. When I follow the directions in the link and literally just copy and paste the text (no typing), I still get this error. Any thoughts?
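For comparison, a minimal self-contained script using the same pattern; if this accepts `--file` but the lab script doesn't, the difference is likely in exactly how `add_argument` was typed or where `parse_args` is called:

```python
import argparse
import json

# Same argument declaration as the lab snippet above.
parser = argparse.ArgumentParser(
    description="Provides translation between one source language and another."
)
parser.add_argument(
    "--file",
    dest="filename",
    help="The path to the input file. The file should be valid json",
    required=True,
)
args = parser.parse_args()

# Load the list of dictionaries and loop over it.
with open(args.filename) as f:
    records = json.load(f)

for record in records:
    print(record)

# Usage: python lab_6_step_2_loops.py --file translate_input.json
```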
1 answer · 0 votes · 5 views · asked 3 months ago

I'm trying to use the AWS CLI with PowerShell on a virtual server, but I cannot use the configure command correctly. What am I doing wrong?

We installed AWS CLI 2.4.20 on a Windows Server 2012 R2 machine, and we allow outbound traffic on ports 80 and 443. I have the credentials I need to enter for the configuration, but when I enter "aws configure", the PowerShell cursor just blinks at me. I am running PowerShell in administrator mode as well. I have looked at all the online AWS documentation for configuring the AWS CLI and have watched a couple of videos; it does not look like it should be this difficult. Did I miss a step? Here are the debug results, in case they help:

```
PS C:\Windows\system32> aws configure --debug
aws : 2022-02-23 16:27:48,313 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/2.4.20 Python/3.8.8 Windows/2012ServerR2 exe/AMD64
At line:1 char:1
+ aws configure --debug
+ ~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (2022-02-23 16:2...verR2 exe/AMD64:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
2022-02-23 16:27:48,313 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['configure', '--debug']
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_s3 at 0x000000DE405A7DC0>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_ddb at 0x000000DE403FCA60>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method BasicCommand.add_command of <class 'awscli.customizations.configure.configure.ConfigureCommand'>>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function change_name at 0x000000DE403A9280>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function change_name at 0x000000DE403B13A0>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function alias_opsworks_cm at 0x000000DE405B6820>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_history_commands at 0x000000DE4044B940>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method BasicCommand.add_command of <class 'awscli.customizations.devcommands.CLIDevCommand'>>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_waiters at 0x000000DE405ADA60>
2022-02-23 16:27:48,360 - MainThread - botocore.loaders - DEBUG - Loading JSON file: C:\Program Files\Amazon\AWSCLIV2\awscli\data\cli.json
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_types at 0x000000DE404FD940>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function no_sign_request at 0x000000DE404FF4C0>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_verify_ssl at 0x000000DE404FF430>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_cli_read_timeout at 0x000000DE404FF5E0>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_cli_connect_timeout at 0x000000DE404FF550>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <built-in method update of dict object at 0x000000DE406545C0>
2022-02-23 16:27:48,360 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/2.4.20 Python/3.8.8 Windows/2012ServerR2 exe/AMD64 prompt/off
2022-02-23 16:27:48,360 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['configure', '--debug']
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_timestamp_parser at 0x000000DE405A9430>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function register_uri_param_handler at 0x000000DE40154D30>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_binary_formatter at 0x000000DE40616CA0>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function no_pager_handler at 0x000000DE40152160>
2022-02-23 16:27:48,360 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_assume_role_provider_cache at 0x000000DE401ABA60>
2022-02-23 16:27:48,375 - MainThread - botocore.utils - DEBUG - IMDS ENDPOINT: http://169.254.169.254/
2022-02-23 16:27:48,375 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function attach_history_handler at 0x000000DE4044B820>
2022-02-23 16:27:48,375 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_json_file_cache at 0x000000DE403F9940>
2022-02-23 16:27:48,375 - MainThread - botocore.hooks - DEBUG - Event building-command-table.configure: calling handler <function _add_wizard_command at 0x000000DE40616C10>
2022-02-23 16:27:48,375 - MainThread - botocore.hooks - DEBUG - Event building-command-table.configure: calling handler <function add_waiters at 0x000000DE405ADA60>
2022-02-23 16:27:48,375 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.configure.anonymous: calling handler <awscli.paramfile.URIArgumentHandler object at 0x000000DE4079DE80>
```
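Two observations that may help narrow this down. First, the NativeCommandError near the top of the log is how older PowerShell versions surface a native command's stderr stream (the CLI writes its debug output to stderr), so it is not necessarily a failure by itself. Second, if the interactive prompt is what hangs, the same profile values can be written non-interactively with "aws configure set", which avoids the prompt entirely. A minimal sketch, where the key values and region are placeholders to replace with your own:

```
aws configure set aws_access_key_id AKIAIOSFODNN7EXAMPLE                          # placeholder key ID
aws configure set aws_secret_access_key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY  # placeholder secret
aws configure set region us-east-1
aws sts get-caller-identity   # verify the written profile resolves end to end
```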
1
answers
0
votes
3
views
asked 3 months ago

CDK on local environment issue

Whenever I attempt to run a cdk command on my local machine, I receive the following error. For context: I am running CDK v2 on a Windows device with Python 3.7.9, and `cdk --version` returns 2.19.0. I have attempted uninstalling and reinstalling the CDK multiple times. The same CDK repository works for two of my teammates. I would appreciate anyone's help.
```
Traceback (most recent call last):
  File "app.py", line 4, in <module>
    import aws_cdk as cdk
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\__init__.py", line 24257, in <module>
    from . import aws_apigateway
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\aws_apigateway\__init__.py", line 1549, in <module>
    from ..aws_certificatemanager import ICertificate as _ICertificate_c194c70b
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\aws_certificatemanager\__init__.py", line 184, in <module>
    from ..aws_cloudwatch import (
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\aws_cloudwatch\__init__.py", line 500, in <module>
    from ..aws_iam import Grant as _Grant_a7ae64f8, IGrantable as _IGrantable_71c4f5de
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\aws_iam\__init__.py", line 654, in <module>
    "policy_dependable": "policyDependable",
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\aws_iam\__init__.py", line 662, in AddToPrincipalPolicyResult
    policy_dependable: typing.Optional[constructs.IDependable] = None,
AttributeError: module 'constructs' has no attribute 'IDependable'
Subprocess exited with error 1
```
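A likely place to look, going by the last line of the traceback: `IDependable` lives in the separate `constructs` package as of its major version 10, which is what CDK v2's `aws_cdk` library expects, so an older `constructs` install would raise exactly this AttributeError. A minimal check-and-fix sketch (the version pin is an assumption; match it to whatever the project's requirements file specifies):

```
pip show constructs                                 # check which version is actually installed
pip install --upgrade "constructs>=10.0.0,<11.0.0"  # CDK v2 expects constructs at major version 10
```

Since the same repository works on two teammates' machines, comparing `pip freeze` output across the environments is a quick way to confirm the mismatch.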
1
answers
0
votes
34
views
asked 3 months ago

credentials working with CLI but not with Java SDK

I'm having trouble getting a set of credentials to work with the Java SDK when they work with the CLI.

Background: I had some code working on an AWS Elastic Beanstalk instance, where I was setting the environment variables "aws.accessKeyId" and "aws.secretKey" and using the SystemPropertiesCredentialsProvider to build clients for accessing SQS, S3, etc. Following a security review by an internal team, I'm attempting to update this to use a different method of finding credentials, namely storing the credentials in an external file instead of environment variables. To that end, here's what I've done:

1. I'm using an IAM user on this account, which belongs to a group that has the AmazonSQSFullAccess policy (among others) attached. This is unchanged from the working version of my app, where I had the same user but a different credentials provider.
2. I have regenerated the security credentials for this user and verified they are active.
3. To test, I have the following set up locally in my ~/.aws/credentials file:
```
[sdk_temp_test]
aws_access_key_id = <redacted>
aws_secret_access_key = <redacted>
```
and in my ~/.aws/config file:
```
[profile sdk_temp_test]
region = us-east-1
```
4. At a shell prompt, if I then do "export AWS_PROFILE=sdk_temp_test", I can run the following commands, which show that the credentials work and can access basic SQS functionality. I'm not including the output here, but the returned data shows that I am calling CLI functions as the user I expect, and I am retrieving the queues I expect to see in the us-east-1 region for this account.
```
aws sts get-caller-identity
aws sqs list-queues
```
So far, so good. However, I then attempt to do something like the following:

5. Create a file called "localtest.properties" that contains the following and is accessible on the classpath of my Java application:
```
accessKey="<redacted>"
secretKey="<redacted>"
```
6. Run code like the following (a standalone example that illustrates the problem):
```
AWSCredentialsProvider provider = new ClasspathPropertiesFileCredentialsProvider("localtest.properties");
AWSCredentials credentials = provider.getCredentials();
String accessKeyId = credentials.getAWSAccessKeyId();
String secret = credentials.getAWSSecretKey();
System.out.println("accesskey is '" + accessKeyId + "'; secret is '" + secret + "'");

AmazonSQSClient client = (AmazonSQSClient) AmazonSQSClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .withCredentials(provider)
        .build();

System.out.println("LIST QUEUES TEST");
ListQueuesResult lqr = client.listQueues();
```
The debug line correctly prints out the credentials I expect, but the listQueues call throws the following exception:
```
Exception in thread "main" com.amazonaws.services.sqs.model.AmazonSQSException: The security token included in the request is invalid. (Service: AmazonSQS; Status Code: 403; Error Code: InvalidClientTokenId; Request ID: <redacted>; Proxy: null)
```

So I'm a little stuck. The credentials are good, because they work in my CLI test. I think my code is OK; I am just switching credentials providers. And the new provider appears to be finding the correct credentials, based on the debug output. But put it all together, and the SDK call fails with that exception.

How do I troubleshoot this? Is it possible to get more details beyond "InvalidClientTokenId", i.e. what specifically is wrong? Can I look up the request ID somewhere to troubleshoot? Does the ClasspathPropertiesFileCredentialsProvider need something that the SystemPropertiesCredentialsProvider I used before did not?

I opened a ticket with AWS support and they said SDK issues were a little out of scope; they pointed me towards articles on the credentials chain and some sample code for the NodeJS SDK, which is structured a little differently. Regarding the credentials chain: I think a custom provider should bypass that? Just in case, I've ensured there are no environment variables like AWS_ACCESS_KEY_ID and no Java properties like aws.accessKeyId, and I've even temporarily deleted my ~/.aws/credentials and config files while running the above code to make sure that no other credential is "sneaking in", but I still get the same exception.

I do get some warnings while running the Java code. They are just warnings, and I think the profile-prefix warning is related to the entry in my ~/.aws/config file, so I don't believe this is related to my problem, but I'm including it here just in case:
```
Feb 21, 2022 12:00:45 PM com.amazonaws.auth.profile.internal.BasicProfileConfigLoader loadProfiles
WARNING: Your profile name includes a 'profile ' prefix. This is considered part of the profile name in the Java SDK, so you will need to include this prefix in your profile name when you reference this profile from your Java code.
(this repeats a number of times)

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.amazonaws.util.XpathUtils (file:/Users/tfeiler/.m2/repository/com/amazonaws/aws-java-sdk-core/1.11.964/aws-java-sdk-core-1.11.964.jar) to method com.sun.org.apache.xpath.internal.XPathContext.getDTMManager()
WARNING: Please consider reporting this to the maintainers of com.amazonaws.util.XpathUtils
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
(this occurs right before the exception is thrown)
```
Anyone got any advice on things to try or how to troubleshoot this?
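One detail that may be worth ruling out first: the properties file in step 5 wraps its values in quotation marks, and java.util.Properties treats quotes as literal characters, so the provider would hand the SDK an access key that contains `"` characters. That is exactly the kind of malformed key that produces InvalidClientTokenId, and it would not affect the CLI test, since ~/.aws/credentials is unquoted. A minimal sketch for checking this with the Java SDK v1 classes already in use (printing with visible delimiters, then making the simplest possible signed call; the class name CredentialCheck is just a placeholder):

```
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.ClasspathPropertiesFileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.GetCallerIdentityRequest;

public class CredentialCheck {
    public static void main(String[] args) {
        AWSCredentialsProvider provider =
                new ClasspathPropertiesFileCredentialsProvider("localtest.properties");
        AWSCredentials credentials = provider.getCredentials();

        // Print with visible delimiters: stray quotes or whitespace read from
        // the properties file would show up inside the brackets.
        System.out.println("accessKey=[" + credentials.getAWSAccessKeyId() + "]");

        // Same credentials, simplest possible signed call: the Java-side
        // equivalent of `aws sts get-caller-identity`.
        AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .withCredentials(provider)
                .build();
        System.out.println(sts.getCallerIdentity(new GetCallerIdentityRequest()).getArn());
    }
}
```

If stray quotes do appear inside the brackets, removing them from localtest.properties should make the SDK behave the same as the CLI test.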
1
answers
0
votes
69
views
asked 3 months ago