
Questions tagged with AWS Command Line Interface



Unable to activate AWS Genomics CLI

I would like to use the AWS Genomics CLI, but have failed to activate it both locally and on an EC2 instance. I do not have permissions to alter my IAM roles, and my IT team has tried to set up my account based on the info provided in the Setup documentation. However, I have had no luck. I have tried with both an existing VPC and S3 bucket, as well as without. Any help is appreciated. I would like to use AWS's resources! I am pasting one error below:

```
agc account activate
2022-06-17T18:19:28+02:00 π’Š Activating AGC with bucket '' and VPC ''
Bootstrapping CDK... [--o-] 51s
2022-06-17T18:20:21+02:00 ✘ ⏳ Bootstrapping environment aws://272554863871/us-east-2...
2022-06-17T18:20:21+02:00 ✘ Using default execution policy of 'arn:aws:iam::aws:policy/AdministratorAccess'. Pass '--cloudformation-execution-policies' to customize.
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit: creating CloudFormation changeset...
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:43 PM | REVIEW_IN_PROGRESS | AWS::CloudFormation::Stack | Agc-CDKToolkit User Initiated
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:50 PM | CREATE_IN_PROGRESS | AWS::CloudFormation::Stack | Agc-CDKToolkit User Initiated
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:55 PM | CREATE_IN_PROGRESS | AWS::ECR::Repository | ContainerAssetsRepository
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:55 PM | CREATE_IN_PROGRESS | AWS::IAM::Role | FilePublishingRole
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:55 PM | CREATE_IN_PROGRESS | AWS::SSM::Parameter | CdkBootstrapVersion
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:55 PM | CREATE_IN_PROGRESS | AWS::IAM::Role | CloudFormationExecutionRole
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:55 PM | CREATE_IN_PROGRESS | AWS::IAM::Role | LookupRole
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:55 PM | CREATE_IN_PROGRESS | AWS::S3::Bucket | StagingBucket
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:55 PM | CREATE_IN_PROGRESS | AWS::IAM::Role | ImagePublishingRole
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:55 PM | CREATE_IN_PROGRESS | AWS::IAM::Role | FilePublishingRole Resource creation Initiated
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:55 PM | CREATE_IN_PROGRESS | AWS::IAM::Role | LookupRole Resource creation Initiated
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:56 PM | CREATE_IN_PROGRESS | AWS::IAM::Role | CloudFormationExecutionRole Resource creation Initiated
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:56 PM | CREATE_FAILED | AWS::S3::Bucket | StagingBucket cdk-agc-assets-272554863871-us-east-2 already exists
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:56 PM | CREATE_FAILED | AWS::IAM::Role | FilePublishingRole Resource creation cancelled
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:56 PM | CREATE_FAILED | AWS::IAM::Role | ImagePublishingRole Resource creation cancelled
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:56 PM | CREATE_FAILED | AWS::IAM::Role | LookupRole Resource creation cancelled
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:56 PM | CREATE_FAILED | AWS::IAM::Role | CloudFormationExecutionRole Resource creation cancelled
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:56 PM | CREATE_FAILED | AWS::SSM::Parameter | CdkBootstrapVersion Resource creation cancelled
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:57 PM | CREATE_FAILED | AWS::ECR::Repository | ContainerAssetsRepository Resource creation cancelled
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:19:57 PM | ROLLBACK_IN_PROGRESS | AWS::CloudFormation::Stack | Agc-CDKToolkit The following resource(s) failed to create: [ImagePublishingRole, FilePublishingRole, CdkBootstrapVersion, LookupRole, StagingBucket, CloudFormationExecutionRole, ContainerAssetsRepository]. Rollback requested by user.
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:20:10 PM | DELETE_IN_PROGRESS | AWS::IAM::Role | CloudFormationExecutionRole
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:20:10 PM | DELETE_IN_PROGRESS | AWS::SSM::Parameter | CdkBootstrapVersion
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:20:10 PM | DELETE_IN_PROGRESS | AWS::IAM::Role | FilePublishingRole
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:20:10 PM | DELETE_SKIPPED | AWS::S3::Bucket | StagingBucket
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:20:10 PM | DELETE_IN_PROGRESS | AWS::ECR::Repository | ContainerAssetsRepository
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 0/12 | 6:20:10 PM | DELETE_IN_PROGRESS | AWS::IAM::Role | LookupRole
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 1/12 | 6:20:10 PM | DELETE_COMPLETE | AWS::IAM::Role | ImagePublishingRole
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 2/12 | 6:20:11 PM | DELETE_COMPLETE | AWS::SSM::Parameter | CdkBootstrapVersion
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 3/12 | 6:20:11 PM | DELETE_COMPLETE | AWS::IAM::Role | FilePublishingRole
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 4/12 | 6:20:11 PM | DELETE_COMPLETE | AWS::ECR::Repository | ContainerAssetsRepository
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 5/12 | 6:20:11 PM | DELETE_COMPLETE | AWS::IAM::Role | CloudFormationExecutionRole
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 6/12 | 6:20:12 PM | DELETE_COMPLETE | AWS::IAM::Role | LookupRole
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 7/12 | 6:20:13 PM | ROLLBACK_COMPLETE | AWS::CloudFormation::Stack | Agc-CDKToolkit
2022-06-17T18:20:21+02:00 ✘
2022-06-17T18:20:21+02:00 ✘ Failed resources:
2022-06-17T18:20:21+02:00 ✘ Agc-CDKToolkit | 6:19:56 PM | CREATE_FAILED | AWS::S3::Bucket | StagingBucket cdk-agc-assets-272554863871-us-east-2 already exists
2022-06-17T18:20:21+02:00 ✘ ❌ Environment aws://272554863871/us-east-2 failed bootstrapping: Error: The stack named Agc-CDKToolkit failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
2022-06-17T18:20:21+02:00 ✘     at waitForStackDeploy (/Users/Moneill/.agc/cdk/node_modules/aws-cdk/lib/api/util/cloudformation.ts:311:11)
2022-06-17T18:20:21+02:00 ✘     at processTicksAndRejections (node:internal/process/task_queues:96:5)
2022-06-17T18:20:21+02:00 ✘     at prepareAndExecuteChangeSet (/Users/Moneill/.agc/cdk/node_modules/aws-cdk/lib/api/deploy-stack.ts:376:26)
2022-06-17T18:20:21+02:00 ✘     at /Users/Moneill/.agc/cdk/node_modules/aws-cdk/lib/cdk-toolkit.ts:575:24
2022-06-17T18:20:21+02:00 ✘     at async Promise.all (index 0)
2022-06-17T18:20:21+02:00 ✘     at CdkToolkit.bootstrap (/Users/Moneill/.agc/cdk/node_modules/aws-cdk/lib/cdk-toolkit.ts:572:5)
2022-06-17T18:20:21+02:00 ✘     at initCommandLine (/Users/Moneill/.agc/cdk/node_modules/aws-cdk/lib/cli.ts:342:12)
2022-06-17T18:20:21+02:00 ✘
2022-06-17T18:20:21+02:00 ✘ The stack named Agc-CDKToolkit failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
2022-06-17T18:20:21+02:00 ✘ error="exit status 1"
Error: an error occurred invoking 'account activate'
with variables: {bucketName: vpcId: publicSubnets:false customTags:map[] subnets:[] amiId:}
caused by: exit status 1
```
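Reading the "Failed resources" line, the bootstrap fails because the staging bucket `cdk-agc-assets-272554863871-us-east-2` already exists (likely a leftover from an earlier bootstrap attempt), and the rollback left the `Agc-CDKToolkit` stack in `ROLLBACK_COMPLETE`. A hedged cleanup sketch follows; the stack and bucket names are taken from the log above, and commands only print by default (`DRY_RUN=1`) so the steps can be reviewed first. Confirm the bucket holds nothing you need before removing it:

```shell
# Dry-run guard: with DRY_RUN=1 (the default) commands are printed, not executed.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# 1. Delete the stack stuck in ROLLBACK_COMPLETE (names from the log above).
run aws cloudformation delete-stack --stack-name Agc-CDKToolkit --region us-east-2
run aws cloudformation wait stack-delete-complete --stack-name Agc-CDKToolkit --region us-east-2

# 2. The StagingBucket was DELETE_SKIPPED during rollback, so remove it by hand.
#    WARNING: --force empties the bucket before deleting it.
run aws s3 rb s3://cdk-agc-assets-272554863871-us-east-2 --force

# 3. Retry activation.
run agc account activate
```

Set `DRY_RUN=0` only after verifying the printed commands against your account.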
0 answers · 0 votes · 17 views · asked 7 days ago

Exec Linux command inside a container

Hi team, I connected to my Envoy container using this command:

```
aws ecs execute-command --cluster cluster-name --task task-id --container container-name --interactive --command "/bin/sh"
```

Once inside the container, I tried to execute this Linux command: `ps aux`

I get this error: `sh: ps: command not found`

The version of the distribution inside the Envoy container is:

```
Linux version 4.14.276-211.499.amzn2.x86_64 (mockbuild@ip-xx-x-xx-225) (gcc version 7.3.1 20180712 (Red Hat 7.3.1-13) (GCC)) #1 SMP Wed Apr 27 21:08:48 UTC 2022
```

I tried to install ps with `yum install -y procps` and got this error:

```
Loaded plugins: ovl, priorities
Could not retrieve mirrorlist http://amazonlinux.default.amazonaws.com/2/core/latest/x86_64/mirror.list error was
14: curl#56 - "Recv failure: Connection reset by peer"

One of the configured repositories failed (Unknown), and yum doesn't have
enough cached data to continue. At this point the only safe thing yum can do
is fail. There are a few ways to work "fix" this:

    1. Contact the upstream for the repository and get them to fix the problem.

    2. Reconfigure the baseurl/etc. for the repository, to point to a working
       upstream. This is most often useful if you are using a newer
       distribution release than is supported by the repository (and the
       packages for the previous distribution release still work).

    3. Run the command with the repository temporarily disabled
            yum --disablerepo=<repoid> ...

    4. Disable the repository permanently, so yum won't use it by default. Yum
       will then just ignore the repository until you permanently enable it
       again or use --enablerepo for temporary usage:
            yum-config-manager --disable <repoid>
        or
            subscription-manager repos --disable=<repoid>

    5. Configure the failing repository to be skipped, if it is unavailable.
       Note that yum will try to contact the repo. when it runs most commands,
       so will have to try and fail each time (and thus. yum will be be much
       slower). If it is a very temporary problem though, this is often a nice
       compromise:
            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: amzn2-core/2/x86_64
```

Is there a way to run basic commands inside the Envoy container like ps, map...? Thank you.
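Since the package repos are unreachable from the task, one workaround that needs no packages at all is reading the kernel's `/proc` tree directly. The sketch below is a rough substitute for `ps` that should work in any Linux container with a readable `/proc`:

```shell
# List PID and command name for every process visible in the container,
# using only /proc and shell built-ins (no ps, no package installs).
for pid_dir in /proc/[0-9]*; do
    [ -r "$pid_dir/comm" ] || continue
    printf '%s\t%s\n' "${pid_dir#/proc/}" "$(cat "$pid_dir/comm")"
done
```

More detail (full command line, state, memory) is available per process in `/proc/<pid>/cmdline` and `/proc/<pid>/status` if you need it.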
1 answer · 0 votes · 40 views · asked 8 days ago

My local MongoDB is refusing to connect with AWS SAM Lambda in python

I have set up an AWS Lambda function using the AWS SAM app. I have also installed local MongoDB on my machine. I am trying to make a connection between AWS Lambda and MongoDB. You can see my code below:

```
import json
import pymongo

client = pymongo.MongoClient('mongodb://localhost:27017/')
mydb = client['Employee']

def lambda_handler(event, context):
    information = mydb.employeeInformation
    record = {
        'FirstName': 'Rehan',
        'LastName': 'CH',
        'Department': "IT"
    }
    information.insert_one(record)
    print("Record added")
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": "hello world",
            # "location": ip.text.replace("\n", "")
        }),
    }
```

When I run the SAM app using the command `sam local invoke`, it throws the error you can see below:

```
[ERROR] ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 62b16aa14a95a3e56eb0e7cb, topology_type: Unknown, servers: [<ServerDescription ('localhost', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('localho raise ServerSelectionTimeoutError(, line 227, in _select_servers_looprtn_support
```

I have searched for this error and found some results, but they didn't help, which is why I have to post it again. It's my first time working with MongoDB. Can someone tell me how to resolve this error, or where I am going wrong?
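A likely cause worth checking: `sam local invoke` runs the handler inside a Docker container, where `localhost` is the container itself rather than the machine running `mongod`. SAM sets `AWS_SAM_LOCAL=true` inside that container, so the code can switch hosts when running locally. A sketch of the host selection follows (assumption: Docker Desktop's `host.docker.internal` alias is available; on plain Linux Docker you may need `--add-host` or the bridge IP instead):

```shell
# Pick the Mongo host depending on whether we are inside sam local's container.
AWS_SAM_LOCAL=true   # simulated here; `sam local invoke` sets this on real runs
if [ "${AWS_SAM_LOCAL:-}" = "true" ]; then
    mongo_host=host.docker.internal   # reaches the Docker host machine
else
    mongo_host=localhost
fi
MONGO_URI="mongodb://${mongo_host}:27017/"
echo "$MONGO_URI"
```

In the Python handler the equivalent check is `os.environ.get("AWS_SAM_LOCAL")`, with the resulting URI passed to `pymongo.MongoClient`. Also make sure `mongod` is listening on an interface the container can reach, not only `127.0.0.1`.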
2 answers · 0 votes · 27 views · asked 8 days ago

Expo build - APK upload fails when using aws-cli command via GitHub Actions, but works from terminal (local)

Command used in GitHub Actions to download the APK from Expo:

```
latest_build=$(npx eas-cli build:list --status="finished" --distribution="store" --json | jq '[.[] | select(.releaseChannel=="development")][1].artifacts.buildUrl')
```

Commands used in GitHub Actions for create-upload and upload in Device Farm:

```
response_upload_app=$(aws devicefarm create-upload --project-arn $DEV_PROJECT_ARN --name latest_build.apk --type ANDROID_APP)
curl -T latest_build.apk $url_upload_app
```

The same commands, when run locally in a terminal with the APK available in a folder, work perfectly fine. Also, at times, running in the local terminal gave a request timeout error.

This is the error log in GitHub Actions when running the get-upload command for the corresponding create-upload ARN in Device Farm:

```
"metadata": "{\"errorMessageUrl\":\"https://docs.aws.amazon.com/console/devicefarm/ANDROID_APP_AAPT_DEBUG_BADGING_FAILED\",\"errorMessage\":\"We could not extract information about your Android application package. Please verify that the application package is valid by running the command \\\"aapt debug badging <path to your test package>\\\", and try again after the command does not print any error.\",\"errorCode\":\"ANDROID_APP_AAPT_DEBUG_BADGING_FAILED\"}
```

Debugging done so far: ran `aapt debug badging <path to apk>/latest_build.apk` and was able to get package information correctly.
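Two things stand out in the Actions job, sketched below as assumptions to verify: the captured `buildUrl` still carries JSON quotes because `jq` was run without `-r`, and nothing in the workflow downloads the APK into the runner before `curl -T` uploads it (locally the file already sits in a folder, which would explain the difference). The URL below is a made-up placeholder, and the upload commands only print under the dry-run guard:

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Hypothetical value as `jq` (without -r) would emit it: note the quotes.
latest_build='"https://expo.dev/artifacts/abc123/app.apk"'

# Strip the JSON quotes (equivalent to having used `jq -r` in the first place).
build_url=${latest_build#\"}
build_url=${build_url%\"}

# Download the APK into the runner's workspace, then upload it to Device Farm.
run curl -fL -o latest_build.apk "$build_url"
run curl -T latest_build.apk "$url_upload_app"
```

If the file uploaded to Device Farm is empty or is an HTML error page rather than an APK, `aapt debug badging` on their side fails exactly as the error message describes.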
0 answers · 0 votes · 11 views · asked 9 days ago

AWS CLI greengrass v2 create-deployment using JSON to import lambda not importing lambda artifact

I am importing Lambdas as components for Greengrass v2 using the AWS CLI. The Lambdas import successfully, but when I deploy to Greengrass v2 I get the following error:

> Error occurred while processing deployment. {deploymentId=********************, serviceName=DeploymentService, currentState=RUNNING}java.util.concurrent.ExecutionException: com.aws.greengrass.componentmanager.exceptions.NoAvailableComponentVersionException: No local or cloud component version satisfies the requirements. Check whether the version constraints conflict and that the component exists in your AWS account with a version that matches the version constraints. If the version constraints conflict, revise deployments to resolve the conflict. Component devmgmt.device.scheduler version constraints: thinggroup/dev-e01 requires =3.0.61. at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)

The version exists, as it was imported successfully, but the artifact is not transferred to the Greengrass core. If I import the Lambda from the AWS Management Console, it works as expected. Here are my CLI JSON input file and the command I am running. What am I missing?

`aws greengrassv2 create-component-version --cli-input-json file://lambda-import-worker.json`

*lambda-import-worker.json file:*

```
{
  "lambdaFunction": {
    "lambdaArn": "arn:aws:lambda:*******:***************:function:devmgmt-worker:319",
    "componentName": "devmgmt.device.scheduler",
    "componentVersion": "3.0.61",
    "componentPlatforms": [
      {
        "name": "Linux amd64",
        "attributes": { "os": "All", "platform": "All" }
      }
    ],
    "componentDependencies": {
      "aws.greengrass.TokenExchangeService": {
        "versionRequirement": ">=2.0.0 <3.0.0",
        "dependencyType": "HARD"
      },
      "aws.greengrass.LambdaLauncher": {
        "versionRequirement": ">=2.0.0 <3.0.0",
        "dependencyType": "HARD"
      },
      "aws.greengrass.LambdaRuntimes": {
        "versionRequirement": ">=2.0.0 <3.0.0",
        "dependencyType": "SOFT"
      }
    },
    "componentLambdaParameters": {
      "maxQueueSize": 1000,
      "maxInstancesCount": 100,
      "maxIdleTimeInSeconds": 120,
      "timeoutInSeconds": 60,
      "statusTimeoutInSeconds": 60,
      "pinned": true,
      "inputPayloadEncodingType": "json",
      "environmentVariables": {},
      "execArgs": [],
      "linuxProcessParams": { "isolationMode": "NoContainer" },
      "eventSources": [
        { "topic": "device/notice", "type": "PUB_SUB" },
        { "topic": "$aws/things/thingnameManager/shadow/name/ops/update/accepted", "type": "IOT_CORE" },
        { "topic": "dev/device", "type": "IOT_CORE" }
      ]
    }
  }
}
```
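Since the deployment error is about version resolution on the core, one way to narrow this down is to confirm what the cloud actually registered for the component, and in particular whether the CLI call and the core are using the same region and account. A hedged sketch follows; the region and account in the ARN are placeholders (they are masked in the input file above), and the commands only print under the dry-run guard:

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Placeholder ARN: substitute your real region and account id.
COMPONENT_ARN="arn:aws:greengrass:us-east-1:111122223333:components:devmgmt.device.scheduler"

# Does version 3.0.61 exist in the account/region the core deploys from?
run aws greengrassv2 list-component-versions --arn "$COMPONENT_ARN"

# Inspect the generated recipe for that version; its artifacts section should
# reference the imported Lambda.
run aws greengrassv2 get-component --arn "${COMPONENT_ARN}:versions:3.0.61"
```

If the version is missing from the region the core is configured for, the `NoAvailableComponentVersionException` above matches exactly that situation.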
1 answer · 0 votes · 20 views · asked 15 days ago

Deploy YOLOv5 in sagemaker - ModelError: InvokeEndpoint operation: Received server error (0)

I'm trying to deploy a custom-trained YOLOv5 model in SageMaker for inference. (Note: the model was not trained in SageMaker.) I followed this doc for deploying the model and the inference script: [SageMaker docs](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#bring-your-own-model)

```
ModelError Traceback (most recent call last)
<ipython-input-7-063ca701eab7> in <module>
----> 1 result1=predictor.predict("FILE0032.JPG")
      2 print(result1)

~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/predictor.py in predict(self, data, initial_args, target_model, target_variant, inference_id)
    159     data, initial_args, target_model, target_variant, inference_id
    160 )
--> 161 response = self.sagemaker_session.sagemaker_runtime_client.invoke_endpoint(**request_args)
    162 return self._handle_response(response)
    163

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
    399     "%s() only accepts keyword arguments." % py_operation_name)
    400 # The "self" in this scope is referring to the BaseClient.
--> 401 return self._make_api_call(operation_name, kwargs)
    402
    403 _api_call.__name__ = str(py_operation_name)

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
    729 error_code = parsed_response.get("Error", {}).get("Code")
    730 error_class = self.exceptions.from_code(error_code)
--> 731 raise error_class(parsed_response, operation_name)
    732 else:
    733 return parsed_response

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (0) from primary with message "Your invocation timed out while waiting for a response from container primary. Review the latency metrics for each container in Amazon CloudWatch, resolve the issue, and try again.".
See https://ap-south-1.console.aws.amazon.com/cloudwatch/home?region=ap-south-1#logEventViewer:group=/aws/sagemaker/Endpoints/pytorch-inference-2022-06-14-11-58-04-086 in account 772044684908 for more information.
```

After researching `InvokeEndpoint`, I tried this:

```
import boto3

sagemaker_runtime = boto3.client("sagemaker-runtime", region_name='ap-south-1')
endpoint_name = 'pytorch-inference-2022-06-14-11-58-04-086'

response = sagemaker_runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=bytes('{"features": ["This is great!"]}', 'utf-8')  # Replace with your own data.
)
print(response['Body'].read().decode('utf-8'))
```

But this didn't help either. Detailed output:

```
ReadTimeoutError Traceback (most recent call last)
<ipython-input-8-b5ca204734c4> in <module>
     12 response = sagemaker_runtime.invoke_endpoint(
     13     EndpointName=endpoint_name,
---> 14     Body=bytes('{"features": ["This is great!"]}', 'utf-8') # Replace with your own data.
     15 )
     16

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
    399     "%s() only accepts keyword arguments." % py_operation_name)
    400 # The "self" in this scope is referring to the BaseClient.
--> 401 return self._make_api_call(operation_name, kwargs)
    402
    403 _api_call.__name__ = str(py_operation_name)

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
    716 apply_request_checksum(request_dict)
    717 http, parsed_response = self._make_request(
--> 718     operation_model, request_dict, request_context)
    719
    720 self.meta.events.emit(

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/client.py in _make_request(self, operation_model, request_dict, request_context)
    735 def _make_request(self, operation_model, request_dict, request_context):
    736     try:
--> 737         return self._endpoint.make_request(operation_model, request_dict)
    738     except Exception as e:
    739         self.meta.events.emit(

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/endpoint.py in make_request(self, operation_model, request_dict)
    105 logger.debug("Making request for %s with params: %s",
    106              operation_model, request_dict)
--> 107 return self._send_request(request_dict, operation_model)
    108
    109 def create_request(self, params, operation_model=None):

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/endpoint.py in _send_request(self, request_dict, operation_model)
    182     request, operation_model, context)
    183 while self._needs_retry(attempts, operation_model, request_dict,
--> 184                         success_response, exception):
    185     attempts += 1
    186     self._update_retries_context(

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/endpoint.py in _needs_retry(self, attempts, operation_model, request_dict, response, caught_exception)
    306     event_name, response=response, endpoint=self,
    307     operation=operation_model, attempts=attempts,
--> 308     caught_exception=caught_exception, request_dict=request_dict)
    309 handler_response = first_non_none_response(responses)
    310 if handler_response is None:

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/hooks.py in emit(self, event_name, **kwargs)
    356 def emit(self, event_name, **kwargs):
    357     aliased_event_name = self._alias_event_name(event_name)
--> 358     return self._emitter.emit(aliased_event_name, **kwargs)
    359
    360 def emit_until_response(self, event_name, **kwargs):

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/hooks.py in emit(self, event_name, **kwargs)
    227     handlers.
    228     """
--> 229     return self._emit(event_name, kwargs)
    230
    231 def emit_until_response(self, event_name, **kwargs):

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/hooks.py in _emit(self, event_name, kwargs, stop_on_response)
    210 for handler in handlers_to_call:
    211     logger.debug('Event %s: calling handler %s', event_name, handler)
--> 212     response = handler(**kwargs)
    213     responses.append((handler, response))
    214     if stop_on_response and response is not None:

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/retryhandler.py in __call__(self, attempts, response, caught_exception, **kwargs)
    192 checker_kwargs.update({'retries_context': retries_context})
    193
--> 194 if self._checker(**checker_kwargs):
    195     result = self._action(attempts=attempts)
    196     logger.debug("Retry needed, action of: %s", result)

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/retryhandler.py in __call__(self, attempt_number, response, caught_exception, retries_context)
    266
    267 should_retry = self._should_retry(attempt_number, response,
--> 268                                   caught_exception)
    269 if should_retry:
    270     if attempt_number >= self._max_attempts:

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/retryhandler.py in _should_retry(self, attempt_number, response, caught_exception)
    292 # If we've exceeded the max attempts we just let the exception
    293 # propogate if one has occurred.
--> 294 return self._checker(attempt_number, response, caught_exception)
    295
    296

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/retryhandler.py in __call__(self, attempt_number, response, caught_exception)
    332 for checker in self._checkers:
    333     checker_response = checker(attempt_number, response,
--> 334                                caught_exception)
    335     if checker_response:
    336         return checker_response

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/retryhandler.py in __call__(self, attempt_number, response, caught_exception)
    232 elif caught_exception is not None:
    233     return self._check_caught_exception(
--> 234         attempt_number, caught_exception)
    235 else:
    236     raise ValueError("Both response and caught_exception are None.")

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/retryhandler.py in _check_caught_exception(self, attempt_number, caught_exception)
    374 # the MaxAttemptsDecorator is not interested in retrying the exception
    375 # then this exception just propogates out past the retry code.
--> 376 raise caught_exception

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/endpoint.py in _do_get_response(self, request, operation_model, context)
    247     http_response = first_non_none_response(responses)
    248     if http_response is None:
--> 249         http_response = self._send(request)
    250 except HTTPClientError as e:
    251     return (None, e)

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/endpoint.py in _send(self, request)
    319
    320 def _send(self, request):
--> 321     return self.http_session.send(request)
    322
    323

~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/httpsession.py in send(self, request)
    449     raise ConnectTimeoutError(endpoint_url=request.url, error=e)
    450 except URLLib3ReadTimeoutError as e:
--> 451     raise ReadTimeoutError(endpoint_url=request.url, error=e)
    452 except ProtocolError as e:
    453     raise ConnectionClosedError(

ReadTimeoutError: Read timeout on endpoint URL: "https://runtime.sagemaker.ap-south-1.amazonaws.com/endpoints/pytorch-inference-2022-06-14-11-58-04-086/invocations"
```
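Two separate timeouts appear in the tracebacks above: the container exceeded SageMaker's server-side limit for responding to `InvokeEndpoint` (real-time invocations must complete within 60 seconds), and the client then hit its own default read timeout. The server-side cap cannot be raised, so the model itself has to respond faster (smaller input, faster instance type, or asynchronous inference for long-running jobs); the client-side timeout, however, is configurable. A hedged sketch of the client-side knob via the CLI, dry-run by default, using the endpoint name from the logs:

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# --cli-read-timeout 0 disables the client-side read timeout entirely
# (a positive number sets it in seconds). This only removes the client error;
# the endpoint container must still answer within SageMaker's 60 s cap.
run aws sagemaker-runtime invoke-endpoint \
    --endpoint-name pytorch-inference-2022-06-14-11-58-04-086 \
    --region ap-south-1 \
    --cli-read-timeout 0 \
    --content-type application/json \
    --cli-binary-format raw-in-base64-out \
    --body '{"features": ["This is great!"]}' \
    out.json
```

In boto3 the equivalent is passing `botocore.config.Config(read_timeout=...)` to the client constructor; the CloudWatch log group named in the error is the place to see why the container itself is slow.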
2 answers · 0 votes · 22 views · asked 15 days ago

AWS Assume Role via .Net SDK gives Access Denied but works with CLI

I am trying to upload a file to S3 via AWS Assume Role. When I access it from the CLI it works fine, but from the .NET SDK it gives me an Access Denied error.

Here are the steps I followed in the CLI:

1. Set up the access key/secret key for the user using **aws configure**
2. Assume the role: **aws sts assume-role --role-arn "arn:aws:iam::1010101010:role/Test-Account-Role" --role-session-name AWSCLI-Session**
3. Take the access key / secret key / session token from the assumed role and set up an AWS profile. The credentials are printed out/returned from the assumed role.
4. Switch to the assume-role profile: **set AWS_PROFILE=<TempRole>**
5. Verify that the user has the role: **aws sts get-caller-identity**
6. Access the bucket using the ls, cp, or rm command: **works successfully.**

Now I am trying to access it from a .NET Core app. Here is the code snippet. Note that I am using the same access and secret key as the CLI from my local machine.

```
try
{
    var region = RegionEndpoint.GetBySystemName(awsRegion);
    SessionAWSCredentials tempCredentials = await GetTemporaryCredentialsAsync(awsAccessKey, awsSecretKey, region, roleARN);

    // Use the temp credentials received to create the new client
    IAmazonS3 client = new AmazonS3Client(tempCredentials, region);
    TransferUtility utility = new TransferUtility(client);

    // making a TransferUtilityUploadRequest instance
    TransferUtilityUploadRequest request = new TransferUtilityUploadRequest
    {
        BucketName = bucketName,
        Key = $"{subFolder}/{fileName}",
        FilePath = localFilePath
    };
    utility.Upload(request); // transfer
    fileUploadedSuccessfully = true;
}
catch (AmazonS3Exception ex)
{
    // HandleException
}
catch (Exception ex)
{
    // HandleException
}
```

The method to get temp credentials is as follows:

```
private static async Task<SessionAWSCredentials> GetTemporaryCredentialsAsync(string awsAccessKey, string awsSecretKey, RegionEndpoint region, string roleARN)
{
    using (var stsClient = new AmazonSecurityTokenServiceClient(awsAccessKey, awsSecretKey, region))
    {
        var getSessionTokenRequest = new GetSessionTokenRequest { DurationSeconds = 7200 };

        await stsClient.AssumeRoleAsync(new AssumeRoleRequest()
        {
            RoleArn = roleARN,
            RoleSessionName = "mySession"
        });

        GetSessionTokenResponse sessionTokenResponse = await stsClient.GetSessionTokenAsync(getSessionTokenRequest);
        Credentials credentials = sessionTokenResponse.Credentials;

        var sessionCredentials = new SessionAWSCredentials(
            credentials.AccessKeyId,
            credentials.SecretAccessKey,
            credentials.SessionToken);
        return sessionCredentials;
    }
}
```

I am getting back the temp credentials, but it gives me Access Denied while uploading the file. Not sure if I am missing anything here. I also noted that the token generated via the SDK is shorter than the one from the CLI. I tried pasting these temp credentials into a local profile and then tried to access the bucket, and got the Access Denied error then too.
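One detail worth comparing against the CLI flow that works: in the snippet above, the `AssumeRoleAsync` response is discarded, and the credentials that end up in `SessionAWSCredentials` come from `GetSessionToken`, which carries the IAM user's own permissions rather than the role's. The CLI steps that succeed use the `Credentials` block returned by `assume-role` itself. A condensed dry-run sketch of that working flow (the bucket name is a placeholder; commands print by default):

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# 1. Assume the role; the Credentials block of the response is what matters.
run aws sts assume-role \
    --role-arn "arn:aws:iam::1010101010:role/Test-Account-Role" \
    --role-session-name AWSCLI-Session

# 2. Export AccessKeyId / SecretAccessKey / SessionToken from that response
#    (placeholders shown), after which every call runs as the role:
# export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... AWS_SESSION_TOKEN=...
run aws sts get-caller-identity           # should report the assumed role
run aws s3 cp ./file.txt s3://my-bucket/  # placeholder bucket name
```

In the SDK, the analogous change would be building the session credentials from the `AssumeRoleResponse.Credentials` values instead of the `GetSessionToken` ones, which would also explain the shorter token observed: session tokens from the two operations differ in length.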
0 answers · 0 votes · 23 views · asked 16 days ago

Cannot run GUI/OpenGL on headless macOS EC2 Instance

Hello, I am currently trying to use EC2 Mac instances to run a CI/CD pipeline which involves running tests with Electron/Selenium. In order to run these tests, OpenGL needs to be available. I'm currently getting the error on line 49 of https://chromium.googlesource.com/chromium/src/+/8f066ff5113bd9d348f0aaf7ac6adc1ca1d1cd31/ui/gl/init/gl_initializer_mac.cc, with the output on the instance giving:

```
2022-06-09 19:38:25.937 Electron[52243:188559] +[NSXPCSharedListener endpointForReply:withListenerName:]: an error occurred while attempting to obtain endpoint for listener 'ClientCallsAuxiliary': Connection interrupted
[52245:0609/193826.555969:ERROR:gl_initializer_mac.cc(65)] Error choosing pixel format.
[52245:0609/193826.556035:ERROR:gl_initializer_mac.cc(193)] GLSurfaceCGL::InitializeOneOff failed.
[52245:0609/193826.664827:ERROR:viz_main_impl.cc(188)] Exiting GPU process due to errors during initialization
```

The root cause is that no display is connected to the mac1 bare-metal dedicated host. It seems the workaround is either using a dummy plug to fake that a display is connected, or connecting to the instance via VNC with the following commands:

**On the EC2 instance**

```
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
  -activate -configure -access -on \
  -configure -allowAccessFor -specifiedUsers \
  -configure -users ec2-user \
  -configure -restart -agent -privs -all
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
  -configure -access -on -privs -all -users ec2-user
```

**On the local MacBook**

```
ssh -L 5900:localhost:5900 -C -N -i <your private key.pem> ec2-user@<your public ip address>
open vnc://localhost
```

After establishing the connection over Screen Sharing, I no longer get the OpenGL issues and the run succeeds. Unfortunately, this is not a workaround for my use case, as I will need to restart/reboot these instances after each run. I have tested this multiple times, and after rebooting the instance the display is no longer present. (I have verified the displays being recognized / not being recognized with `displayplacer list`.)

Some more background: this is an issue on the latest AWS Monterey and Big Sur AMIs. Is there any way to make the mac1 mini dedicated host think that there is a display plugged into it, or trick it into thinking there is one via software? I need a solution that can be implemented via a script, so setting up something like https://github.com/waydabber/BetterDummy does not work for me. GitHub seems to have solved this for their self-hosted GitHub Actions runners, so I am curious why AWS doesn't seem to support it. Should this not be a common use case for an EC2 Mac instance?
1 answer · 0 votes · 33 views · asked 20 days ago

Beginner cannot get root access to the AWS Command Line Interface

Hello Forum. **Question:** *Can someone show me how to get root user access to the AWS Command Line Interface?*

I am stuck on Step 3 (Module 3: Setting Up the AWS CLI) of the getting started guide found here: https://aws.amazon.com/getting-started/guides/setup-environment/

Here are the steps I have taken:

Module 1: Create Your Account. I created an account a few years ago but it has been dormant. I just today decided to learn Amazon AWS.

Module 2: Secure Your Account. I went through the steps to secure my account. I secured the root user account. I created an IAM user account for myself. There are now two accounts, the root user (myself) and the user account, also for myself.

Module 3: Setting Up the AWS CLI. I downloaded and installed the AWS Command Line Interface. I accessed the AWS CLI from the C prompt on my computer. I configured the credentials to access my AWS account. Here is the problem: I configured the credentials using the *IAM user account* and not the root user account.

```
/* I mistakenly used the credentials for the user instead of the root user below */
aws configure
AWS Access Key ID [None]: ANOTREALACCESSKEYID
AWS Secret Access Key [None]: ANOTREALSECRETACCESSKEY
Default region name [None]: eu-west-1
Default output format [None]: json
```

Now the AWS CLI will not let me into the root user area. It tells me that I do not have permissions for access to that area. I am a beginner. All of this is kind of confusing. I would appreciate any assistance with this matter anyone can provide. Thank you in advance.
1
answers
0
votes
37
views
asked 22 days ago

How do I update a CloudWatch Synthetics Canary layer version number using the AWS CLI?

Hello, I created a CloudWatch Synthetics canary via a console blueprint. I want to update the active layer version ("39", the final component of the `SourceLocationArn` below) using the AWS CLI. I see it in the `Code["SourceLocationArn"]` attribute of my `describe-canary` response. The [update-canary operation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/synthetics/update-canary.html) doesn't have a `SourceLocationArn` option for the `--code` value. How do I update the layer version number using the AWS CLI? Thank you
```
{
    "Id": "aae1dc97-773b-47bb-ae72-d0bb54eec60c",
    "Name": "daily-wisdom-texts",
    "Code": {
        "SourceLocationArn": "arn:aws:lambda:us-east-1:875425895862:layer:cwsyn-daily-wisdom-texts-aae1dc97-773b-47bb-ae72-d0bb54eec60c:39",
        "Handler": "pageLoadBlueprint.handler"
    },
    "ExecutionRoleArn": "arn:aws:iam::875425895862:role/service-role/CloudWatchSyntheticsRole-daily-wisdom-texts-cf9-de04b4eb2bb3",
    "Schedule": {
        "Expression": "rate(1 hour)",
        "DurationInSeconds": 0
    },
    "RunConfig": {
        "TimeoutInSeconds": 60,
        "MemoryInMB": 1000,
        "ActiveTracing": false
    },
    "SuccessRetentionPeriodInDays": 31,
    "FailureRetentionPeriodInDays": 31,
    "Status": {
        "State": "RUNNING",
        "StateReasonCode": "UPDATE_COMPLETE"
    },
    "Timeline": {
        "Created": 1641343713.153,
        "LastModified": 1652481443.745,
        "LastStarted": 1652481444.83,
        "LastStopped": 1641675597.0
    },
    "ArtifactS3Location": "cw-syn-results-875425895862-us-east-1/canary/us-east-1/daily-wisdom-texts-cf9-de04b4eb2bb3",
    "EngineArn": "arn:aws:lambda:us-east-1:875425895862:function:cwsyn-daily-wisdom-texts-aae1dc97-773b-47bb-ae72-d0bb54eec60c:57",
    "RuntimeVersion": "syn-python-selenium-1.0",
    "Tags": {
        "blueprint": "heartbeat"
    }
}
```
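A side note on reading that response: the active layer version is simply the last colon-separated component of the `SourceLocationArn`, which can be confirmed by splitting the ARN from the question:

```python
# Split the layer ARN from the describe-canary output above; the final
# colon-separated component is the layer version ("39" here).
arn = ("arn:aws:lambda:us-east-1:875425895862:layer:"
       "cwsyn-daily-wisdom-texts-aae1dc97-773b-47bb-ae72-d0bb54eec60c:39")
layer, version = arn.rsplit(":", 1)
print(version)  # -> 39
```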
1
answers
0
votes
12
views
asked 23 days ago

AWS CLI is not working

Hello Team, the AWS CLI is not working on my on-premises server. It returns a timeout error. `aws ecr get-login-password --region ap-southeast-1 --debug` 2022-06-06 08:55:48,440 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/2.2.12 Python/3.8.8 Linux/4.15.0-180-generic exe/x86_64.ubuntu.18 2022-06-06 08:55:48,441 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['ecr', 'get-login-password', '--region', 'ap-southeast-1', '--debug'] 2022-06-06 08:55:48,456 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_s3 at 0x7f1fda03a700> 2022-06-06 08:55:48,456 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_ddb at 0x7f1fda1f81f0> 2022-06-06 08:55:48,456 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method BasicCommand.add_command of <class 'awscli.customizations.configure.configure.ConfigureCommand'>> 2022-06-06 08:55:48,456 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function change_name at 0x7f1fda21fb80> 2022-06-06 08:55:48,456 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function change_name at 0x7f1fda2279d0> 2022-06-06 08:55:48,456 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function alias_opsworks_cm at 0x7f1fda04c160> 2022-06-06 08:55:48,456 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_history_commands at 0x7f1fda1be040> 2022-06-06 08:55:48,457 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method BasicCommand.add_command of <class 'awscli.customizations.devcommands.CLIDevCommand'>> 2022-06-06 08:55:48,457 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_waiters at
0x7f1fda0423a0> 2022-06-06 08:55:48,457 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/aws-cli/v2/2.2.12/dist/awscli/data/cli.json 2022-06-06 08:55:48,461 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_types at 0x7f1fda0f4280> 2022-06-06 08:55:48,461 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function no_sign_request at 0x7f1fda0f4dc0> 2022-06-06 08:55:48,461 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_verify_ssl at 0x7f1fda0f4d30> 2022-06-06 08:55:48,461 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_cli_read_timeout at 0x7f1fda0f4ee0> 2022-06-06 08:55:48,461 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_cli_connect_timeout at 0x7f1fda0f4e50> 2022-06-06 08:55:48,462 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <built-in method update of dict object at 0x7f1fd9f62840> 2022-06-06 08:55:48,462 - MainThread - botocore.session - DEBUG - Setting config variable for region to 'ap-southeast-1' 2022-06-06 08:55:48,464 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/2.2.12 Python/3.8.8 Linux/4.15.0-180-generic exe/x86_64.ubuntu.18 prompt/off 2022-06-06 08:55:48,464 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['ecr', 'get-login-password', '--region', 'ap-southeast-1', '--debug'] 2022-06-06 08:55:48,465 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_timestamp_parser at 0x7f1fda03ad30> 2022-06-06 08:55:48,465 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function register_uri_param_handler at 0x7f1fdaadcca0> 2022-06-06 08:55:48,465 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function 
add_binary_formatter at 0x7f1fd9fa8c10> 2022-06-06 08:55:48,465 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function no_pager_handler at 0x7f1fdaada160> 2022-06-06 08:55:48,465 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_assume_role_provider_cache at 0x7f1fdaa3d940> 2022-06-06 08:55:48,466 - MainThread - botocore.utils - DEBUG - IMDS ENDPOINT: http://169.254.169.254/ 2022-06-06 08:55:48,469 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function attach_history_handler at 0x7f1fda1c0ee0> 2022-06-06 08:55:48,469 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_json_file_cache at 0x7f1fda1f7040> 2022-06-06 08:55:48,483 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/aws-cli/v2/2.2.12/dist/botocore/data/ecr/2015-09-21/service-2.json 2022-06-06 08:55:48,490 - MainThread - botocore.hooks - DEBUG - Event building-command-table.ecr: calling handler <function _inject_commands at 0x7f1fda1d3670> 2022-06-06 08:55:48,490 - MainThread - botocore.hooks - DEBUG - Event building-command-table.ecr: calling handler <function add_waiters at 0x7f1fda0423a0> 2022-06-06 08:55:48,503 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/aws-cli/v2/2.2.12/dist/botocore/data/ecr/2015-09-21/waiters-2.json 2022-06-06 08:55:48,505 - MainThread - botocore.hooks - DEBUG - Event building-command-table.ecr_get-login-password: calling handler <function add_waiters at 0x7f1fda0423a0> 2022-06-06 08:55:48,506 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: env 2022-06-06 08:55:48,506 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role 2022-06-06 08:55:48,506 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role-with-web-identity 2022-06-06 08:55:48,506 - MainThread - 
botocore.credentials - DEBUG - Looking for credentials via: sso 2022-06-06 08:55:48,506 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: shared-credentials-file 2022-06-06 08:55:48,507 - MainThread - botocore.credentials - INFO - Found credentials in shared credentials file: ~/.aws/credentials 2022-06-06 08:55:48,508 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/aws-cli/v2/2.2.12/dist/botocore/data/endpoints.json 2022-06-06 08:55:48,519 - MainThread - botocore.hooks - DEBUG - Event choose-service-name: calling handler <function handle_service_name_alias at 0x7f1fdc3b1550> 2022-06-06 08:55:48,521 - MainThread - botocore.hooks - DEBUG - Event creating-client-class.ecr: calling handler <function add_generate_presigned_url at 0x7f1fdc3dc790> 2022-06-06 08:55:48,525 - MainThread - botocore.endpoint - DEBUG - Setting api.ecr timeout as (60, 60) 2022-06-06 08:55:48,526 - MainThread - botocore.hooks - DEBUG - Event provide-client-params.ecr.GetAuthorizationToken: calling handler <function base64_decode_input_blobs at 0x7f1fd9faa3a0> 2022-06-06 08:55:48,526 - MainThread - botocore.hooks - DEBUG - Event before-parameter-build.ecr.GetAuthorizationToken: calling handler <function generate_idempotent_uuid at 0x7f1fdc3d25e0> 2022-06-06 08:55:48,527 - MainThread - botocore.hooks - DEBUG - Event before-call.ecr.GetAuthorizationToken: calling handler <function inject_api_version_header_if_needed at 0x7f1fdc358e50> 2022-06-06 08:55:48,527 - MainThread - botocore.endpoint - DEBUG - Making request for OperationModel(name=GetAuthorizationToken) with params: {'url_path': '/', 'query_string': '', 'method': 'POST', 'headers': {'X-Amz-Target': 'AmazonEC2ContainerRegistry_V20150921.GetAuthorizationToken', 'Content-Type': 'application/x-amz-json-1.1', 'User-Agent': 'aws-cli/2.2.12 Python/3.8.8 Linux/4.15.0-180-generic exe/x86_64.ubuntu.18 prompt/off command/ecr.get-login-password'}, 'body': b'{}', 'url': 
'https://api.ecr.ap-southeast-1.amazonaws.com/', 'context': {'client_region': 'ap-southeast-1', 'client_config': <botocore.config.Config object at 0x7f1fd93e8100>, 'has_streaming_input': False, 'auth_type': None}} 2022-06-06 08:55:48,527 - MainThread - botocore.hooks - DEBUG - Event request-created.ecr.GetAuthorizationToken: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f1fd93e80a0>> 2022-06-06 08:55:48,527 - MainThread - botocore.hooks - DEBUG - Event choose-signer.ecr.GetAuthorizationToken: calling handler <function set_operation_specific_signer at 0x7f1fdc3d24c0> 2022-06-06 08:55:48,528 - MainThread - botocore.auth - DEBUG - Calculating signature using v4 auth. 2022-06-06 08:55:48,528 - MainThread - botocore.auth - DEBUG - CanonicalRequest: POST / content-type:application/x-amz-json-1.1 host:api.ecr.ap-southeast-1.amazonaws.com x-amz-date:20220606T085548Z x-amz-target:AmazonEC2ContainerRegistry_V20150921.GetAuthorizationToken content-type;host;x-amz-date;x-amz-target 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a 2022-06-06 08:55:48,528 - MainThread - botocore.auth - DEBUG - StringToSign: AWS4-HMAC-SHA256 20220606T085548Z 20220606/ap-southeast-1/ecr/aws4_request 1f2f49d5f7d087cac11cfef2086b4ed2ecf6e23c6c77f82bae38e1b614565742 2022-06-06 08:55:48,528 - MainThread - botocore.auth - DEBUG - Signature: a1505f59c0deda052fb1f3317ff3874f7a2e066e226c60d2cae67f25fb83b335 2022-06-06 08:55:48,529 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=POST, url=https://api.ecr.ap-southeast-1.amazonaws.com/, headers={'X-Amz-Target': b'AmazonEC2ContainerRegistry_V20150921.GetAuthorizationToken', 'Content-Type': b'application/x-amz-json-1.1', 'User-Agent': b'aws-cli/2.2.12 Python/3.8.8 Linux/4.15.0-180-generic exe/x86_64.ubuntu.18 prompt/off command/ecr.get-login-password', 'X-Amz-Date': b'20220606T085548Z', 'Authorization': b'AWS4-HMAC-SHA256 
Credential=AKIA5GDQUDHCSVAVNPXV/20220606/ap-southeast-1/ecr/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=a1505f59c0deda052fb1f3317ff3874f7a2e066e226c60d2cae67f25fb83b335', 'Content-Length': '2'}> 2022-06-06 08:55:48,530 - MainThread - botocore.httpsession - DEBUG - Certificate path: /usr/local/aws-cli/v2/2.2.12/dist/botocore/cacert.pem 2022-06-06 08:55:48,530 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): api.ecr.ap-southeast-1.amazonaws.com:443 2022-06-06 08:56:48,601 - MainThread - botocore.hooks - DEBUG - Event needs-retry.ecr.GetAuthorizationToken: calling handler <bound method RetryHandler.needs_retry of <botocore.retries.standard.RetryHandler object at 0x7f1fd93e8b50>> 2022-06-06 08:56:48,601 - MainThread - botocore.retries.standard - DEBUG - Retry needed, retrying request after delay of: 0.28902638812913395 2022-06-06 08:56:48,601 - MainThread - botocore.endpoint - DEBUG - Response received to retry, sleeping for 0.28902638812913395 seconds 2022-06-06 08:56:48,891 - MainThread - botocore.hooks - DEBUG - Event request-created.ecr.GetAuthorizationToken: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f1fd93e80a0>> 2022-06-06 08:56:48,891 - MainThread - botocore.hooks - DEBUG - Event choose-signer.ecr.GetAuthorizationToken: calling handler <function set_operation_specific_signer at 0x7f1fdc3d24c0> 2022-06-06 08:56:48,892 - MainThread - botocore.auth - DEBUG - Calculating signature using v4 auth. 
2022-06-06 08:56:48,892 - MainThread - botocore.auth - DEBUG - CanonicalRequest: content-type:application/x-amz-json-1.1 host:api.ecr.ap-southeast-1.amazonaws.com x-amz-date:20220606T085648Z x-amz-target:AmazonEC2ContainerRegistry_V20150921.GetAuthorizationToken content-type;host;x-amz-date;x-amz-target 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a 2022-06-06 08:56:48,892 - MainThread - botocore.auth - DEBUG - StringToSign: AWS4-HMAC-SHA256 20220606T085648Z 20220606/ap-southeast-1/ecr/aws4_request 0cab3afc3bb3cc3be88edab93ed85a3540cadaa6292cc6f507acc56ca5e2a408 2022-06-06 08:56:48,893 - MainThread - botocore.auth - DEBUG - Signature: 4530c08225e137c535551883cedac92992b95c6e613762df7b43bf588f782339 2022-06-06 08:56:48,893 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=POST, url=https://api.ecr.ap-southeast-1.amazonaws.com/, headers={'X-Amz-Target': b'AmazonEC2ContainerRegistry_V20150921.GetAuthorizationToken', 'Content-Type': b'application/x-amz-json-1.1', 'User-Agent': b'aws-cli/2.2.12 Python/3.8.8 Linux/4.15.0-180-generic exe/x86_64.ubuntu.18 prompt/off command/ecr.get-login-password', 'X-Amz-Date': b'20220606T085648Z', 'Authorization': b'AWS4-HMAC-SHA256 Credential=AKIA5GDQUDHCSVAVNPXV/20220606/ap-southeast-1/ecr/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=4530c08225e137c535551883cedac92992b95c6e613762df7b43bf588f782339', 'Content-Length': '2'}> 2022-06-06 08:56:48,893 - MainThread - botocore.httpsession - DEBUG - Certificate path: /usr/local/aws-cli/v2/2.2.12/dist/botocore/cacert.pem 2022-06-06 08:56:48,893 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (2): api.ecr.ap-southeast-1.amazonaws.com:443 2022-06-06 08:57:48,986 - MainThread - botocore.hooks - DEBUG - Event needs-retry.ecr.GetAuthorizationToken: calling handler <bound method RetryHandler.needs_retry of <botocore.retries.standard.RetryHandler object at 
0x7f1fd93e8b50>> 2022-06-06 08:57:48,987 - MainThread - botocore.retries.standard - DEBUG - Retry needed, retrying request after delay of: 0.7103189856020138 2022-06-06 08:57:48,987 - MainThread - botocore.endpoint - DEBUG - Response received to retry, sleeping for 0.7103189856020138 seconds 2022-06-06 08:57:49,698 - MainThread - botocore.hooks - DEBUG - Event request-created.ecr.GetAuthorizationToken: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f1fd93e80a0>> 2022-06-06 08:57:49,698 - MainThread - botocore.hooks - DEBUG - Event choose-signer.ecr.GetAuthorizationToken: calling handler <function set_operation_specific_signer at 0x7f1fdc3d24c0> 2022-06-06 08:57:49,699 - MainThread - botocore.auth - DEBUG - Calculating signature using v4 auth. 2022-06-06 08:57:49,699 - MainThread - botocore.auth - DEBUG - CanonicalRequest: POST / content-type:application/x-amz-json-1.1 host:api.ecr.ap-southeast-1.amazonaws.com x-amz-date:20220606T085749Z x-amz-target:AmazonEC2ContainerRegistry_V20150921.GetAuthorizationToken content-type;host;x-amz-date;x-amz-target 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a 2022-06-06 08:57:49,699 - MainThread - botocore.auth - DEBUG - StringToSign: AWS4-HMAC-SHA256 20220606T085749Z 20220606/ap-southeast-1/ecr/aws4_request e2ad3c264cf4bfb6bfcdb527aa0b098785ee5dea45125abe58caef4704116648 2022-06-06 08:57:49,699 - MainThread - botocore.auth - DEBUG - Signature: ee528fda4b39c6419005854133
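One thing visible in the log itself: botocore set the api.ecr timeout to `(60, 60)`, and each attempt hangs for the full 60 seconds before a retry is scheduled, which points at dropped packets (a firewall or proxy silently discarding traffic to `api.ecr.ap-southeast-1.amazonaws.com:443`) rather than a refused connection. The gap can be checked from the log's own timestamps:

```python
from datetime import datetime

# Timestamps copied from the debug log above: the first request is sent at
# 08:55:48,530 and the needs-retry handler fires at 08:56:48,601.
FMT = "%Y-%m-%d %H:%M:%S,%f"
sent = datetime.strptime("2022-06-06 08:55:48,530", FMT)
retried = datetime.strptime("2022-06-06 08:56:48,601", FMT)
elapsed = (retried - sent).total_seconds()
print(round(elapsed))  # -> 60: the full connect timeout elapsed before retry
```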
1
answers
0
votes
28
views
asked 24 days ago

What is the suggested method to track a user's actions after assuming a cross-account role?

I need to be able to guarantee that a user's actions can always be traced back to their account, regardless of which role they have assumed in another account. What methods are required to guarantee this for: * Assuming a cross-account role in the console * Assuming a cross-account role via the CLI I have run tests and can see that when a user assumes a role in the CLI, temporary credentials are generated. These credentials are seen in CloudTrail logs under responseElements.credentials for the assumeRole event. All future events generated by actions taken in the session include the accessKeyId, and I can therefore track all of the actions in this case. Using the web console, the same assumeRole event is generated, also including an accessKeyId. Unfortunately, future actions taken by the user don't include the same accessKeyId. At some point a different access key is generated and the session makes use of this new key. I can't find any way to link the two, and therefore am not sure how to attribute actions taken by the role to the user that assumed it. I can see that when assuming a role in the console, the user can't change the sts:sessionName, and this is always set to their username. Is this the suggested method for tracking actions? Whilst this seems appropriate for roles within the same account, usernames are not globally unique, so I am concerned about using this for cross-account attribution. It seems placing restrictions on the value of sts:sourceIdentity is not supported when assuming roles in the web console.
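To illustrate the CLI case that does work: the key issued in the `AssumeRole` event's `responseElements.credentials` can be joined against the `userIdentity.accessKeyId` of later events. A sketch with hypothetical, heavily trimmed CloudTrail records:

```python
# Hypothetical, trimmed CloudTrail records: attribute later API calls to the
# AssumeRole event via the accessKeyId issued in responseElements.credentials.
events = [
    {"eventName": "AssumeRole",
     "responseElements": {"credentials": {"accessKeyId": "ASIAEXAMPLEKEY"}}},
    {"eventName": "DescribeInstances",
     "userIdentity": {"accessKeyId": "ASIAEXAMPLEKEY"}},
    {"eventName": "ListBuckets",
     "userIdentity": {"accessKeyId": "ASIAOTHERKEY"}},
]
issued = events[0]["responseElements"]["credentials"]["accessKeyId"]
attributed = [e["eventName"] for e in events[1:]
              if e.get("userIdentity", {}).get("accessKeyId") == issued]
print(attributed)  # -> ['DescribeInstances']
```

As the question notes, this join breaks for console sessions because the console mints a fresh key mid-session.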
1
answers
2
votes
71
views
asked a month ago

ClientError: An error occurred (UnknownOperationException) when calling the CreateHyperParameterTuningJob operation: The requested operation is not supported in the called region.

Hi Dears, I am building an ML model using the DeepAR algorithm. I hit this error when I reached this point:

Error: `ClientError: An error occurred (UnknownOperationException) when calling the CreateHyperParameterTuningJob operation: The requested operation is not supported in the called region.`

Code:

```
import sagemaker
from sagemaker.tuner import (
    IntegerParameter,
    CategoricalParameter,
    ContinuousParameter,
    HyperparameterTuner,
)
from sagemaker import image_uris

container = image_uris.retrieve(region='af-south-1', framework="forecasting-deepar")
deepar = sagemaker.estimator.Estimator(
    container,
    role,
    instance_count=1,
    instance_type="ml.m5.2xlarge",
    use_spot_instances=True,  # use spot instances
    max_run=1800,  # max training time in seconds
    max_wait=1800,  # seconds to wait for spot instance
    output_path="s3://{}/{}".format(bucket, output_path),
    sagemaker_session=sess,
)
freq = "D"
context_length = 300
deepar.set_hyperparameters(
    time_freq=freq, context_length=str(context_length), prediction_length=str(prediction_length)
)

hyperparameter_ranges = {
    "mini_batch_size": IntegerParameter(100, 400),
    "epochs": IntegerParameter(200, 400),
    "num_cells": IntegerParameter(30, 100),
    "likelihood": CategoricalParameter(["negative-binomial", "student-T"]),
    "learning_rate": ContinuousParameter(0.0001, 0.1),
}
objective_metric_name = "test:RMSE"
tuner = HyperparameterTuner(
    deepar,
    objective_metric_name,
    hyperparameter_ranges,
    max_jobs=10,
    strategy="Bayesian",
    objective_type="Minimize",
    max_parallel_jobs=10,
    early_stopping_type="Auto",
)
s3_input_train = sagemaker.inputs.TrainingInput(
    s3_data="s3://{}/{}/train/".format(bucket, prefix), content_type="json"
)
s3_input_test = sagemaker.inputs.TrainingInput(
    s3_data="s3://{}/{}/test/".format(bucket, prefix), content_type="json"
)
tuner.fit({"train": s3_input_train, "test": s3_input_test}, include_cls_metadata=False)
tuner.wait()
```

Can you please help in solving the error? I have to do this in the af-south-1 region. Thanks, Basem
1
answers
0
votes
15
views
asked a month ago

Possible CLI Bug: DynamoDB endpoint URL does not work locally with active and correct credentials set

**Summary**: DynamoDB commands from the CLI do not work when real credentials are set up. The `--endpoint-url` flag should work around this and recognize that localhost endpoints can be hit with no credentials, given the default setup of the AWS DynamoDB Docker image. Output of the command after setting credentials: `An error occurred (ResourceNotFoundException) when calling the DescribeTable operation: Cannot do operations on a non-existent table` Is there a fix or workaround for this? **System**: macOS Monterey version 12.0.1, MacBook Pro - M1 - 2020
```
aws --version -> aws-cli/2.4.11 Python/3.9.10 Darwin/21.1.0 source/arm64 prompt/off
```
**To reproduce**: -- Start from a terminal that does NOT have AWS credentials set up via environment variables or anything else. -- Start up a local DynamoDB instance on Docker:
```
docker pull amazon/dynamodb-local
docker run -p 8000:8000 --name=ddblocal -d amazon/dynamodb-local
```
-- Create a table:
```
aws dynamodb create-table --attribute-definitions "[{ \"AttributeName\": \"key\", \"AttributeType\": \"S\"}, { \"AttributeName\": \"valueA\", \"AttributeType\": \"S\"}]" --table-name test_table --key-schema "[{\"AttributeName\": \"key\", \"KeyType\": \"HASH\"}, {\"AttributeName\": \"valueA\", \"KeyType\": \"RANGE\"}]" --endpoint-url "http://localhost:8000" --provisioned-throughput "{\"ReadCapacityUnits\": 100, \"WriteCapacityUnits\": 100}" --region local
```
-- Query the table (to prove it works):
```
aws dynamodb describe-table --table-name test_table --region local --endpoint-url "http://localhost:8000"
```
-- Set your real AWS credentials:
```
export AWS_ACCESS_KEY_ID="<REAL KEY ID HERE>"
export AWS_SECRET_ACCESS_KEY="<REAL SECRET KEY HERE>"
export AWS_SESSION_TOKEN="<REAL TOKEN HERE>"
```
-- Query the table again (this one fails for me - see output above):
```
aws dynamodb describe-table --table-name test_table --region local --endpoint-url "http://localhost:8000"
```
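One detail worth knowing about DynamoDB Local: by default it namespaces tables by the caller's access key ID and region, so changing credentials makes previously created tables invisible rather than signaling a bug in `--endpoint-url`. DynamoDB Local documents a `-sharedDb` flag that makes every credential/region pair see the same single database. A configuration sketch of that container startup (no test run here, since it requires Docker):

```shell
# Override the image's default arguments so DynamoDB Local uses one shared
# database file regardless of which credentials the CLI presents.
docker run -p 8000:8000 --name=ddblocal -d amazon/dynamodb-local \
  -jar DynamoDBLocal.jar -sharedDb
```

With `-sharedDb`, the `describe-table` call in the reproduction steps should succeed both before and after exporting real credentials.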
1
answers
0
votes
39
views
asked 2 months ago

Help with copying an S3 bucket to another location: missing objects

Hello All, today I was trying to copy a directory from one location to another, using the following command: `aws s3 cp s3://bucketname/directory/ s3://bucketname/directory/subdirectory --recursive` The copy took overnight to complete because it was 16.4 TB in size, but when I got into work the next day it had completed. However, when I do a compare between the two locations I get the following: bucketname/directory/ 103,690 objects - 16.4 TB bucketname/directory/subdirectory/ 103,650 objects - 16.4 TB So there is a 40-object difference between the source location and the destination location. I tried using the following command to copy over the files that were missing: `aws s3 sync s3://bucketname/directory/ s3://bucket/directory/subdirectory/` which returned no results. It sat for a while, maybe 2 minutes or so, and then just returned to the next line. I am at my wits' end trying to copy the missing objects, and my boss thinks I lost the data, so I need to figure out a way to get the difference between the source and destination copied over. If anyone could help me with this, I would REALLY appreciate it. I am a newbie with AWS, so I may not understand everything I am told, but I will try anything to get this resolved. I am running all the commands from an EC2 instance that I ssh into, using AWS CLI commands. Thanks to anyone who might be able to help me. Take care, -Tired & Frustrated :)
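One way to find exactly which 40 objects are missing: list the keys on each side (for example with `aws s3api list-objects-v2 --bucket bucketname --query 'Contents[].Key'` and the appropriate `--prefix`, stripping the destination's extra prefix) and take the set difference. A sketch with hypothetical key names:

```python
# Hypothetical key listings for source and destination; the set difference is
# the list of objects that still need to be copied.
source_keys = {"directory/file1.dat", "directory/file2.dat", "directory/file3.dat"}
dest_keys = {"directory/file1.dat", "directory/file3.dat"}
missing = sorted(source_keys - dest_keys)
print(missing)  # -> ['directory/file2.dat']
```

Each missing key can then be copied individually with `aws s3 cp`.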
1
answers
0
votes
10
views
asked 2 months ago

Manual remediation config works, automatic remediation config fails

SOLVED! There was a syntax problem in the runbook that is not detected when remediating manually. In the content of the remediation document (created using CloudFormation), I used this parameter declaration:

```
parameters:
  InstanceID:
    type: 'AWS::EC2::Instance::Id'
```

It should be:

```
parameters:
  InstanceID:
    type: String
```

=====================================================================================

I have a remediation runbook that creates CloudWatch alarms for the metric 'CPUUtilization' for any EC2 instances that have none defined. The runbook is configured as a remediation document for a Config rule that checks for the absence of such alarms. When I configure the remediation on the rule as manual, all goes well. When I configure the remediation with the exact same runbook as automatic, the remediation fails with this error (snippet):

```
"StepDetails": [
    {
        "Name": "Initialization",
        "State": "FAILED",
        "ErrorMessage": "Invalid Automation document content for Create-CloudWatch-Alarm-EC2-CPUUtilization",
        "StartTime": "2022-05-09T17:30:02.361000+02:00",
        "StopTime": "2022-05-09T17:30:02.361000+02:00"
    }
],
```

This is the remediation configuration for the automatic remediation. The only difference from the manual remediation configuration is obviously that the key "Automatic" has the value "false" there:

```
{
    "RemediationConfigurations": [
        {
            "ConfigRuleName": "rul-ensure-cloudwatch-alarm-ec2-cpuutilization-exists",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "Create-CloudWatch-Alarm-EC2-CPUUtilization",
            "TargetVersion": "$DEFAULT",
            "Parameters": {
                "AutomationAssumeRole": {
                    "StaticValue": {
                        "Values": [
                            "arn:aws:iam::123456789012:role/rol_ssm_full_access_to_cloudwatch"
                        ]
                    }
                },
                "ComparisonOperator": {
                    "StaticValue": {
                        "Values": [
                            "GreaterThanThreshold"
                        ]
                    }
                },
                "InstanceID": {
                    "ResourceValue": {
                        "Value": "RESOURCE_ID"
                    }
                },
                "Period": {
                    "StaticValue": {
                        "Values": [
                            "300"
                        ]
                    }
                },
                "Statistic": {
                    "StaticValue": {
                        "Values": [
                            "Average"
                        ]
                    }
                },
                "Threshold": {
                    "StaticValue": {
                        "Values": [
                            "10"
                        ]
                    }
                }
            },
            "Automatic": true,
            "MaximumAutomaticAttempts": 5,
            "RetryAttemptSeconds": 60,
            "Arn": "arn:aws:config:eu-west-2:123456789012:remediation-configuration/rul-ensure-cloudwatch-alarm-ec2-cpuutilization-exists/5e3a81a7-fc55-4cbe-ad75-6b27be8da79a"
        }
    ]
}
```

The error message is rather cryptic, and I can't find documentation on possible root causes. Any suggestions would be very welcome! Thanks!
1
answers
0
votes
23
views
asked 2 months ago

S3: create a presigned multipart upload URL using the API

I'm trying to use the AWS S3 API to perform a multipart upload with signed URLs. This will allow us to send a request to the server (which is configured with the correct credentials) and then return a pre-signed URL to the client (which will not have credentials configured). The client should then be able to complete the request, computing subsequent signatures as appropriate. This appears to be possible per the AWS S3 documentation ("Signature Calculations for the Authorization Header: Transferring Payload in Multiple Chunks (Chunked Upload) (AWS Signature Version 4)"): https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html "As described in the Overview, when authenticating requests using the Authorization header, you have an option of uploading the payload in chunks. You can send data in fixed size or variable size chunks. This section describes the signature calculation process in chunked upload, how you create the chunk body, and how the delayed signing works where you first upload the chunk, and send its ..." The main caveat here is that it seems to need the Content-Length up front, but we won't know the value of that as we'll be streaming it. Is there a way for us to use signed URLs to do a multipart upload without knowing the length of the blob to be uploaded beforehand?
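For context on what "computing subsequent signatures" involves on the client side: each chunk signature in the chunked-upload scheme is an HMAC-SHA256 over a string-to-sign, keyed by the SigV4 signing key derived from the secret key, date, region, and service. A minimal sketch of that derivation (placeholder values, standard library only):

```python
import hashlib
import hmac

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """SigV4 key derivation: HMAC chain over date, region, service, 'aws4_request'."""
    key = ("AWS4" + secret_key).encode()
    for piece in (date, region, service, "aws4_request"):
        key = hmac.new(key, piece.encode(), hashlib.sha256).digest()
    return key

# Placeholder inputs, not real credentials.
signing_key = derive_signing_key("EXAMPLESECRETKEY", "20220101", "us-east-1", "s3")
print(len(signing_key))  # HMAC-SHA256 output is 32 bytes
```

A chunk's signature is then the hex HMAC-SHA256 of its string-to-sign under this key, which is the crux of the design question above: a client that computes chunk signatures itself effectively needs this key (or a server willing to sign each chunk), unlike a plain presigned single-part PUT.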
0
answers
0
votes
7
views
asked 2 months ago

Error when running the vsock_sample AWS Nitro Enclaves tutorial

I have configured and built the enclave instance per https://docs.aws.amazon.com/enclaves/latest/user/enclaves-user.pdf, but when I try to run it, it throws the following error:
```
$ nitro-cli run-enclave --eif-path vsock_sample.eif --cpu-count 2 --enclave-cid 6 --memory 512 --debug-mode
Start allocating memory...
Started enclave with enclave-cid: 6, memory: 512 MiB, cpu-ids: [1, 5]
[ E36 ] Enclave boot failure. Such error appears when attempting to receive the `ready` signal from a freshly booted enclave. It arises in several contexts, for instance, when the enclave is booted from an invalid EIF file and the enclave process immediately exits, failing to submit the `ready` signal. In this case, the error backtrace provides detailed information on what specifically failed during the enclave boot process.

For more details, please visit https://docs.aws.amazon.com/enclaves/latest/user/cli-errors.html#E36

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2022-04-27T03:41:39.495653281+00:00.log"

Failed connections: 1

[ E39 ] Enclave process connection failure. Such error appears when the enclave manager fails to connect to at least one enclave process for retrieving the description information.

For more details, please visit https://docs.aws.amazon.com/enclaves/latest/user/cli-errors.html#E39

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2022-04-27T03:41:39.495889864+00:00.log"

Action: Run Enclave
Subactions:
    Failed to handle all enclave process replies
    Failed to connect to 1 enclave processes
Root error file: src/enclave_proc_comm.rs
Root error line: 349
Build commit: not available
```
How can I fix this error?
0
answers
0
votes
3
views
asked 2 months ago

"aws s3 cp" command gives inconsistent results

I am using the following command to download files from S3 to my local server: `aws s3 cp s3://bucket-name/dir-name/ . --recursive --debug` Sometimes the files download successfully; if I run the same command a few times, sometimes I get an error. With the --debug flag, this is the output: GET / encoding-type=url&list-type=2&prefix=2022-04-18%2F host:glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20220423T041824Z host;x-amz-content-sha256;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2022-04-22 22:18:24,445 - MainThread - botocore.auth - DEBUG - StringToSign: AWS4-HMAC-SHA256 20220423T041824Z 20220423/ca-central-1/s3/aws4_request 184d4f7de08e4ea90234c5717ce78cfd7c31c01cfe854a3d11fa94381f9ab1c3 2022-04-22 22:18:24,445 - MainThread - botocore.auth - DEBUG - Signature: 87d083e547678eebd416afb6691a541b05fa930456c5bb57bb3f9a650cf8c276 2022-04-22 22:18:24,445 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=GET, url=https://glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com/?list-type=2&prefix=2022-04-18%2F&encoding-type=url, headers={'User-Agent': b'aws-cli/2.5.4 Python/3.9.11 Linux/3.10.0-1160.45.1.el7.x86_64 exe/x86_64.rhel.7 prompt/off command/s3.cp', 'X-Amz-Date': b'20220423T041824Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=AKIAVLZENK7ICR7W3PXG/20220423/ca-central-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=87d083e547678eebd416afb6691a541b05fa930456c5bb57bb3f9a650cf8c276'}> 2022-04-22 22:18:24,445 - MainThread - botocore.httpsession - DEBUG - Certificate path: /usr/local/aws-cli/v2/2.5.4/dist/awscli/botocore/cacert.pem 2022-04-22 22:18:24,446 - MainThread - urllib3.connectionpool - DEBUG - Starting new
HTTPS connection (3): glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com:443 2022-04-22 22:18:24,448 - MainThread - botocore.endpoint - DEBUG - Exception received when sending HTTP request. Traceback (most recent call last): File "urllib3/connection.py", line 174, in _new_conn File "urllib3/util/connection.py", line 95, in create_connection File "urllib3/util/connection.py", line 85, in create_connection ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "awscli/botocore/httpsession.py", line 358, in send File "urllib3/connectionpool.py", line 785, in urlopen File "urllib3/util/retry.py", line 525, in increment File "urllib3/packages/six.py", line 770, in reraise File "urllib3/connectionpool.py", line 703, in urlopen File "urllib3/connectionpool.py", line 386, in _make_request File "urllib3/connectionpool.py", line 1040, in _validate_conn File "urllib3/connection.py", line 358, in connect File "urllib3/connection.py", line 186, in _new_conn urllib3.exceptions.NewConnectionError: <botocore.awsrequest.AWSHTTPSConnection object at 0x7fad81a9d7c0>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "awscli/botocore/endpoint.py", line 199, in _do_get_response File "awscli/botocore/endpoint.py", line 271, in _send File "awscli/botocore/httpsession.py", line 387, in send botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com/?list-type=2&prefix=2022-04-18%2F&encoding-type=url" 2022-04-22 22:18:24,448 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.ListObjectsV2: calling handler <bound method RetryHandler.needs_retry of <botocore.retries.standard.RetryHandler object at 0x7fad81af65e0>> 2022-04-22 22:18:24,448 - 
MainThread - botocore.retries.standard - DEBUG - Max attempts of 3 reached. 2022-04-22 22:18:24,448 - MainThread - botocore.retries.standard - DEBUG - Not retrying request. 2022-04-22 22:18:24,449 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.ListObjectsV2: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7fad81af6670>> 2022-04-22 22:18:24,449 - MainThread - awscli.customizations.s3.results - DEBUG - Exception caught during command execution: Could not connect to the endpoint URL: "https://glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com/?list-type=2&prefix=2022-04-18%2F&encoding-type=url" Traceback (most recent call last): File "urllib3/connection.py", line 174, in _new_conn File "urllib3/util/connection.py", line 95, in create_connection File "urllib3/util/connection.py", line 85, in create_connection ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "awscli/botocore/httpsession.py", line 358, in send File "urllib3/connectionpool.py", line 785, in urlopen File "urllib3/util/retry.py", line 525, in increment File "urllib3/packages/six.py", line 770, in reraise File "urllib3/connectionpool.py", line 703, in urlopen File "urllib3/connectionpool.py", line 386, in _make_request File "urllib3/connectionpool.py", line 1040, in _validate_conn File "urllib3/connection.py", line 358, in connect File "urllib3/connection.py", line 186, in _new_conn urllib3.exceptions.NewConnectionError: <botocore.awsrequest.AWSHTTPSConnection object at 0x7fad81a9d7c0>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "awscli/customizations/s3/s3handler.py", line 149, in call File "awscli/customizations/s3/fileinfobuilder.py", line 31, in call File 
"awscli/customizations/s3/filegenerator.py", line 142, in call File "awscli/customizations/s3/filegenerator.py", line 322, in list_objects File "awscli/customizations/s3/utils.py", line 412, in list_objects File "awscli/botocore/paginate.py", line 252, in __iter__ File "awscli/botocore/paginate.py", line 329, in _make_request File "awscli/botocore/client.py", line 304, in _api_call File "awscli/botocore/client.py", line 620, in _make_api_call File "awscli/botocore/client.py", line 640, in _make_request File "awscli/botocore/endpoint.py", line 101, in make_request File "awscli/botocore/endpoint.py", line 155, in _send_request File "awscli/botocore/endpoint.py", line 199, in _do_get_response File "awscli/botocore/endpoint.py", line 271, in _send File "awscli/botocore/httpsession.py", line 387, in send botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com/?list-type=2&prefix=2022-04-18%2F&encoding-type=url" fatal error: Could not connect to the endpoint URL: "https://glue-bucket-interactions-staging.s3.ca-central-1.amazonaws.com/?list-type=2&prefix=2022-04-18%2F&encoding-type=url" 2022-04-22 22:18:24,450 - Thread-1 - awscli.customizations.s3.results - DEBUG - Shutdown request received in result processing thread, shutting down result thread.
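ConnectionRefusedError ([Errno 111]) at the TCP layer usually means something between the host and the endpoint (a proxy, a local firewall, or one bad address in DNS rotation) refused the connection, not S3 itself. The log shows the CLI already retried ("Max attempts of 3 reached"); the AWS CLI v2 lets you raise that budget via the `AWS_MAX_ATTEMPTS` and `AWS_RETRY_MODE` environment variables. The retry shape itself is plain exponential backoff; a minimal sketch, where `flaky_download` is a hypothetical stand-in for the transfer, not part of the AWS CLI:

```python
import time

def retry_with_backoff(op, max_attempts=3, base_delay=0.01):
    """Retry `op` with exponential backoff, re-raising after max_attempts."""
    for attempt in range(max_attempts):
        try:
            return op()
        except ConnectionRefusedError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Stand-in for the real S3 transfer: refuses twice, then succeeds.
calls = {"n": 0}
def flaky_download():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionRefusedError(111, "Connection refused")
    return "downloaded"

print(retry_with_backoff(flaky_download))  # downloaded
```

If even generous retries fail intermittently, the next step is usually to capture which IP the failing connection targeted and test it directly (e.g. with `curl`) from the same host.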
1 answer · 0 votes · 49 views · asked 2 months ago

Can't see EBS Snapshot tags from other accounts

Hi, I have private snapshots in one account (source) that I have shared with another account (target). I am able to see the snapshots themselves from the target account, but the tags are not available, neither on the console nor via the CLI. This makes it impossible to filter for a desired snapshot from the target account. For background, the user in the target account has the following policy in effect:
```
"Effect": "Allow",
"Action": "ec2:*",
"Resource": "*"
```
Here's an example of what I'm seeing; from the source account:
```
$ aws --region us-east-2 ec2 describe-snapshots --snapshot-ids snap-XXXXX
{
    "Snapshots": [
        {
            "Description": "snapshot for testing",
            "VolumeSize": 50,
            "Tags": [
                {
                    "Value": "test-snapshot",
                    "Key": "Name"
                }
            ],
            "Encrypted": true,
            "VolumeId": "vol-XXXXX",
            "State": "completed",
            "KmsKeyId": "arn:aws:kms:us-east-2:XXXXX:key/mrk-XXXXX",
            "StartTime": "2022-04-19T18:29:36.069Z",
            "Progress": "100%",
            "OwnerId": "XXXXX",
            "SnapshotId": "snap-XXXXX"
        }
    ]
}
```
but from the target account:
```
$ aws --region us-east-2 ec2 describe-snapshots --owner-ids 012345678900 --snapshot-ids snap-11111111111111111
{
    "Snapshots": [
        {
            "Description": "snapshot for testing",
            "VolumeSize": 50,
            "Encrypted": true,
            "VolumeId": "vol-22222222222222222",
            "State": "completed",
            "KmsKeyId": "arn:aws:kms:us-east-2:012345678900:key/mrk-00000000000000000000000000000000",
            "StartTime": "2022-04-19T18:29:36.069Z",
            "Progress": "100%",
            "OwnerId": "012345678900",
            "SnapshotId": "snap-11111111111111111"
        }
    ]
}
```
Any ideas on what's going on here? Cheers!
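Worth noting: EC2 tags belong to the account that created them, and `describe-snapshots` does not return the owner's tags to an account the snapshot is merely shared with, which matches the two responses above. Until the source account duplicates the needed metadata in a field that does cross the account boundary (e.g. `Description`), one workaround is client-side filtering on a returned field. A sketch (the sample dict mirrors the `describe-snapshots` output shape; the "testing" criterion is only illustrative):

```python
# Filter shared snapshots on a field that IS returned cross-account,
# e.g. Description, since Tags are absent from the target account's view.
response = {
    "Snapshots": [
        {"SnapshotId": "snap-11111111111111111",
         "Description": "snapshot for testing",
         "State": "completed"},
        {"SnapshotId": "snap-22222222222222222",
         "Description": "nightly backup",
         "State": "completed"},
    ]
}

wanted = [s for s in response["Snapshots"]
          if "testing" in s.get("Description", "")]
print([s["SnapshotId"] for s in wanted])  # ['snap-11111111111111111']
```

With boto3 the same filter would run over the pages of `describe_snapshots`; the list comprehension is the only part that changes per use case.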
1 answer · 0 votes · 17 views · asked 2 months ago

Using aws s3api put-object --sse-customer-key-md5 fails with CLI

I'm trying to use `aws s3api put-object`/`get-object` with server-side encryption with customer keys (SSE-C). I'm using PowerShell, but I don't believe that is the source of my issue. On the surface, `--sse-customer-key-md5` appears to be a pretty simple input (https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html):

> Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

`put-object` works when I don't use `--sse-customer-key-md5`:
```
> aws s3api put-object `
    --bucket abc `
    --sse-customer-algorithm AES256 `
    --sse-customer-key "testaes256testaes256testaes25612" `
    --region us-east-1 `
    --key test.pdf `
    --body C:\test.pdf

{
    "SSECustomerKeyMD5": "ezatpv/Yg0KkjX+5ZcsxdQ==",
    "SSECustomerAlgorithm": "AES256",
    "ETag": "\"0d44c3df058c4e190bd7b2e6d227be73\""
}
```
I agree with the SSECustomerKeyMD5 result:
```
> $key = "testaes256testaes256testaes25612"
> $md5 = new-object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider
> $utf8 = new-object -TypeName System.Text.UTF8Encoding
> $hash = $md5.ComputeHash($utf8.GetBytes($key))
> $EncodedString = [Convert]::ToBase64String($hash)
> Write-Host "Base64 Encoded String: " $EncodedString
Base64 Encoded String:  ezatpv/Yg0KkjX+5ZcsxdQ==
```
Now I resubmit my put request with the `--sse-customer-key-md5` option. Before anyone jumps on the base64 encoding, I've tried submitting the MD5 hash in base64, hexadecimal (with and without delimiters), JSON of the MD5 hash result, and upper- and lower-case versions of the aforementioned. None work. Has anyone gotten this to work and, if so, what format did you use?
```
> aws s3api put-object `
    --bucket abc `
    --sse-customer-algorithm AES256 `
    --sse-customer-key "testaes256testaes256testaes25612" `
    --sse-customer-key-md5 "ezatpv/Yg0KkjX+5ZcsxdQ==" `
    --region us-east-1 `
    --key test.pdf `
    --body C:\test.pdf

aws : At line:1 char:1
+ aws s3api put-object `
+ ~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError

An error occurred (InvalidArgument) when calling the PutObject operation: The calculated MD5 hash of the key did not match the hash that was provided.
```
Thanks
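As a cross-check, the same digest can be computed with Python's standard library; the value matches both the PowerShell computation and the `SSECustomerKeyMD5` that the service itself returned in the successful call above:

```python
import base64
import hashlib

# The 32-byte SSE-C key from the question.
key = "testaes256testaes256testaes25612"

# 128-bit MD5 of the raw key bytes, base64-encoded (RFC 1321 digest).
md5_b64 = base64.b64encode(hashlib.md5(key.encode("utf-8")).digest()).decode()
print(md5_b64)  # ezatpv/Yg0KkjX+5ZcsxdQ==
```

Since the digest itself is demonstrably correct, the InvalidArgument error may stem from how the quoted base64 value survives PowerShell parsing (the trailing `==` and the backtick line continuations are easy to mangle) rather than from the hash format; echoing the exact argument the CLI receives would be a useful next diagnostic.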
2 answers · 0 votes · 38 views · asked 2 months ago

AWS CLI Updating Network Firewall Rules

I've been trying to find a method to streamline/automate the updating of AWS Network Firewall rules. The AWS CLI looked promising, but I've consistently seen failures when trying to push a new ruleset. For example, running the command:
```
aws network-firewall describe-rule-group --rule-group-arn <arn>
```
returns the JSON as expected, with the content as a flat string:
```
"RuleGroup": {
    "RulesSource": {
        "RulesString": "pass http $HOME_NET any -> $EXTERNAL_NET 80 (http.host; dotprefix; content:\".example.com\"; endswith; msg:\"Allowed HTTP domain\"; sid:1; rev:1;)\npass tls $HOME_NET any -> $EXTERNAL_NET 443 (tls.sni; content:\"example.com\"; startswith; nocase; endswith; msg:\"matching TLS allowlisted FQDNs\"; sid:2; rev:1;)\npass http $HOME_NET any -> $EXTERNAL_NET 80 (http.host; dotprefix; content:\".google.com\"; endswith; msg:\"Allowed HTTP domain\"; sid:3; rev:1;)\npass tls $HOME_NET any -> $EXTERNAL_NET 443 (tls.sni; content:\"www.google.com\"; startswith; nocase; endswith; msg:\"matching TLS allowlisted FQDNs\"; sid:4; rev:1;)\npass http $HOME_NET any -> $EXTERNAL_NET 80 (http.host; dotprefix; content:\".ubuntu.com\"; endswith; msg:\"Allowed HTTP domain\"; sid:5; rev:1;)\npass tls $HOME_NET any -> $EXTERNAL_NET 443 (tls.sni; content:\"ipinfo.io\"; startswith; nocase; endswith; msg:\"matching TLS allowlisted FQDNs\"; sid:6; rev:1;)\npass tcp $HOME_NET any <> $EXTERNAL_NET 80 (flow:not_established; sid:899998; rev:1;)\npass tcp $HOME_NET any <> $EXTERNAL_NET 443 (flow:not_established; sid:899999; rev:1;)"
```
When I try to update the flat string with a new string including more entries, though, I receive an error:
```
aws network-firewall update-rule-group --cli-input-yaml file://example.yaml
```
Error received:
```
An error occurred (InvalidRequestException) when calling the UpdateRuleGroup operation: parameter is invalid
```
I've tried the JSON/YAML/CLI methods and I encounter the issue using any of them. I've also tried using the `--rule-group` vs `--rules` options to update. I suspected there was an issue with string formatting, but I've failed to find a resolution. Updating the rules via the console works without issue. Could anyone provide a pointer to where I'm going wrong, or even a working method they're using? I'm not too bothered whether it's via CLI, SDK, etc., as I may revert to Python since it's the language I know best.
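Two things are worth checking with `update-rule-group`: the call requires the `UpdateToken` returned by the most recent `describe-rule-group` (a stale or missing token is rejected), and a hand-edited input file fails if the `RulesString` is not one valid JSON/YAML string (literal newlines or unescaped quotes both yield parameter errors). Generating the input file programmatically sidesteps the escaping problem entirely; a sketch in Python, where the ARN and token are placeholders and the rules are abbreviated from the question:

```python
import json

# Build the Suricata rules as real multi-line text; json.dumps then
# produces the \n-escaped flat string the API expects.
rules = "\n".join([
    'pass http $HOME_NET any -> $EXTERNAL_NET 80 '
    '(http.host; dotprefix; content:".example.com"; endswith; '
    'msg:"Allowed HTTP domain"; sid:1; rev:1;)',
    'pass tcp $HOME_NET any <> $EXTERNAL_NET 443 '
    '(flow:not_established; sid:899999; rev:1;)',
])

cli_input = {
    "UpdateToken": "<token-from-describe-rule-group>",  # refresh before every update
    "RuleGroupArn": "<arn>",
    "RuleGroup": {"RulesSource": {"RulesString": rules}},
}

# `doc` is the content you would save and pass as:
#   aws network-firewall update-rule-group --cli-input-json file://update.json
doc = json.dumps(cli_input, indent=2)
print(doc.splitlines()[0])  # {
```

The same structure maps directly onto boto3's `update_rule_group` keyword arguments, which avoids file handling altogether.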
0 answers · 0 votes · 10 views · asked 3 months ago

AWS IoT Device Client setup not working

Hello, I have been trying to set up a Raspberry Pi using the tutorials [here](https://docs.aws.amazon.com/iot/latest/developerguide/iot-dc-install-configure.html), and when I run the command `./aws-iot-device-client --config-file ~/dc-configs/dc-testconn-config.json` I get errors in the terminal saying that the AWS CRT SDK is not found, with a fatal error like this:
```
2022-04-11T07:38:13.850Z [WARN] {Config.cpp}: Key {template-name} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.850Z [WARN] {Config.cpp}: Key {csr-file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.850Z [WARN] {Config.cpp}: Key {device-key} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.850Z [WARN] {Config.cpp}: Key {file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.850Z [WARN] {Config.cpp}: Key {publish-file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.851Z [WARN] {Config.cpp}: Key {subscribe-file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.851Z [WARN] {Config.cpp}: Shadow Name {shadow-name} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.851Z [WARN] {Config.cpp}: Input file {shadow-input-file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.851Z [WARN] {Config.cpp}: Output file {shadow-output-file} was provided in the JSON configuration file with an empty value
2022-04-11T07:38:13.851Z [INFO] {Config.cpp}: Successfully fetched JSON config file: {
  "endpoint": "a32vqcn021ykiy-ats.iot.ap-south-1.amazonaws.com",
  "cert": "~/certs/testconn/device.pem.crt",
  "key": "~/certs/testconn/private.pem.key",
  "root-ca": "~/certs/AmazonRootCA1.pem",
  "thing-name": "Triton_Dp_Office",
  "logging": { "enable-sdk-logging": true, "level": "DEBUG", "type": "STDOUT", "file": "" },
  "jobs": { "enabled": false, "handler-directory": "" },
  "tunneling": { "enabled": false },
  "device-defender": { "enabled": false, "interval": 300 },
  "fleet-provisioning": { "enabled": false, "template-name": "", "template-parameters": "", "csr-file": "", "device-key": "" },
  "samples": { "pub-sub": { "enabled": true, "publish-topic": "test/dc/pubtopic", "publish-file": "", "subscribe-topic": "test/dc/subtopic", "subscribe-file": "" } },
  "config-shadow": { "enabled": false },
  "sample-shadow": { "enabled": false, "shadow-name": "", "shadow-input-file": "", "shadow-output-file": "" }
}
2022-04-11T07:38:13.851Z [DEBUG] {Config.cpp}: Did not find a runtime configuration file, assuming Fleet Provisioning has not run for this device
2022-04-11T07:38:13.852Z [DEBUG] {EnvUtils.cpp}: Updated PATH environment variable to: /home/pi/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games:/snap/bin:/home/pi/.aws-iot-device-client:/home/pi/.aws-iot-device-client/jobs:/home/pi/aws-iot-device-client/build:/home/pi/aws-iot-device-client/build/jobs
2022-04-11T07:38:13.852Z [INFO] {Main.cpp}: Now running AWS IoT Device Client version v1.5.19-868465b
2022-04-11T07:38:13.860Z [ERROR] {FileUtils.cpp}: Failed to create empty file: /var/log/aws-iot-device-client/sdk.log errno: 17 msg: File exists
2022-04-11T07:38:13.860Z [ERROR] {Main.cpp}: *** AWS IOT DEVICE CLIENT FATAL ERROR: Failed to initialize AWS CRT SDK.
AWS IoT Device Client must abort execution, reason: Failed to initialize AWS CRT SDK
Please check the AWS IoT Device Client logs for more information
Aborted
```
I need this setup to work ASAP so I can deploy a fleet and test out AWS IoT Jobs. Any help is appreciated.
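The first [ERROR] above is a file-creation failure with errno 17 (EEXIST): `/var/log/aws-iot-device-client/sdk.log` already exists, likely from a previous run. Whether that alone aborts CRT SDK initialization isn't clear from the log, but checking ownership and write permissions on that directory for the user running the client is a reasonable first step. The errno itself just reflects exclusive-create semantics, as this standalone sketch shows (the paths here are temp-dir stand-ins, not the client's actual files):

```python
import errno
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sdk.log")

def create_empty_file(p):
    # O_EXCL makes creation fail if the file already exists, which
    # surfaces as errno 17 (EEXIST) -- the same errno as in the log.
    fd = os.open(p, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
    os.close(fd)

create_empty_file(path)        # first creation succeeds
caught = None
try:
    create_empty_file(path)    # second attempt: File exists
except FileExistsError as e:
    caught = e.errno
print(caught == errno.EEXIST)  # True
```

If the stale log file is the trigger, removing it (or pointing `"file"` in the logging config at a writable location) before rerunning the client is worth trying.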
1 answer · 0 votes · 60 views · asked 3 months ago