
Questions tagged with AWS Command Line Interface



Unsupported Action in Policy for S3 Glacier/Veeam

Hello, I'm new to AWS S3 Glacier and I ran across an issue. I am working with Veeam to add S3 Glacier to my backup. I have the bucket created, and I need to add the following to my bucket policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject", "s3:PutObject", "s3:GetObject",
                "s3:RestoreObject", "s3:ListBucket", "s3:AbortMultipartUpload",
                "s3:GetBucketVersioning", "s3:ListAllMyBuckets",
                "s3:GetBucketLocation", "s3:GetBucketObjectLockConfiguration",
                "ec2:DescribeInstances", "ec2:CreateKeyPair", "ec2:DescribeKeyPairs",
                "ec2:RunInstances", "ec2:DeleteKeyPair", "ec2:DescribeVpcAttribute",
                "ec2:CreateTags", "ec2:DescribeSubnets", "ec2:TerminateInstances",
                "ec2:DescribeSecurityGroups", "ec2:DescribeImages", "ec2:DescribeVpcs",
                "ec2:CreateVpc", "ec2:CreateSubnet", "ec2:DescribeAvailabilityZones",
                "ec2:CreateRoute", "ec2:CreateInternetGateway", "ec2:AttachInternetGateway",
                "ec2:ModifyVpcAttribute", "ec2:CreateSecurityGroup", "ec2:DeleteSecurityGroup",
                "ec2:AuthorizeSecurityGroupIngress", "ec2:AuthorizeSecurityGroupEgress",
                "ec2:DescribeRouteTables", "ec2:DescribeInstanceTypes"
            ],
            "Resource": "*"
        }
    ]
}
```

Once I put this in, the first error I get is "Missing Principal", so I added `"Principal": {},` under the Sid. But I had no idea what to put in the brackets; I changed it to "*" and that seemed to fix it. Not sure if that is the right thing to do? The next error I get is that all the ec2 actions and s3:ListAllMyBuckets give an error of "Unsupported Action in Policy". This is where I get lost and not sure what else to do. Do I need to open my bucket to the public? Is this a permissions issue? Do I have to recreate the bucket and disable Object Lock? Please help.
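The "Unsupported Action in Policy" error follows from the policy type: a bucket policy is resource-based, so it requires a Principal and only accepts S3 actions that apply to that bucket or its objects; account-level actions such as s3:ListAllMyBuckets and every ec2:* action belong in an identity-based IAM policy attached to the Veeam user instead. A minimal sketch of that split, assuming a hypothetical IAM user `veeam-backup`, bucket `my-veeam-bucket`, and account ID 111122223333:

```
# Sketch only: the EC2/account-wide actions go on the IAM user
# (veeam-identity-policy.json is a hypothetical file holding them).
aws iam put-user-policy \
  --user-name veeam-backup \
  --policy-name veeam-ec2-access \
  --policy-document file://veeam-identity-policy.json

# The bucket policy keeps only bucket/object-level S3 actions, with the
# user's ARN as Principal instead of "*".
aws s3api put-bucket-policy --bucket my-veeam-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:user/veeam-backup"},
    "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject",
               "s3:RestoreObject", "s3:AbortMultipartUpload", "s3:ListBucket",
               "s3:GetBucketVersioning", "s3:GetBucketLocation",
               "s3:GetBucketObjectLockConfiguration"],
    "Resource": ["arn:aws:s3:::my-veeam-bucket", "arn:aws:s3:::my-veeam-bucket/*"]
  }]
}'
```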
2 answers · 0 votes · 5 views · amatuerAWSguy · asked 2 days ago

Docker push fails even though docker login succeeded during the AWS CodePipeline build stage

Hello, I'm preparing CI/CD using AWS CodePipeline. Unfortunately, I get an error during the build stage. Below is the content of my buildspec.yml file, where:

- AWS_DEFAULT_REGION = eu-central-1
- CONTAINER_NAME = cicd-1-app
- REPOSITORY_URI = <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com/cicd-1-app

```
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto11
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - TAG="$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - IMAGE_URI=${REPOSITORY_URI}:${TAG}
  build:
    commands:
      - echo Build started on `date`
      - echo $IMAGE_URI
      - mvn clean package -Ddockerfile.skip
      - docker build --tag $IMAGE_URI .
  post_build:
    commands:
      - printenv
      - echo Build completed on `date`
      - echo $(docker images)
      - echo Pushing docker image
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com
      - docker push $IMAGE_URI
      - echo push completed
      - printf '[{"name":"%s","imageUri":"%s"}]' $CONTAINER_NAME $IMAGE_URI > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
```

I got this error:

```
[Container] 2022/01/06 19:57:36 Running command aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[Container] 2022/01/06 19:57:37 Running command docker push $IMAGE_URI
The push refers to repository [<ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/cicd-1-app]
37256fb2fd27: Preparing
fe6c1ddaab26: Preparing
d4dfab969171: Preparing
no basic auth credentials

[Container] 2022/01/06 19:57:37 Command did not exit successfully docker push $IMAGE_URI exit status 1
[Container] 2022/01/06 19:57:37 Phase complete: POST_BUILD State: FAILED
[Container] 2022/01/06 19:57:37 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker push $IMAGE_URI. Reason: exit status 1
```

Even though docker login succeeded, there is a "no basic auth credentials" error. Do you know what the problem could be? Best regards.
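One detail stands out in the log itself: the login targets `...dkr.ecr.eu-central-1.amazonaws.com`, but the push goes to `...dkr.ecr.us-east-1.amazonaws.com`, a registry for which no credentials were stored, which produces exactly "no basic auth credentials". A hedged sketch (assuming REPOSITORY_URI is the single source of truth) that derives the login registry from the image URI so the two can never diverge:

```
# Sketch: take the registry host from REPOSITORY_URI itself, so docker
# login and docker push always hit the same region/registry.
REGISTRY="${REPOSITORY_URI%%/*}"   # <ACCOUNT_ID>.dkr.ecr.<region>.amazonaws.com
aws ecr get-login-password --region "$AWS_DEFAULT_REGION" \
  | docker login --username AWS --password-stdin "$REGISTRY"
docker push "$IMAGE_URI"
```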
2 answers · 0 votes · 8 views · KM · asked 10 days ago

Amplify build error - Cannot find module '/codebuild/output/....

Hi all. My vue app is running fine locally and builds fine locally; however, I'm trying to build my app on Amplify using a link to my GitHub repo. The link and the clone work fine, but I'm getting an error during the build. Amplify push also works fine without problems. I've only ever used npm for all modules, along with the vue-cli and Amplify CLI. I have no idea where to start with this. The main error seems to be:

`Cannot find module '/codebuild/output/src323788196/src/.yarn/releases/yarn-1.23.0-20210726.1745.cjs'`

I've tried `yarn install`, but that does not help. I'm not sure what to do next because I've never used yarn at all in this project. My build config is standard:

```
version: 1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - npm install
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: dist
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```

The error I'm getting is:

```
[WARNING]: ✖ An error occurred when pushing the resources to the cloud
2022-01-04T06:47:49.986Z [WARNING]: ✖ There was an error initializing your environment.
2022-01-04T06:47:49.993Z [INFO]: Error: Packaging lambda function failed with the error
Command failed with exit code 1: yarn --production
internal/modules/cjs/loader.js:818
    throw err;
    ^
Error: Cannot find module '/codebuild/output/src323788196/src/.yarn/releases/yarn-1.23.0-20210726.1745.cjs'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:815:15)
    at Function.Module._load (internal/modules/cjs/loader.js:667:27)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)
    at internal/main/run_main_module.js:17:47 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}
    at runPackageManager (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-nodejs-function-runtime-provider/src/utils/legacyBuild.ts:66:13)
    at installDependencies (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-nodejs-function-runtime-provider/src/utils/legacyBuild.ts:40:3)
    at Object.buildResource [as build] (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-nodejs-function-runtime-provider/src/utils/legacyBuild.ts:13:5)
    at buildFunction (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-category-function/src/provider-utils/awscloudformation/utils/buildFunction.ts:41:36)
    at prepareResource (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:605:33)
    at async Promise.all (index 1)
    at prepareBuildableResources (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:601:10)
    at Object.run (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:173:5)
2022-01-04T06:47:50.024Z [ERROR]: !!! Build failed
2022-01-04T06:47:50.024Z [ERROR]: !!! Non-Zero Exit Code detected
```
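The stack trace points at the backend Lambda packaging step (`yarn --production` inside the Amplify CLI's legacyBuild), not the Vue frontend, and the CLI runs yarn when it detects yarn metadata in the project. A hedged check for stray yarn artifacts committed to the repo (the file names below are the usual suspects, not something the post confirms):

```
# Sketch: a committed .yarnrc whose yarn-path points at
# .yarn/releases/yarn-*.cjs, without that release file being committed,
# would produce exactly this MODULE_NOT_FOUND path in the build container.
git ls-files | grep -E '(^|/)(\.yarnrc(\.yml)?|yarn\.lock|\.yarn/)' \
  || echo "no yarn artifacts tracked in git"
```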
0 answers · 0 votes · 6 views · DareDevil · asked 12 days ago

Launched EC2 instance UNREACHABLE for Ubuntu 20.04 AMI with python 3.9 upgrade

I am using an **EC2 Ubuntu 20.04 VM**. Due to **[CVE-2021-3177][1]**, Python needs to be upgraded to the latest version of Python 3.9, which is currently 3.9.5. I did that using the `apt install` option, as per the steps below:

```
sudo apt update
sudo apt upgrade -y
sudo apt install python3.9
```

The above ensures that Python 3.9.5 is now available, but both python3.8 and python3.9 are installed. So next we use the update-alternatives command to make python3.9 the default version:

```
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 2
```

Now that the alternatives are defined, we switch to option 2, i.e. Python 3.9, as the default:

```
sudo update-alternatives --config python3
```

Once done, the following command points to the latest version:

```
sudo python3 -V
```

However, if you run `sudo apt update`, you will see this error:

```
Traceback (most recent call last):
  File "/usr/lib/cnf-update-db", line 8, in <module>
    from CommandNotFound.db.creator import DbCreator
  File "/usr/lib/python3/dist-packages/CommandNotFound/db/creator.py", line 11, in <module>
    import apt_pkg
ModuleNotFoundError: No module named 'apt_pkg'
Reading package lists... Done
E: Problem executing scripts APT::Update::Post-Invoke-Success 'if /usr/bin/test -w /var/lib/command-not-found/ -a -e /usr/lib/cnf-update-db; then /usr/lib/cnf-update-db > /dev/null; fi'
E: Sub-process returned an error code
```

To fix this, we add a link with the following commands:

```
cd /usr/lib/python3/dist-packages/
sudo ln -s apt-pkg.cpython-{38m,39m}-x86_64-linux-gnu.so
```

Next, I used the following commands:

```
apt purge python3-apt
apt install python3-apt
sudo apt install python3.9-distutils python3.9-dev
```

Once done, `sudo apt update` no longer results in any errors, which means the issue is fixed.

**I can use this machine, including after a reboot. But for some reason, if I create an AMI and launch an instance from it, that instance is unreachable.** Appreciate your help.

[1]: https://nvd.nist.gov/vuln/detail/CVE-2021-3177
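One plausible cause (an assumption, not confirmed by the post): first boot of the new AMI runs cloud-init, and on Ubuntu 20.04 cloud-init is installed against the distro's python3.8 stack, so repointing /usr/bin/python3 at 3.9 can break the boot-time provisioning (SSH keys, networking) and leave the instance unreachable while the running machine keeps working. A hedged sketch to verify before creating the AMI:

```
# Sketch: confirm cloud-init still imports and reports status under the
# switched default; if not, point python3 back at 3.8 for the image and
# invoke python3.9 explicitly where the newer interpreter is needed.
python3 -c "import cloudinit" && cloud-init status --long \
  || sudo update-alternatives --set python3 /usr/bin/python3.8
```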
0 answers · 0 votes · 1 view · awswiki · asked 13 days ago

AWS EC2 F1 ERROR: [v++ 60-773] caught Tcl error: ERROR: '2201011829' is an invalid argument.

Hi, I have been trying to build the hardware system image on an AWS f1.2xlarge instance. I am able to run sw_emu successfully. However, when I try to create the hardware image, I keep getting the following error. This is strange because the same code synthesized fine 2 days ago.

```
ERROR: [v++ 60-773] In '/home/centos/<cwd>/_x/runOnfpga/runOnfpga/vitis_hls.log', caught Tcl error: ERROR: '2201012237' is an invalid argument. Please specify an integer value.
```

Note: the number in the ERROR changes each time I run the hw synthesis. In vitis_hls.log, the following info is displayed:

```
INFO: [IP_Flow 19-1686] Generating 'Simulation' target for IP 'runOnfpga_sitodp_32ns_64_4_no_dsp_1_ip'...
ERROR: '2201012237' is an invalid argument. Please specify an integer value.
    while executing
"rdi::set_property core_revision 2201012237 {component component_1}"
    invoked from within
"set_property core_revision $Revision $core"
    (file "run_ippack.tcl" line 1515)
INFO: [Common 17-206] Exiting Vivado at Sat Jan 1 22:37:40 2022...
ERROR: [IMPL 213-28] Failed to generate IP.
INFO: [HLS 200-111] Finished Command export_design CPU user time: 55.92 seconds. CPU system time: 2.62 seconds. Elapsed time: 55.3 seconds; current allocated memory: 1.956 GB.
command 'ap_source' returned error code
    while executing
"source runOnfpga.tcl"
    ("uplevel" body line 1)
    invoked from within
"uplevel \#0 [list source $arg] "
```

Update: it seems that even the Xilinx examples hit the same problem when run on an F1 instance. I cloned the [Vitis Accel Examples](https://github.com/Xilinx/Vitis_Accel_Examples.git) and it failed the same way:

```
INFO: [v++ 200-789] **** Estimated Fmax: 339.33 MHz
ERROR: [v++ 213-28] Failed to generate IP.
ERROR: [v++ 60-300] Failed to build kernel(ip) vadd, see log for details: /home/centos/Vitis_Accel_Examples/sys_opt/multiple_devices/_x.hw.xilinx_aws-vu9p-f1_shell-v04261818_201920_2/vadd/vadd/vitis_hls.log
ERROR: [v++ 60-773] In '/home/centos/Vitis_Accel_Examples/sys_opt/multiple_devices/_x.hw.xilinx_aws-vu9p-f1_shell-v04261818_201920_2/vadd/vadd/vitis_hls.log', caught Tcl error: ERROR: '2201020036' is an invalid argument. Please specify an integer value.
ERROR: [v++ 60-773] In '/home/centos/Vitis_Accel_Examples/sys_opt/multiple_devices/_x.hw.xilinx_aws-vu9p-f1_shell-v04261818_201920_2/vadd/vadd/vitis_hls.log', caught Tcl error: ERROR: [IMPL 213-28] Failed to generate IP.
ERROR: [v++ 60-599] Kernel compilation failed to complete
ERROR: [v++ 60-592] Failed to finish compilation
INFO: [v++ 60-1653] Closing dispatch client.
make: *** [_x.hw.xilinx_aws-vu9p-f1_shell-v04261818_201920_2/vadd.xo] Error 1
```

Please suggest ways to debug this problem.
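The failing value is a timestamp: 2201012237 reads as YYMMDDHHMM (2022-01-01 22:37), matching the "Exiting Vivado at Sat Jan 1 22:37:40 2022" line, and any such post-2022 value overflows the signed 32-bit integer that `set_property core_revision` accepts (max 2147483647), which would explain why the identical code synthesized two days earlier and why the number changes every run. Xilinx published patches for this; as a hedged stopgap (assumption: running the build under a fake clock is acceptable in your flow), the synthesis can be made to see a pre-2022 date:

```
# Sketch: 2201012237 > 2^31 - 1, so the YYMMDDHHMM revision stopped
# fitting in a 32-bit int on 2022-01-01. Run the build under an earlier
# fake date until the official patch is applied.
# (libfaketime availability/package name is an assumption; on CentOS it
# comes from EPEL.)
sudo yum install -y epel-release libfaketime
faketime '2021-12-24 08:00:00' make all TARGET=hw   # your usual build command
```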
3 answers · 1 vote · 2 views · riddlesingh · asked 15 days ago

How to debug CLI crash when running Lightsail container commands?

I only use a couple of commands in my workflow, but very often both will randomly throw a cryptic error in the middle of a deploy on CI:

- `aws lightsail get-container-service-deployments --region us-west-2 --service-name my_service --output json`
- `aws lightsail get-container-images --region us-west-2 --output json --service-name my_service`

My CI stack is Ubuntu Server 18.04, which runs a GitHub action-runner service.

```
Signal received: -1695824320, errno: 32575
Stack trace:
/usr/local/aws-cli/v2/2.3.0/dist/_awscrt.cpython-38-x86_64-linux-gnu.so(aws_backtrace_print+0x4d) [0x7f3f9483edbd]
/usr/local/aws-cli/v2/2.3.0/dist/_awscrt.cpython-38-x86_64-linux-gnu.so(+0x68513) [0x7f3f947b5513]
/lib/x86_64-linux-gnu/libc.so.6(+0x3f040) [0x7f3f9af46040]
/usr/local/aws-cli/v2/2.3.0/dist/libpython3.8.so.1.0(+0x1f9ed0) [0x7f3f9ab65ed0]
/usr/local/aws-cli/v2/2.3.0/dist/libpython3.8.so.1.0(+0xbb58b) [0x7f3f9aa2758b]
/usr/local/aws-cli/v2/2.3.0/dist/libpython3.8.so.1.0(+0x1fa930) [0x7f3f9ab66930]
/usr/local/aws-cli/v2/2.3.0/dist/libpython3.8.so.1.0(PyGC_Collect+0x81) [0x7f3f9ab67aa1]
/usr/local/aws-cli/v2/2.3.0/dist/libpython3.8.so.1.0(Py_FinalizeEx+0xe2) [0x7f3f9ab3ff02]
/usr/local/aws-cli/v2/2.3.0/dist/libpython3.8.so.1.0(Py_Exit+0x8) [0x7f3f9ab40818]
/usr/local/aws-cli/v2/2.3.0/dist/libpython3.8.so.1.0(+0x1d8c8b) [0x7f3f9ab44c8b]
aws(+0x378b) [0x55705f60f78b]
aws(+0x3b1f) [0x55705f60fb1f]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f3f9af28bf7]
aws(+0x24fa) [0x55705f60e4fa]
```

Any ideas what to do about this?

**Edit, Jan 1, 2022:** I turned on the `--debug` flag. **There is no debug output.** The command seems to crash immediately. If I run these commands multiple times, they crash at random, and sometimes print "Segmentation fault", which looks likely to be a memory access/management issue :( Something else worth saying: this issue only presents itself on Ubuntu distros. I ran out of options to try and reinstalled another OS (Arch Linux), and did not experience the same problem. Unfortunately, some of the other software I use is not well supported under Arch Linux, so I have to come back to Ubuntu :(
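One detail in the trace worth noting: the frames run Py_Exit → Py_FinalizeEx → PyGC_Collect, i.e. the crash happens during interpreter teardown, after the command itself has finished, so the JSON output may already be complete when the process dies. A hedged CI workaround under that assumption (not a fix for the underlying crash; assumes `jq` is installed on the runner):

```
# Sketch: tolerate the exit-time crash as long as valid JSON was emitted.
out=$(aws lightsail get-container-images --region us-west-2 \
        --service-name my_service --output json) || true
# jq -e . exits non-zero when the captured output is empty or not JSON.
echo "$out" | jq -e . >/dev/null || { echo "no usable output" >&2; exit 1; }
echo "$out"
```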
2 answers · 0 votes · 4 views · AWS-User-Arman · asked 16 days ago

How to upload video files using the REST API after receiving an "upload URL"

I'm working with ShotGrid (an Autodesk service), which makes it possible to upload media to their S3 buckets. The basic idea: the developer sends a request to ShotGrid for an AWS S3 "upload URL". [ShotGrid's upload documentation](https://developer.shotgridsoftware.com/rest-api/?shell#requesting-an-upload-url) explains how to make the request for the "upload URL", and that seems to work just fine, but there is no documentation explaining how to actually execute the upload after receiving it. So far I'm getting errors, the most telling of which shows "SignatureDoesNotMatch / The request signature we calculated does not match the signature you provided. Check your key and signing method." More detail below.

I've tried the following. The request for the 'upload URL' is:

```
curl -X GET 'https://myshow.shotgrid.autodesk.com/api/v1/entity/Version/{VersionId}/_upload?filename={FileName}' \
  -H 'Authorization: Bearer {BearerToken}' \
  -H 'Accept: application/json'
```

The result is:

```
{
  "UrlRequest": {
    "data": {
      "timestamp": "[timestamp]",
      "upload_type": "Attachment",
      "upload_id": null,
      "storage_service": "s3",
      "original_filename": "[FileName]",
      "multipart_upload": false
    },
    "links": {
      "upload": "https://[s3domain].amazonaws.com/[longstring1]/[longstring2]/[FileName]?X-Amz-Algorithm=[Alg]&X-Amz-Credential=[Creds]&X-Amz-Date=[Date]&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Security-Token=[Token]&X-Amz-Signature=[Signature]",
      "complete_upload": "/api/v1/entity/versions/{VersionId}/_upload"
    }
  }
}
```

Then the upload request:

```
curl -X PUT -H 'x-amz-signature=[Signature-See-Above]' -d '@/Volumes/Path/To/Upload/Media' 'https://[uploadUrlFromAbove]'
```

And I get the following error:

```
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
</Error>
```
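A presigned URL of this shape carries its entire authorization in the query string (`X-Amz-SignedHeaders=host` means only the Host header was signed), so one plausible reading of the error is that the request must be sent exactly as issued: the extra `x-amz-signature` header is at best redundant and at worst invalidates the request, and curl's `-d` mangles binary bodies (it strips newlines and sets a form Content-Type). A hedged sketch of the upload under that reading:

```
# Sketch: PUT the raw file to the presigned URL with no extra auth headers.
# --upload-file streams the file unmodified and already implies PUT.
curl --upload-file '/Volumes/Path/To/Upload/Media' 'https://[uploadUrlFromAbove]'
```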
3 answers · 0 votes · 8 views · Trln · asked 19 days ago

Need suggestions to automate converting .glb files to .usdz using a Docker command on an EC2 instance

We have set up Docker on an EC2 instance (i-05d**** (ARubntu)) for the purpose of converting GLB files to USDZ. It is implemented properly, and we are able to use it from the EC2 command line. But we want to offer this conversion to our users on our webpage: they first upload the GLB file (this we have done successfully), but we now need to trigger the conversion from the webpage, and we have no idea how to implement that and need help. As shown in the sketch after this list:

1. First step: the file is uploaded to an S3 bucket, in our case bucket_name (ap-south-1).
2. Second step: convert the .glb file into .usdz. Done manually with the Docker command below, the result is successfully uploaded to the same bucket:

```
docker run -e INPUT_GLB_S3_FILEPATH='bucket_name/10_Dinesh/8732f71f6eca07050f62b014354c5/model.glb' \
  -e OUTPUT_USDZ_FILE='model.usdz' \
  -e OUTPUT_S3_PATH='bucket_name/10_Dinesh/8732f71f6eca07050f62b014354c5' \
  -e AWS_REGION='ap-south-1' \
  -e AWS_ACCESS_KEY_ID='AKIA6N3W****' \
  -e AWS_SECRET_ACCESS_KEY='0GuRz3b1X8****' \
  -it --rm awsleochan/docker-glb-to-usdz-to-s3
```

3. Now we want to automate this task: whenever any user uploads a .glb file to the S3 bucket, a .usdz file should be produced in the same bucket. Does anyone have a solution for this? We just want the object path in this command to be updated automatically.
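The usual pattern for "run something whenever an object lands in S3" is an S3 event notification: a Lambda subscribed to `s3:ObjectCreated:*` receives the bucket and key from the event and can launch the conversion, for a Docker workload typically by starting an ECS/Batch task wrapping the same image. A hedged sketch of the wiring, with a hypothetical Lambda ARN and account ID:

```
# Sketch: fire a Lambda (hypothetical ARN) for every new .glb object; the
# Lambda derives INPUT_GLB_S3_FILEPATH / OUTPUT_S3_PATH from the event key
# and starts the conversion job.
aws s3api put-bucket-notification-configuration \
  --bucket bucket_name \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:ap-south-1:111122223333:function:glb-to-usdz",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {"Key": {"FilterRules": [{"Name": "suffix", "Value": ".glb"}]}}
    }]
  }'
```

S3 also needs permission to invoke the function, granted via `aws lambda add-permission` with principal s3.amazonaws.com (again an assumption about this particular wiring).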
2 answers · 0 votes · 3 views · AWS_Reality-bit · asked a month ago

Error when passing s3=None to ExportDefinition

There seems to be a problem with passing **None** as a value for the s3 parameter of the ExportDefinition when calling create_message_stream. The following code produces the error below; but if I remove the **s3=None** parameter altogether, the code runs fine.

```
client.create_message_stream(MessageStreamDefinition(
    name="WednesDayStream",  # Required.
    max_size=268435456,  # Default is 256 MB.
    stream_segment_size=16777216,  # Default is 16 MB.
    time_to_live_millis=None,  # By default, no TTL is enabled.
    strategy_on_full=StrategyOnFull.OverwriteOldestData,  # Required.
    persistence=Persistence.File,  # Default is File.
    flush_on_write=False,  # Default is false.
    export_definition=ExportDefinition(
        # Optional. Choose where/how the stream is exported to the AWS Cloud.
        kinesis=None,
        iot_analytics=None,
        iot_sitewise=None,
        s3=None
    )
))
```

```
2021-02-05T15:09:17.112Z [INFO] (aws.greengrass.TokenExchangeService-lifecycle) com.example.stream.create: shell-runner-start. {scriptName=services.com.example.stream.create.lifecycle.Run.script, serviceName=com.example.stream.create, currentState=STARTING, command=["ln -f -s -t . /greengrass/v2/packages/artifacts/com.example.stream.create/1.0...."]}
2021-02-05T15:09:17.436Z [INFO] (Copier) com.example.stream.create: Run script exited. {exitCode=137, serviceName=com.example.stream.create, currentState=FINISHED}
2021-02-05T15:09:57.236Z [INFO] (pool-2-thread-26) com.example.stream.create: shell-runner-start. {scriptName=services.com.example.stream.create.lifecycle.Run.script, serviceName=com.example.stream.create, currentState=STARTING, command=["ln -f -s -t . /greengrass/v2/packages/artifacts/com.example.stream.create/1.0...."]}
2021-02-05T15:09:57.678Z [WARN] (Copier) com.example.stream.create: stderr. Traceback (most recent call last):. {scriptName=services.com.example.stream.create.lifecycle.Run.script, serviceName=com.example.stream.create, currentState=RUNNING}
2021-02-05T15:09:57.683Z [WARN] (Copier) com.example.stream.create: stderr. File "create-stream.py", line 27, in <module>. {scriptName=services.com.example.stream.create.lifecycle.Run.script, serviceName=com.example.stream.create, currentState=RUNNING}
2021-02-05T15:09:57.684Z [WARN] (Copier) com.example.stream.create: stderr. s3=None. {scriptName=services.com.example.stream.create.lifecycle.Run.script, serviceName=com.example.stream.create, currentState=RUNNING}
2021-02-05T15:09:57.684Z [WARN] (Copier) com.example.stream.create: stderr. TypeError: __init__() got an unexpected keyword argument 's3'. {scriptName=services.com.example.stream.create.lifecycle.Run.script, serviceName=com.example.stream.create, currentState=RUNNING}
2021-02-05T15:09:57.705Z [INFO] (Copier) com.example.stream.create: Run script exited. {exitCode=1, serviceName=com.example.stream.create, currentState=RUNNING}
```
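The TypeError is ordinary Python keyword checking, so the installed SDK's ExportDefinition evidently has no parameter named `s3`; in some Stream Manager SDK versions the S3 export field is named `s3_task_executor` instead, though that is an assumption worth verifying rather than something the log confirms. A hedged one-liner to print the constructor's actual signature, assuming the Greengrass v2 SDK's `stream_manager` module:

```
# Sketch: list the keyword arguments this SDK version really accepts,
# instead of guessing (module name "stream_manager" is an assumption).
python3 -c 'import inspect; from stream_manager import ExportDefinition; \
print(inspect.signature(ExportDefinition.__init__))'
```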
3 answers · 0 votes · 0 views · DarrenB · asked a year ago

Cloudwatch alarms created via CLI (CWAgent) fail "Unchecked: Initial alarm"

I am having issues with the CloudWatch API/CLI. Below I have included three API/CLI calls to `put-metric-alarm` (with `describe-alarms` to show the resulting values): with unit=Percent, with unit=None, and with no unit at all. In the wizard/console, I don't select a unit. Through the wizard, the alarm works; via the API/CLI, all three ways, the alarm never passes "INSUFFICIENT_DATA" / "Unchecked: Initial alarm creation". What is the issue and how can I fix it? Thank you. NOTE: "XXXXX" is used to obfuscate identifying items.

```
aws cloudwatch put-metric-alarm --region us-west-2 --alarm-name "HighMemPercentUsedAlarm EC2 - API - unit:Percent" --alarm-description "Alarm when Memory Percent Usage exceeds 95 percent" --metric-name mem_used_percent --namespace CWAgent --datapoints-to-alarm 2 --statistic Average --period 900 --threshold 95.0 --comparison-operator GreaterThanThreshold --dimensions "Name=InstanceId,Value=i-XXXXX" --evaluation-periods 2 --alarm-actions arn:aws:sns:us-west-2:XXXXX:Default_CloudWatch_Alarms_Topic --treat-missing-data "missing" --unit Percent

aws cloudwatch describe-alarms --alarm-names "HighMemPercentUsedAlarm EC2 - API - unit:Percent"
{
    "MetricAlarms": [
        {
            "Dimensions": [{"Name": "InstanceId", "Value": "i-XXXXX"}],
            "Namespace": "CWAgent",
            "DatapointsToAlarm": 2,
            "ActionsEnabled": true,
            "MetricName": "mem_used_percent",
            "EvaluationPeriods": 2,
            "StateValue": "INSUFFICIENT_DATA",
            "StateUpdatedTimestamp": "2020-06-05T16:44:16.620Z",
            "AlarmConfigurationUpdatedTimestamp": "2020-06-05T16:44:16.620Z",
            "AlarmActions": ["arn:aws:sns:us-west-2:XXXXX:Default_CloudWatch_Alarms_Topic"],
            "InsufficientDataActions": [],
            "AlarmArn": "arn:aws:cloudwatch:us-west-2:XXXXX:alarm:HighMemPercentUsedAlarm EC2 - API - unit:Percent",
            "Threshold": 95.0,
            "StateReason": "Unchecked: Initial alarm creation",
            "OKActions": [],
            "AlarmDescription": "Alarm when Memory Percent Usage exceeds 95 percent",
            "Period": 900,
            "ComparisonOperator": "GreaterThanThreshold",
            "AlarmName": "HighMemPercentUsedAlarm EC2 - API - unit:Percent",
            "Statistic": "Average",
            "TreatMissingData": "missing",
            "Unit": "Percent"
        }
    ]
}
```

================================================================================================

```
aws cloudwatch put-metric-alarm --region us-west-2 --alarm-name "HighMemPercentUsedAlarm EC2 - API - unit:None" --alarm-description "Alarm when Memory Percent Usage exceeds 95 percent" --metric-name mem_used_percent --namespace CWAgent --datapoints-to-alarm 2 --statistic Average --period 900 --threshold 95.0 --comparison-operator GreaterThanThreshold --dimensions "Name=InstanceId,Value=i-XXXXX" --evaluation-periods 2 --alarm-actions arn:aws:sns:us-west-2:XXXXX:Default_CloudWatch_Alarms_Topic --treat-missing-data "missing" --unit None

aws cloudwatch describe-alarms --alarm-names "HighMemPercentUsedAlarm EC2 - API - unit:None"
{
    "MetricAlarms": [
        {
            "Dimensions": [{"Name": "InstanceId", "Value": "i-XXXXX"}],
            "Namespace": "CWAgent",
            "DatapointsToAlarm": 2,
            "ActionsEnabled": true,
            "MetricName": "mem_used_percent",
            "EvaluationPeriods": 2,
            "StateValue": "INSUFFICIENT_DATA",
            "StateUpdatedTimestamp": "2020-06-05T16:44:27.434Z",
            "AlarmConfigurationUpdatedTimestamp": "2020-06-05T16:44:27.434Z",
            "AlarmActions": ["arn:aws:sns:us-west-2:XXXXX:Default_CloudWatch_Alarms_Topic"],
            "InsufficientDataActions": [],
            "AlarmArn": "arn:aws:cloudwatch:us-west-2:XXXXX:alarm:HighMemPercentUsedAlarm EC2 - API - unit:None",
            "Threshold": 95.0,
            "StateReason": "Unchecked: Initial alarm creation",
            "OKActions": [],
            "AlarmDescription": "Alarm when Memory Percent Usage exceeds 95 percent",
            "Period": 900,
            "ComparisonOperator": "GreaterThanThreshold",
            "AlarmName": "HighMemPercentUsedAlarm EC2 - API - unit:None",
            "Statistic": "Average",
            "TreatMissingData": "missing",
            "Unit": "None"
        }
    ]
}
```

================================================================================================

```
aws cloudwatch put-metric-alarm --region us-west-2 --alarm-name "HighMemPercentUsedAlarm EC2 - API" --alarm-description "Alarm when Memory Percent Usage exceeds 95 percent" --metric-name mem_used_percent --namespace CWAgent --datapoints-to-alarm 2 --statistic Average --period 900 --threshold 95.0 --comparison-operator GreaterThanThreshold --dimensions "Name=InstanceId,Value=i-XXXXX" --evaluation-periods 2 --alarm-actions arn:aws:sns:us-west-2:XXXXX:Default_CloudWatch_Alarms_Topic --treat-missing-data "missing"

aws cloudwatch describe-alarms --alarm-names "HighMemPercentUsedAlarm EC2 - API"
{
    "MetricAlarms": [
        {
            "EvaluationPeriods": 2,
            "TreatMissingData": "missing",
            "AlarmArn": "arn:aws:cloudwatch:us-west-2:XXXXX:alarm:HighMemPercentUsedAlarm EC2 - API",
            "StateUpdatedTimestamp": "2020-06-05T16:44:34.574Z",
            "AlarmConfigurationUpdatedTimestamp": "2020-06-05T16:44:34.574Z",
            "ComparisonOperator": "GreaterThanThreshold",
            "AlarmActions": ["arn:aws:sns:us-west-2:XXXXX:Default_CloudWatch_Alarms_Topic"],
            "AlarmDescription": "Alarm when Memory Percent Usage exceeds 95 percent",
            "Namespace": "CWAgent",
            "Period": 900,
            "StateValue": "INSUFFICIENT_DATA",
            "Threshold": 95.0,
            "AlarmName": "HighMemPercentUsedAlarm EC2 - API",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-XXXXX"}],
            "DatapointsToAlarm": 2,
            "Statistic": "Average",
            "StateReason": "Unchecked: Initial alarm creation",
            "InsufficientDataActions": [],
            "OKActions": [],
            "ActionsEnabled": true,
            "MetricName": "mem_used_percent"
        }
    ]
}
```

================================================================================================

Same via the wizard/console:

```
aws cloudwatch describe-alarms --alarm-names "HighMemPercentUsedAlarm EC2 - Wizard"
{
    "MetricAlarms": [
        {
            "Dimensions": [
                {"Name": "InstanceId", "Value": "i-XXXXX"},
                {"Name": "ImageId", "Value": "ami-XXXXX"},
                {"Name": "InstanceType", "Value": "m5.2xlarge"}
            ],
            "Namespace": "CWAgent",
            "DatapointsToAlarm": 2,
            "ActionsEnabled": true,
            "MetricName": "mem_used_percent",
            "EvaluationPeriods": 2,
            "StateValue": "OK",
            "StateUpdatedTimestamp": "2020-06-05T16:43:06.797Z",
            "AlarmConfigurationUpdatedTimestamp": "2020-06-05T16:41:52.619Z",
            "AlarmActions": ["arn:aws:sns:us-west-2:XXXXX:Default_CloudWatch_Alarms_Topic"],
            "InsufficientDataActions": [],
            "AlarmArn": "arn:aws:cloudwatch:us-west-2:XXXXX:alarm:HighMemPercentUsedAlarm EC2 - Wizard",
            "StateReasonData": "{\"version\":\"1.0\",\"queryDate\":\"2020-06-05T16:43:06.795+0000\",\"startDate\":\"2020-06-05T16:13:00.000+0000\",\"statistic\":\"Average\",\"period\":900,\"recentDatapoints\":[14.36740102848503,15.074661280217848],\"threshold\":95.0}",
            "Threshold": 95.0,
            "StateReason": "Threshold Crossed: 2 out of the last 2 datapoints 15.074661280217848 (05/06/20 16:28:00), 14.36740102848503 (05/06/20 16:13:00) were not greater than the threshold (95.0) (minimum 1 datapoint for ALARM -> OK transition).",
            "OKActions": [],
            "AlarmDescription": "Alarm when Memory Percent Usage exceeds 95 percent",
            "Period": 900,
            "ComparisonOperator": "GreaterThanThreshold",
            "AlarmName": "HighMemPercentUsedAlarm EC2 - Wizard",
            "Statistic": "Average",
            "TreatMissingData": "missing"
        }
    ]
}
```
2 answers · 0 votes · 1 view · markwilhelm · asked 2 years ago

API export and import - Invalid OpenAPI input.

I run into an error when trying to **import** an API definition in JSON format via the AWS CLI. The file I try to import is an export of that same API, which I also **exported** via the AWS CLI. Moreover, if I import that JSON via the AWS web console (via Gateway > API > Resources > Actions > Import), it works just fine. I hit this with my project API, but I can also reproduce it with a simple API definition.

**Reproduction steps** (be sure to replace the <variable> parts):

1. Create an API, add a (POST) method of integration type "Mock".
2. Export via AWS CLI:

```
aws apigateway --profile "<MyAWSProfile>" get-export --rest-api-id <MyAPIid> --parameters extensions='apigateway' --stage-name <MyStageName> --export-type oas30 "api_definition.json"
```

3. Import via AWS CLI:

```
aws apigateway --profile "<MyAWSProfile>" put-rest-api --rest-api-id <MyAPIid> --mode overwrite --body "api_definition.json"
```

The above gives this trace + error:

```
2020-04-28 15:21:59,700 - MainThread - awscli.clidriver - DEBUG - Exception caught in main()
Traceback (most recent call last):
  File ".\lib\site-packages\awscli\clidriver.py", line 217, in main
    return command_table[parsed_args.command](remaining, parsed_args)
  File ".\lib\site-packages\awscli\clidriver.py", line 358, in __call__
    return command_table[parsed_args.operation](remaining, parsed_globals)
  File ".\lib\site-packages\awscli\clidriver.py", line 530, in __call__
    call_parameters, parsed_globals)
  File ".\lib\site-packages\awscli\clidriver.py", line 650, in invoke
    client, operation_name, parameters, parsed_globals)
  File ".\lib\site-packages\awscli\clidriver.py", line 662, in _make_client_call
    **parameters)
  File ".\lib\site-packages\botocore\client.py", line 316, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File ".\lib\site-packages\botocore\client.py", line 626, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the PutRestApi operation: Invalid OpenAPI input.
2020-04-28 15:21:59,702 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255

An error occurred (BadRequestException) when calling the PutRestApi operation: Invalid OpenAPI input.
```

I would expect that an import via the AWS CLI of the exported definition (exported via the same tool) would just work. Am I overlooking something? A beginner's mistake? (I am fairly new to AWS :-))
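A plausible culprit (an assumption, but a common one): `put-rest-api` takes `--body` as a blob, and without a `file://`/`fileb://` prefix the CLI sends the literal string "api_definition.json" as the API definition, which is indeed invalid OpenAPI. A hedged retry:

```
# Sketch: load the exported file as the request body instead of passing
# the filename as a string (fileb:// for CLI v2; file:// also works on v1).
aws apigateway put-rest-api --profile "<MyAWSProfile>" \
  --rest-api-id <MyAPIid> --mode overwrite \
  --body 'fileb://api_definition.json'
```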
2 answers · 0 votes · 1 view · DTR · asked 2 years ago

AWS STS Temporary Credentials S3 Access Denied PutObject

I am following the blog post at <https://www.2ndwatch.com/blog/use-aws-iam-sts-access-aws-resources/>. My understanding is that the S3 bucket policy needs a Principal for the identity requesting the temporary credentials: one approach would be to hard-code the Id of this user, but the post attempts a more elegant approach of evaluating the STS delegated user (when it works).

arn:aws:iam::409812999999999999:policy/fts-assume-role IAM user policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "*"
        }
    ]
}
```

arn:aws:s3:::finding-taylor-swift s3 bucket policy:

```
{
    "Version": "2012-10-17",
    "Id": "Policy1581282599999999999",
    "Statement": [
        {
            "Sid": "Stmt158128999999999999",
            "Effect": "Allow",
            "Principal": {
                "Service": "sts.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::finding-taylor-swift/*"
        }
    ]
}
```

```
conor@xyz:~$ aws configure --profile finding-taylor-swift
AWS Access Key ID [****************6QNY]:
AWS Secret Access Key [****************+8kF]:
Default region name [eu-west-2]:
Default output format [text]: json
conor@xyz:~$ aws sts get-session-token --profile finding-taylor-swift
{
    "Credentials": {
        "SecretAccessKey": "<some text>",
        "SessionToken": "<some text>",
        "Expiration": "2020-02-11T03:31:50Z",
        "AccessKeyId": "<some text>"
    }
}
conor@xyz:~$ export AWS_SECRET_ACCESS_KEY=<some text>
conor@xyz:~$ export AWS_SESSION_TOKEN=<some text>
conor@xyz:~$ export AWS_ACCESS_KEY_ID=<some text>
conor@xyz:~$ aws s3 cp dreamstime_xxl_concert_profile_w500_q8.jpg s3://finding-taylor-swift
upload failed: ./dreamstime_xxl_concert_profile_w500_q8.jpg to s3://finding-taylor-swift/dreamstime_xxl_concert_profile_w500_q8.jpg An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
conor@xyz:~$
```

The AWS CLI has been set up as described in <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html#using-temp-creds-sdk-cli>:

"When you run AWS CLI commands, the AWS CLI looks for credentials in a specific order: first in environment variables and then in the configuration file. Therefore, after you've put the temporary credentials into environment variables, the AWS CLI uses those credentials by default. (If you specify a profile parameter in the command, the AWS CLI skips the environment variables. Instead, the AWS CLI looks in the configuration file, which lets you override the credentials in the environment variables if you need to.) The following example shows how you might set the environment variables for temporary security credentials and then call an AWS CLI command. Because no profile parameter is included in the AWS CLI command, the AWS CLI looks for credentials first in environment variables and therefore uses the temporary credentials."

So far Stack Overflow has only suggested using assume-role: https://stackoverflow.com/questions/60153869/aws-sts-temporary-credentials-s3-access-denied-putobject
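One hedged reading of the AccessDenied (consistent with the linked discussion, not confirmed here): credentials from `get-session-token` are still evaluated as the IAM user itself, not as an "sts.amazonaws.com" service principal, so the bucket policy's `"Principal": {"Service": "sts.amazonaws.com"}` matches nothing; naming the user's ARN would. A sketch with a hypothetical user name:

```
# Sketch: grant PutObject to the IAM user whose keys requested the
# session token (user name "conor" is hypothetical).
aws s3api put-bucket-policy --bucket finding-taylor-swift --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::409812999999999999:user/conor"},
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::finding-taylor-swift/*"
  }]
}'
```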
1 answer · 0 votes · 0 views · C9n7r · asked 2 years ago