Questions tagged with AWS Command Line Interface


Hi, I am deploying a Lambda function that uses the NLTK packages for preprocessing text. For the application to work I need to download the stopwords, punkt, and wordnet data. I deployed using a Docker image and the SAM CLI. When the function runs on AWS, I get a series of errors when trying to access the NLTK data. The first error was that '/home/sbx_user1051/' cannot be written to. Solutions on Stack Overflow pointed me toward storing the NLTK data in the /tmp/ directory, because that is the only writable directory. After redeploying the image with that change, the files are stored in /tmp/, but the Lambda function does not search there when trying to access the stop words. It still searches only these directories:
- '/home/sbx_user1051/nltk_data'
- '/var/lang/nltk_data'
- '/var/lang/share/nltk_data'
- '/var/lang/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'

What should I do to import the NLTK data this function needs when running on AWS Lambda?
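A minimal sketch of one possible fix (the nltk calls are commented out because they assume the NLTK package is installed): NLTK never looks in /tmp on its own; it builds its search path when the module is imported, reading the NLTK_DATA environment variable among other locations, so the variable has to be set before `import nltk`.

```python
import os


def nltk_data_dir() -> str:
    """Return a writable NLTK data directory: /tmp on Lambda, the default elsewhere."""
    # AWS_LAMBDA_FUNCTION_NAME is set by the Lambda runtime environment.
    if os.environ.get("AWS_LAMBDA_FUNCTION_NAME"):
        return "/tmp/nltk_data"
    return os.path.join(os.path.expanduser("~"), "nltk_data")


# In the handler module, before importing nltk:
# os.environ["NLTK_DATA"] = nltk_data_dir()
# import nltk
# for pkg in ("stopwords", "punkt", "wordnet"):
#     nltk.download(pkg, download_dir=nltk_data_dir())
```

Alternatively, `nltk.data.path.append("/tmp/nltk_data")` after importing nltk adds /tmp to the search path at any point; either way, the download and the lookup must agree on the directory.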
0
answers
0
votes
11
views
Tyler
asked a day ago
When I execute the command `aws ec2 export-image --image-id ami-04f516c --disk-image-format vmdk --s3-export-location S3Bucket=ami-export`, I get the following error: "An error occurred (InvalidParameter) when calling the ExportImage operation: Insufficient permissions - please verify bucket ownership and write permissions on the bucket. Bucket: ami-export". I couldn't change the permissions. Can someone help me?
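Some first checks worth running (a sketch using the bucket name from the question): ExportImage requires the destination bucket to exist in your account and in the same Region as the export call, and your credentials need write access to it.

```shell
# Guard so the sketch degrades gracefully where the AWS CLI is unavailable.
if ! command -v aws >/dev/null 2>&1; then
  status="skipped (aws CLI not installed)"
else
  # A 403/404 here would explain the "bucket ownership" part of the error.
  aws s3api head-bucket --bucket ami-export
  # The reported LocationConstraint must match the Region you export from.
  aws s3api get-bucket-location --bucket ami-export
  status="done"
fi
echo "$status"
```

Beyond that, the VM Import/Export prerequisites also require granting the VM Import/Export service access to the bucket; the exact grants are listed in the EC2 "Exporting a VM directly from an AMI" documentation and are worth checking against your bucket.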
3
answers
0
votes
8
views
asked 2 days ago
I am following https://docs.snowflake.com/en/user-guide/admin-security-privatelink to set up PrivateLink between AWS and Snowflake. The first command is `aws sts get-federation-token --name sam`. I am running it in CloudShell as the root user (replacing the name sam), and I get: "An error occurred (AccessDenied) when calling the GetFederationToken operation: Cannot call GetFederationToken with session credentials". Not sure if it has to do with permissions. Please advise.
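The error is literal rather than a permissions problem: GetFederationToken must be called with long-term IAM user credentials, and CloudShell sessions only have temporary session credentials. The two kinds are distinguishable by the access key ID prefix, as this small sketch (with a hypothetical key ID) shows:

```shell
# Temporary (session) credentials have key IDs starting with ASIA;
# long-term IAM user credentials start with AKIA.
key_id="ASIA0EXAMPLE0EXAMPLE"   # hypothetical access key ID
case "$key_id" in
  ASIA*) cred_type="temporary" ;;   # GetFederationToken will be denied
  AKIA*) cred_type="long-term" ;;   # GetFederationToken can succeed
  *)     cred_type="unknown" ;;
esac
echo "$cred_type"
```

If that matches your situation, the fix is to run `aws sts get-federation-token --name sam` somewhere configured with an IAM user's long-term access keys (e.g. after `aws configure` on a local machine), not in CloudShell.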
2
answers
0
votes
20
views
asked 3 days ago
Hi, to add a Transfer Family user with a home directory mapping, I tried the stack template below. However, the home directory was not created after the stack ran, and the user ended up in restricted mode. If I edit the user configuration manually I can uncheck "Restricted", but I want to control this mode from the YAML template. Please help me do this better.

```
GoldcoastTvodUser:
  Type: 'AWS::Transfer::User'
  Properties:
    HomeDirectoryMappings:
      - Entry: /
        Target: /goldcoast-tvod
    HomeDirectoryType: LOGICAL
    Policy:
      'Fn::Sub': |
        {
          "Version": "2012-10-17",
          "Statement": {
            "Sid": "AllowFullAccessToBucket",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
              "arn:aws:s3:::goldcoast-tvod",
              "arn:aws:s3:::goldcoast-tvod/*"
            ]
          }
        }
    Role:
      'Fn::Sub': 'arn:aws:iam::${AWS::AccountId}:role/TransferManagementRole'
    ServerId:
      'Fn::GetAtt': TransferServer.ServerId
    SshPublicKeys:
      - >-
        ssh-rsa AAAAB
    UserName: GoldcoastTvodUser
```
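Two observations, hedged as a sketch rather than a definitive answer. First, an S3 "home directory" is just a key prefix, so nothing is created until an object is written under it. Second, the console's "Restricted" checkbox corresponds to `HomeDirectoryType`: `LOGICAL` mappings produce a restricted (chrooted) user, while an unrestricted user uses a `PATH`-type home directory instead, roughly like this fragment (assuming the same bucket path):

```
# Unrestricted user: PATH-type home directory, no logical mappings.
HomeDirectoryType: PATH
HomeDirectory: /goldcoast-tvod
```

So to "uncheck Restricted" in the template, drop `HomeDirectoryMappings` and switch to `PATH` with an explicit `HomeDirectory`.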
1
answers
0
votes
9
views
asked 3 days ago
I am trying to write a shell script that checks whether an S3 object already has a `retention` tag (with value `10yearsretention` or `6yearsretention`) and, if not, applies the tag. I tried the command below, but it does not work correctly when combined; the put-object-tagging half works on its own.

```
aws s3api get-object-tagging --bucket your-bucket --key your-object-key --query 'TagSet[?Key==`retention` && Value==`10yearsretention` || Value==`6yearsretention`]' >/dev/null 2>> error.log || aws s3api put-object-tagging --bucket your-bucket --key your-object-key --tagging 'TagSet=[{Key=retention,Value=10yearsretention}]' >> error.log
```

When no tag matches, the get-object-tagging query just prints `[]`. Help me write a shell script that does this properly.
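A sketch of one way to do this (bucket and key names are the question's placeholders). The `||` branch never runs because get-object-tagging exits with status 0 even when the filter matches nothing — it just prints `[]` — so the script has to test the query's output instead of its exit status. Note also the added parentheses in the filter: in the original, `Value==`6yearsretention`` matched regardless of Key, because `&&` binds tighter than `||` in JMESPath.

```shell
#!/bin/sh
bucket="your-bucket"
key="your-object-key"

# Guard so the sketch degrades gracefully where the AWS CLI is unavailable.
if ! command -v aws >/dev/null 2>&1; then
  status="skipped (aws CLI not installed)"
else
  # --output text prints nothing (empty string) when the filter matches no tag.
  tags=$(aws s3api get-object-tagging --bucket "$bucket" --key "$key" \
    --query 'TagSet[?Key==`retention` && (Value==`10yearsretention` || Value==`6yearsretention`)]' \
    --output text 2>>error.log)

  # Only tag the object when no matching retention tag was found.
  if [ -z "$tags" ]; then
    aws s3api put-object-tagging --bucket "$bucket" --key "$key" \
      --tagging 'TagSet=[{Key=retention,Value=10yearsretention}]' 2>>error.log
  fi
  status="done"
fi
echo "$status"
```

Be aware that put-object-tagging replaces the object's whole tag set, not just the one tag.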
2
answers
0
votes
16
views
asked 3 days ago
While emptying a bucket using the AWS CLI command below, the objects were removed and no objects show for the current version, but with the "Show versions" checkbox enabled more objects appear. I need to empty the other versions' objects as well, otherwise bucket deletion is not possible. Please let me know how to do this quickly. CLI used for emptying the bucket:

```
aws s3 rm s3://<bucketName> --recursive
```

I have also enabled a lifecycle rule on the bucket, but is another command also needed to remove the old versions?
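A sketch of the s3api approach (hypothetical bucket name): on a versioned bucket, `aws s3 rm` only removes current versions by adding delete markers, so the noncurrent versions and the delete markers themselves must be deleted explicitly before the bucket can be deleted.

```shell
bucket="bucketName"   # hypothetical; substitute your bucket

# Guard so the sketch degrades gracefully where the AWS CLI is unavailable.
if ! command -v aws >/dev/null 2>&1; then
  status="skipped (aws CLI not installed)"
else
  # Delete object versions (delete-objects accepts up to 1000 per call;
  # loop for larger buckets).
  versions=$(aws s3api list-object-versions --bucket "$bucket" --max-items 1000 \
    --query '{Objects: Versions[].{Key: Key, VersionId: VersionId}}' --output json)
  aws s3api delete-objects --bucket "$bucket" --delete "$versions"

  # Delete the markers left behind by "aws s3 rm".
  markers=$(aws s3api list-object-versions --bucket "$bucket" --max-items 1000 \
    --query '{Objects: DeleteMarkers[].{Key: Key, VersionId: VersionId}}' --output json)
  aws s3api delete-objects --bucket "$bucket" --delete "$markers"
  status="done"
fi
echo "$status"
```

A lifecycle rule with noncurrent-version expiration will also clean these up, but lifecycle runs asynchronously (typically once a day), so the s3api route (or the console's "Empty" button) is faster when the goal is immediate bucket deletion.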
2
answers
0
votes
32
views
asked 3 days ago
I am getting an error when attempting to use `sam build` to add code into a CloudFormation template. Here is the message log from CloudTrail. I verified that the user has AdministratorAccess as a permission set. Any help would be appreciated.

```
{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "IAMUser",
    "principalId": "######",
    "arn": "arn:aws:iam::#####:user/XXXXX",
    "accountId": "#####",
    "accessKeyId": "######",
    "userName": "XXXXXX"
  },
  "eventTime": "2023-03-22T17:26:19Z",
  "eventSource": "serverlessrepo.amazonaws.com",
  "eventName": "CreateCloudFormationTemplate",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "######",
  "userAgent": "Boto3/1.26.95 Python/3.8.8 Windows/10 Botocore/1.29.95",
  "errorCode": "AccessDenied",
  "requestParameters": {
    "semanticVersion": "latest",
    "applicationId": "#######.dkr.ecr.us-east-1.amazonaws.com%2FBATCHJOB"
  },
  "responseElements": {
    "Access-Control-Expose-Headers": "*,Amz-Sdk-Invocation-Id,Amz-Sdk-Request,Authorization,Content-Length,Content-Type,Date,Host,x-amz-content-sha256,X-Amz-Date,X-Amz-Security-Token,X-Amz-Target,x-amz-user-agent,x-amzn-platform-id,x-amzn-trace-id",
    "message": "User: arn:aws:iam::######:user/XXXXX is not authorized to perform: serverlessrepo:CreateCloudFormationTemplate on resource: ######.dkr.ecr.us-east-1.amazonaws.com/BATCHJOB"
  },
  "requestID": "98fb4cc7-1907-4472-a161-67fc75492d81",
  "eventID": "f3688202-a889-42d1-ab56-82dfc7002cd4",
  "readOnly": false,
  "eventType": "AwsApiCall",
  "managementEvent": true,
  "recipientAccountId": "######",
  "eventCategory": "Management"
}
```
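Two hedged observations on the event above. If the user genuinely has AdministratorAccess, an explicit allow cannot be the issue, so look for a Service Control Policy or permissions boundary denying `serverlessrepo:CreateCloudFormationTemplate`. Also note that the `applicationId` in the request is an ECR image URI, whereas the Serverless Application Repository expects an application ARN there, which may be the real problem. For completeness, a minimal identity-policy sketch for the missing action would look like:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "serverlessrepo:CreateCloudFormationTemplate",
      "Resource": "*"
    }
  ]
}
```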
1
answers
0
votes
28
views
asked 3 days ago
I am deploying with CodeDeploy through Jenkins. In Jenkins, I created a shell step that queries CodeDeploy's deployment ID with the AWS CLI. Here is the command:
```
aws deploy list-deployments --application-name [my-application-name] --deployment-group-name [my-deployment-group-name] --query "deployments[0]" --output text
```
For other deployment groups, exactly one deployment ID is output as expected, but for one specific deployment group, two were output. One was the most recent deployment, but the second was a deployment ID from a deployment 4 months ago. What could be the cause of this output? Additionally, how can I delete the deployment history in CodeDeploy?
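A likely explanation, sketched below with the question's placeholder names: list-deployments is paginated, and when the CLI paginates, a `--query` filter can be applied to each page of results rather than once to the combined list, so `deployments[0]` prints one ID per page once the group has enough history to span two pages. Limiting the result set avoids this:

```shell
# Guard so the sketch degrades gracefully where the AWS CLI is unavailable.
if ! command -v aws >/dev/null 2>&1; then
  status="skipped (aws CLI not installed)"
else
  # --max-items 1 makes the CLI return a single (most recent) deployment,
  # so "deployments[0]" can only ever print one ID.
  aws deploy list-deployments \
    --application-name my-application-name \
    --deployment-group-name my-deployment-group-name \
    --max-items 1 \
    --query "deployments[0]" --output text
  status="done"
fi
echo "$status"
```

`--no-paginate` (fetch only the first page) is another workaround. As for the second question: as far as I know CodeDeploy exposes no API to delete deployment history; old records age out on their own.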
1
answers
0
votes
17
views
joker
asked 4 days ago
I am getting a NoSuchUpload error when uploading a part of a multipart upload via a PUT request to a presigned URL. The error message says, "The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed." The upload ID is not invalid, as I have verified it via the list-multipart-uploads command in the AWS CLI, and I have neither aborted nor completed the upload. This is the full error:
```
<?xml version="1.0"?>
<Error>
  <Code>NoSuchUpload</Code>
  <Message>The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.</Message>
  <UploadId>1kPnI95Sy9xim3DzudwQ4Yno1wIrKT.Lv.wzZ6wqXTM792QfKYZZLavSWOrQxCAgc9mj3E09Nos2xJu_YvaRzAIjD4sx6hO1pOoBNWvzfoFf_Tabbt9d62ebjrKgHHfN</UploadId>
  <RequestId>BNK1E884Y30TM5MF</RequestId>
  <HostId>Iz2brROW9q4ym9UnxLZwoBZp+Af8KkXmFfTm2C86tRHIW1r5w/LWAKU0wSg2bQS4c5K0Xo/yL1A=</HostId>
</Error>
```
I am trying to upload a file to an S3 bucket via the multipart upload method, using the boto3 Python SDK. I generated the upload_id for a 20 MB file with key `<uuid4>/files/test-user-data/<uuid4>_0001.mp4` using the `create_multipart_upload` method.
Then I generated a presigned URL for each 5 MB chunk of the file as follows:
```
params = {'Bucket': <bucket_name>, 'Key': <key>, 'UploadId': <upload_id>, 'PartNumber': <chunk_id>}
s3_client.generate_presigned_url(ClientMethod='upload_part', Params=params, ExpiresIn=3600)
```
I got the following presigned URL:
```
https://<bucket_name>.s3.amazonaws.com/3be4b390-f01c-4cfb-bac0-ecf1534a335a/files/3be4b390-f01c-4cfb-bac0-ecf1534a335a/files/test-user-data/725f5643-6dc8-4d48-ad7b-d73479aa5752_25bceabd-343e-4d9c-82ab-0577dc551a69_0001.mp4?uploadId=1kPnI95Sy9xim3DzudwQ4Yno1wIrKT.Lv.wzZ6wqXTM792QfKYZZLavSWOrQxCAgc9mj3E09Nos2xJu_YvaRzAIjD4sx6hO1pOoBNWvzfoFf_Tabbt9d62ebjrKgHHfN&partNumber=1&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATTAUJC2AI5OKS5GA%2F20221119%2Fap-south-1%2Fs3%2Faws4_request&X-Amz-Date=20221119T041617Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=aa26c2263d36fd3e47e05077fa32c4631cefc42c0b7c009341f9b52804cbc97e
```
Then I sent a PUT request to the presigned URL as follows:
```
s3_response = requests.put(url=<presigned_url>, files={'file': <chunk>})
```
Here `chunk` is a bytes object. I expected the file to upload successfully. Initially I suspected I might be sending an incorrect upload_id, as the error message suggests, but I ruled that out after writing an automated test case.
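Two things worth checking, hedged as observations rather than a definitive diagnosis. NoSuchUpload means the (Bucket, Key, UploadId) triple does not name an open upload, and the Key in the generated URL above repeats the `<uuid>/files/` prefix, so it may not be byte-for-byte identical to the Key passed to `create_multipart_upload`. Separately, `requests.put(..., files=...)` wraps the bytes in multipart/form-data framing; a presigned `upload_part` PUT expects the raw part body via `data=`. A sketch of the part layout with the boto3/requests calls commented (they assume those libraries and live credentials):

```python
def part_ranges(total_size: int, part_size: int = 5 * 1024 * 1024):
    """Yield (part_number, start, end) byte offsets for each upload part."""
    for part, start in enumerate(range(0, total_size, part_size), start=1):
        yield part, start, min(start + part_size, total_size)


# for part, start, end in part_ranges(20 * 1024 * 1024):
#     url = s3_client.generate_presigned_url(
#         ClientMethod='upload_part',
#         Params={'Bucket': bucket,
#                 'Key': key,              # must match create_multipart_upload exactly
#                 'UploadId': upload_id,
#                 'PartNumber': part},
#         ExpiresIn=3600)
#     requests.put(url, data=data[start:end])   # raw bytes, not files={'file': ...}
```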
0
answers
0
votes
7
views
asked 5 days ago
Hi all, I am trying to use boto3 to do some KMS operations. I keep getting an error that my security token is invalid. I've gone through the various posts I could find and was not able to find any resolution. Things I have checked so far:
* I am not using any special region; everything is in standard us-east-1, nothing fancy.
* I have created a user that has AdministratorAccess and created security access credentials for this user.
* I have tried putting these into the credentials file and supplying them through the client() constructor.

My code snippet:
```
import boto3

aws_access_key_id = "XXX"
aws_secret_access_key = "XXX"

client = boto3.client('sts',
    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
)
resp = client.get_session_token()
key = resp['Credentials']['AccessKeyId']
secret = resp['Credentials']['SecretAccessKey']
session_token = resp['Credentials']['SessionToken']

client = boto3.client(
    'kms',
    aws_access_key_id="\"" + key + "\"",
    aws_secret_access_key="\"" + secret + "\"",
    aws_session_token="\"" + session_token + "\""
)
response = client.generate_data_key_pair_without_plaintext(
    KeyId='XXX',
    KeyPairSpec='ECC_NIST_P384',
)
```
My code fails on the last line:
```
Traceback (most recent call last):
  File "C:\pathToTestScript.py", line 28, in <module>
    response = client.generate_data_key_pair_without_plaintext(
  File "C:\Users\benarnao\AppData\Roaming\Python\Python310\site-packages\botocore\client.py", line 530, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "C:\Users\benarnao\AppData\Roaming\Python\Python310\site-packages\botocore\client.py", line 961, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (UnrecognizedClientException) when calling the GenerateDataKeyPairWithoutPlaintext operation: The security token included in the request is invalid.
```
I am able to get the session token from STS, and I notice this returns a temporary key and secret as well.
I have tried the new set of credentials as well as the existing credentials plus the security token, with no luck. For some reason the key and secret seemed to require surrounding quotes when supplied through the client() constructor; I have tried this with and without quotes for the session token parameter. Any ideas?
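A likely culprit, sketched with a hypothetical key ID: the added `"\"" + key + "\""` quotes become literal characters in the credential string, so KMS receives a token that really is invalid. Credentials should be passed exactly as STS returns them, with no quoting.

```python
key = "ASIA0EXAMPLE0EXAMPLE"   # hypothetical temporary access key ID
quoted = '"' + key + '"'

# The quoted value is a different (and therefore invalid) credential string.
assert quoted != key
assert quoted.strip('"') == key

# So the KMS client should be built without the added quotes:
# client = boto3.client(
#     'kms',
#     aws_access_key_id=key,
#     aws_secret_access_key=secret,
#     aws_session_token=session_token,
# )
print("quoted and unquoted differ:", quoted != key)
```

Note also that the get_session_token step is unnecessary here: the IAM user's long-term keys can be passed straight to `boto3.client('kms', ...)`.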
1
answers
0
votes
24
views
AWS
asked 5 days ago
I'm trying to do something I thought was easy, but my Google fu is failing me. I have been provided an API Gateway endpoint that I must call with a GET to download a file onto my EC2 instance, and the request has to be signed. I see all kinds of SDKs and examples, but nothing for the CLI. I don't see an AWS CLI command that will let me call a gateway endpoint (test-invoke doesn't seem right). Is there one? If not, is there a simple way to use the AWS CLI to create a signed request that I can send with Invoke-WebRequest (PowerShell) to download the file? The IAM permissions are in place, and the EC2 instance profile does have the invoke permission for the API.
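The AWS CLI has no generic "signed HTTP GET" command, but curl 7.75+ can produce SigV4-signed requests itself, so no SDK is needed. A sketch with a hypothetical endpoint, assuming credentials are available as environment variables (on EC2 they can also be read from the instance-profile metadata):

```shell
if ! command -v curl >/dev/null 2>&1; then
  status="skipped (curl not installed)"
elif [ -z "${AWS_ACCESS_KEY_ID:-}" ]; then
  status="skipped (no AWS credentials in environment)"
else
  # "aws:amz:<region>:execute-api" tells curl which service/region to sign for.
  curl --aws-sigv4 "aws:amz:us-east-1:execute-api" \
    --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
    -H "x-amz-security-token: ${AWS_SESSION_TOKEN:-}" \
    -o file.bin \
    "https://abc123.execute-api.us-east-1.amazonaws.com/prod/myfile"
  status="done"
fi
echo "$status"
```

On Windows, the bundled curl.exe may predate 7.75, so check `curl --version` first; the pip-installable `awscurl` tool is an alternative that signs requests the same way.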
1
answers
0
votes
42
views
asked 9 days ago
Hi, I am trying to update an existing Cognito user pool to send emails using a third-party provider, following every detail in https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-custom-email-sender.html. However, when I update the pool using the CLI command `aws cognito-idp update-user-pool --lambda-config ...`, I receive the following error:

```
Parameter validation failed:
Unknown parameter in LambdaConfig: "CustomEmailSender", must be one of: PreSignUp, CustomMessage, PostConfirmation, PreAuthentication, PostAuthentication, DefineAuthChallenge, CreateAuthChallenge, VerifyAuthChallengeResponse, PreTokenGeneration, UserMigration
Unknown parameter in LambdaConfig: "KMSKeyID", must be one of: PreSignUp, CustomMessage, PostConfirmation, PreAuthentication, PostAuthentication, DefineAuthChallenge, CreateAuthChallenge, VerifyAuthChallengeResponse, PreTokenGeneration, UserMigration
```

So CustomEmailSender is for some reason rejected as a parameter, and at the same time this setting is not available in the console either. I can set a CustomEmailSender only when I create a user pool using a CloudFormation YAML template, but I am unable to update an existing one. Help with this is highly appreciated.
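"Parameter validation failed" is raised client-side, before any API call, which suggests the installed AWS CLI simply predates the CustomEmailSender/KMSKeyID additions to LambdaConfig. A sketch of the check and the update (pool ID and ARNs are hypothetical):

```shell
# Guard so the sketch degrades gracefully where the AWS CLI is unavailable.
if ! command -v aws >/dev/null 2>&1; then
  status="skipped (aws CLI not installed)"
else
  aws --version   # if this is old, upgrade to a current AWS CLI v2 first
  # After upgrading, the same update should validate:
  # aws cognito-idp update-user-pool \
  #   --user-pool-id us-east-1_EXAMPLE \
  #   --lambda-config 'CustomEmailSender={LambdaVersion=V1_0,LambdaArn=arn:aws:lambda:us-east-1:111122223333:function:MyEmailSender},KMSKeyID=arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID'
  status="done"
fi
echo "$status"
```

One caution: update-user-pool replaces the whole LambdaConfig, so any existing triggers (PreSignUp, CustomMessage, etc.) must be repeated in the same call or they will be removed.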
0
answers
0
votes
11
views
asked 9 days ago