
Questions tagged with AWS Command Line Interface


S3 Access Denied 403 error

Hi AWS, I was learning about the App2Container service using this AWS Workshop https://catalog.us-east-1.prod.workshops.aws/workshops/2c1e5f50-0ebe-4c02-a957-8a71ba1e8c89/en-US, and while deploying the infrastructure using the CloudFormation template provided in Step 1, I ran into the following error:

```
Resource handler returned message: "Your access has been denied by S3, please make sure your request credentials have permission to GetObject for application-migration-with-aws-workshop/lambda/4eb5dfa8efc17763bc41edb070cb9cd2. S3 Error Code: AccessDenied. S3 Error Message: Access Denied (Service: Lambda, Status Code: 403, Request ID: 95687072-37e7-4670-b715-7a0e5bdefd92)" (RequestToken: 09b159a9-c86b-72ef-5d6e-c18bbed29004, HandlerErrorCode: AccessDenied)
```

After that, I updated the IAM user's permissions with the following S3 policy:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::application-migration-with-aws-workshop",
                "arn:aws:s3:::application-migration-with-aws-workshop/lambda/4eb5dfa8efc17763bc41edb070cb9cd2",
                "arn:aws:s3:::application-migration-with-aws-workshop/lambda/438e5a43749a18ff0f4c7a7d0363e695"
            ]
        }
    ]
}
```

Please tell me the reason behind the failure. I know this is an Amazon-owned bucket, so what is missing from a permissions point of view? Thanks
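One detail worth checking in a policy like the one quoted above: `s3:GetObject` only operates on object-level ARNs (those with a key after the bucket name), so a bare bucket ARN in the `Resource` list has no effect on that action. A minimal local sketch (plain Python, with a hypothetical helper name) that classifies the resources in the poster's statement:

```python
import json

# The poster's policy statement, reproduced from the question.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "VisualEditor0",
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": [
      "arn:aws:s3:::application-migration-with-aws-workshop",
      "arn:aws:s3:::application-migration-with-aws-workshop/lambda/4eb5dfa8efc17763bc41edb070cb9cd2",
      "arn:aws:s3:::application-migration-with-aws-workshop/lambda/438e5a43749a18ff0f4c7a7d0363e695"
    ]
  }]
}
""")

def is_object_arn(arn: str) -> bool:
    """An S3 object ARN has a key component ('/...') after the bucket name."""
    resource = arn.split(":::", 1)[-1]
    return "/" in resource

# Only the object-level ARNs are relevant to s3:GetObject.
object_arns = [r for r in policy["Statement"][0]["Resource"] if is_object_arn(r)]
print(len(object_arns))  # 2 of the 3 resources are object-level
```

This only inspects the policy document locally; it says nothing about bucket-side policies on the Amazon-owned workshop bucket, which could also deny the request.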
2 answers · 0 votes · 122 views · asked 2 months ago

X-Ray trace doesn't show inner method call in Spring Boot app

I'm new to AWS X-Ray and trying to use X-Ray with an AOP-based approach in a Spring Boot application. I was able to get the traces in the AWS console, but the traces don't show details for the inner method call method2(). Am I missing anything here?

**Controller class**

```java
import com.amazonaws.xray.spring.aop.XRayEnabled;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/xray")
@XRayEnabled
public class XrayController {

    @GetMapping(value = "/method1")
    public String method1() {
        return method2();
    }

    public String method2() {
        return "Hello";
    }
}
```

**Aspect class**

```java
import com.amazonaws.xray.entities.Subsegment;
import com.amazonaws.xray.spring.aop.BaseAbstractXRayInterceptor;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

import java.util.Map;

@Aspect
@Component
public class XRayInspector extends BaseAbstractXRayInterceptor {

    @Override
    protected Map<String, Map<String, Object>> generateMetadata(ProceedingJoinPoint proceedingJoinPoint, Subsegment subsegment) {
        return super.generateMetadata(proceedingJoinPoint, subsegment);
    }

    @Override
    @Pointcut("@within(com.amazonaws.xray.spring.aop.XRayEnabled) && (bean(*Controller) || bean(*Service) || bean(*Client) || bean(*Mapper))")
    public void xrayEnabledClasses() {}
}
```

When I hit the http://localhost:8080/xray/method1 endpoint, the AWS X-Ray console doesn't show method2() details: ![Enter image description here](https://repost.aws/media/postImages/original/IM77lOgIsWSGyNuiuhOOxsqw)
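A likely factor here, stated as a general property of proxy-based AOP rather than a verified diagnosis of this exact setup: a proxy only intercepts calls that cross the proxy boundary, so method1() invoking method2() directly on the same object bypasses the interceptor entirely. The effect can be sketched language-agnostically in Python (all names below are illustrative, not part of the X-Ray SDK):

```python
# Sketch of proxy-based interception: the proxy records calls made
# *through* it, but a method calling a sibling method directly on
# `self` never goes back through the proxy, so it is never recorded.

class Controller:
    def method1(self):
        return self.method2()  # direct self-call: invisible to any proxy

    def method2(self):
        return "Hello"

class TracingProxy:
    """Wraps a target object and records every call routed through it."""
    def __init__(self, target):
        self._target = target
        self.traced = []  # names of intercepted methods, in call order

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        def wrapper(*args, **kwargs):
            self.traced.append(name)  # "subsegment" recorded here
            return attr(*args, **kwargs)
        return wrapper

proxy = TracingProxy(Controller())
proxy.method1()
print(proxy.traced)  # ['method1'] -- method2 was never intercepted
```

The Spring analogue would be moving method2() into a separate proxied bean, so the call crosses a proxy boundary; that is a common workaround for self-invocation, offered here as a suggestion rather than a confirmed fix.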
1 answer · 0 votes · 81 views · asked 2 months ago

Can't start or stop streaming session with Nimble Studio API

I'm trying to automate the start and stop of a streaming session using the AWS Nimble Studio API. I'm using the Python Boto3 API, and although I got the expected results from other Nimble API methods, start_streaming_session() and stop_streaming_session() respond with "Unknown error parsing request body". At this point, I have no idea what I am doing wrong. I've tried the AWS CLI, and the outcome is the same. I used my account admin IAM user credentials with the Admin Access policy attached.

Here are the logs; I've replaced sensitive information with placeholders: [nimble_studio_logs.txt](https://github.com/boto/boto3/files/9226672/nimble_studio_logs.txt)

And here is the script I'm using:

```python
import boto3

boto3.set_stream_logger('')

session = boto3.Session(aws_access_key_id='my-admin-key',
                        aws_secret_access_key='my-admin-secret-key')
nimble_client = session.client('nimble', region_name='eu-west-2')

studio_id = nimble_client.list_studios()['studios'][0]['studioId']
sessions = nimble_client.list_streaming_sessions(studioId=studio_id)['sessions']

the_session = None  # initialized so the check below can't raise a NameError
for session in sessions:
    if session['state'] == "STOPPED" and \
       session['tags']['aws:nimble:createdWithLaunchProfile'] == 'my-launch-profile-id':
        the_session = session

if the_session:
    response = nimble_client.get_streaming_session(sessionId=the_session['sessionId'], studioId=studio_id)
    response = nimble_client.start_streaming_session(sessionId=the_session['sessionId'], studioId=studio_id)
```
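The session-selection logic in the script above can be exercised locally, independent of the API call that fails. The sample dicts below are hypothetical, shaped only around the fields the script actually reads:

```python
# Hypothetical sample of what list_streaming_sessions might return;
# only the fields the script reads are included.
sessions = [
    {"sessionId": "s-1", "state": "READY",
     "tags": {"aws:nimble:createdWithLaunchProfile": "my-launch-profile-id"}},
    {"sessionId": "s-2", "state": "STOPPED",
     "tags": {"aws:nimble:createdWithLaunchProfile": "other-profile"}},
    {"sessionId": "s-3", "state": "STOPPED",
     "tags": {"aws:nimble:createdWithLaunchProfile": "my-launch-profile-id"}},
]

def pick_session(sessions, launch_profile_id):
    """Return the first STOPPED session created with the given launch profile, else None."""
    return next(
        (s for s in sessions
         if s["state"] == "STOPPED"
         and s["tags"].get("aws:nimble:createdWithLaunchProfile") == launch_profile_id),
        None,
    )

chosen = pick_session(sessions, "my-launch-profile-id")
print(chosen["sessionId"])  # s-3
```

Using `next(..., None)` makes the "no matching session" case explicit, which the original loop only handles if `the_session` is initialized beforehand.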
1 answer · 0 votes · 88 views · asked 2 months ago

Creating IoT OTA update via CLI

Hello, I'm trying to create a FreeRTOS OTA update job but am having trouble with the AWS CLI. When I create the OTA job via the console UI, the update works as expected. However, when I try to create the job via the CLI, I run into an issue (see below for the specific command). The issue appears to be caused by the contents of the job document, specifically the "sig-sha256-ecdsa" field. When I create it using the method below, the contents of the field appear to be the decoded base64 binary:

```
"sig-sha256-ecdsa": "0D\u0002 \r�\u001f\t�M�hj�j}W\\욒��(n5]�i\"\u000b���\nD\u0002 $�\u001b+\u001eV�\u000ed�$�N�(��E؅��\u001et8 ��\u0000\u000f\u001f�"
```

The documentation for the [create-ota-update](https://docs.aws.amazon.com/freertos/latest/userguide/ota-cli-workflow.html) command states that the "inlineDocument" field of the "signature" object is "A base64 encoded binary representation of the code signing signature". However, it appears that I actually need to double-base64-encode the field; is this correct?

For example, the signature in binary is:

```
3044 0220 0dee 1f09 d34d eb68 6ab1 6a7d 575c ec9a 929e 8628 6e35 5de9 6922 0bdb
c2c5 0a44 0220 24c5 1b2b 1e56 a10e 6482 24f9 4ec6 28b3 e045 d885 f79e 1e74 3820
f581 000f 1fc0
```

Base64-encoded once, that becomes `MEQCIA3uHwnTTetoarFqfVdc7JqSnoYobjVd6WkiC9vCxQpEAiAkxRsrHlahDmSCJPlOxiiz4EXYhfeeHnQ4IPWBAA8fwA==`. This is the value seen below that causes the issue. Base64-encoded again, it becomes `TUVRQ0lBM3VId25UVGV0b2FyRnFmVmRjN0pxU25vWW9ialZkNldraUM5dkN4UXBFQWlBa3hSc3JIbGFoRG1TQ0pQbE94aWl6NEVYWWhmZWVIblE0SVBXQkFBOGZ3QT09`. This value appears to work. Is this the correct way to use the API? When using boto3, what is the proper way to make this call? Thank you!
---

Command: `aws iot create-ota-update --cli-input-json file://fotaArguments.json`

fotaArguments.json contents:

```json
{
    "otaUpdateId": "ota-test-23",
    "description": "Testing an update",
    "targets": [
        "arn:aws:iot:us-east-1:xxxx:thing/RCA-AAA"
    ],
    "targetSelection": "SNAPSHOT",
    "files": [
        {
            "fileName": "/path/to/update.bin",
            "fileLocation": {
                "s3Location": {
                    "bucket": "bucket-name",
                    "key": "update.bin"
                }
            },
            "codeSigning": {
                "customCodeSigning": {
                    "signature": {
                        "inlineDocument": "MEQCIA3uHwnTTetoarFqfVdc7JqSnoYobjVd6WkiC9vCxQpEAiAkxRsrHlahDmSCJPlOxiiz4EXYhfeeHnQ4IPWBAA8fwA=="
                    },
                    "hashAlgorithm": "SHA256",
                    "signatureAlgorithm": "ECDSA",
                    "certificateChain": {
                        "certificateName": "/path/to/fw_signing_public_key.pem"
                    }
                }
            }
        }
    ],
    "roleArn": "arn:aws:iam::xxxxx:role/xxxxxxxxxx80D6CF5A-1PZCGRLJ44XJE"
}
```
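The double-encoding the poster describes can be reproduced locally. This sketch only demonstrates the byte transformation stated in the question; it does not settle whether the API actually requires the second pass:

```python
import base64

# DER signature bytes from the question, written as hex.
# bytes.fromhex() skips ASCII whitespace, so the grouping is cosmetic.
sig_hex = """
3044 0220 0dee 1f09 d34d eb68 6ab1 6a7d 575c ec9a 929e 8628 6e35 5de9 6922 0bdb
c2c5 0a44 0220 24c5 1b2b 1e56 a10e 6482 24f9 4ec6 28b3 e045 d885 f79e 1e74 3820
f581 000f 1fc0
"""
sig_bytes = bytes.fromhex(sig_hex)

once = base64.b64encode(sig_bytes)   # what the docs describe
twice = base64.b64encode(once)       # what the poster found to work

print(once.decode())
print(twice.decode())
```

Running this reproduces exactly the two strings quoted in the question, which at least confirms the poster's arithmetic: the value that works is the base64 of the base64, not of the raw DER bytes.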
1 answer · 0 votes · 61 views · asked 2 months ago

User Data script not downloading file(s) from S3

I have been trying for days to get a user data script for a Windows instance to copy files from S3. At first I was trying to use the `aws s3 sync` command to copy a large number of files, but since that wouldn't work, I zipped the files and am now trying to copy just that one zipped file. I am attempting the copy with both a script command and a PowerShell command. Since other commands work in both blocks, I know the user data script is formatted correctly and executes at launch, but this one file copy command using the AWS CLI is simply not working.

It is also worth noting that I download and install the AWS CLI first, and that the download/install works from either the script block or the PowerShell block. I am also associating an IAM role with sufficient permissions at launch time via an instance profile. I know both work (CLI and role) because I can manually execute the copy command as soon as I log on to the instance; it just isn't performing the copy from the user data script.

Here's the (cleansed) script block I'm running:

```
<script>
c:\windows\system32\msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi /qn
aws s3 cp s3://<bucket>/<path/key>/Installers.zip C:\Temp\Installers.zip
</script>
```

Here's the (cleansed) PowerShell block I'm running:

```
<powershell>
aws s3 cp s3://<bucket>/<path/key>/Installers.zip C:\Temp\Installers.zip
</powershell>
```

Obviously it's the exact same CLI command, structured the same way for both blocks, but I cannot get it to execute the file copy from either one, even though the installation of the AWS CLI works fine from both. After pulling my hair out for days, searching the Internet and AWS documentation without finding a solution, I'm posting here. Thank you in advance for any assistance.

More info: in the err.tmp file for the batch (`<script>`) block I can see: "'#aws' is not recognized as an internal or external command, operable program or batch file." And in the `<powershell>` err.tmp file: "aws : The term 'aws' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again."

So it appears that the AWS CLI is not installed by the time I attempt to copy files from S3, even though the CLI install is the first line of the batch portion of the user data script. I even added `timeout 90` right after the install line, to pause for 1.5 minutes and give the CLI plenty of time to install before the first copy attempt (in the batch block); the second copy attempt is on the 27th line of the script, in the PowerShell portion, long after several other commands complete successfully. Again, thank you in advance for any assistance with this issue.
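A hedged aside on the symptom: both error messages say the `aws` executable cannot be resolved by name, which is a PATH-lookup failure rather than an S3 or IAM one; a shell session started before an installer updates PATH will not see the new entry. The lookup behavior itself can be illustrated locally in Python (the helper name is illustrative, not part of the poster's setup):

```python
import shutil
from typing import Optional

def cli_available(name: str) -> Optional[str]:
    """Return the full path of an executable if the current PATH can resolve it, else None."""
    return shutil.which(name)

# A command name that certainly does not exist resolves to None,
# which corresponds to the "not recognized" errors in the logs.
print(cli_available("definitely-not-a-real-cli-xyz"))  # None
```

A common workaround in this situation is to invoke the CLI by its full installed path instead of the bare name, so the lookup does not depend on the PATH the running session inherited; that is offered as a general pattern, not a confirmed fix for this instance.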
1 answer · 0 votes · 50 views · asked 2 months ago

How do we look up more verbose information from RequestIds thrown in AWS CloudFormation events whose status reports CREATE_FAILED?

Without setting up a CloudTrail trail and executing a CFN template that rolls back, I have started looking at debugging options. I found [this](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-api-logging-cloudtrail.html), which says we can see the most recent events without a created trail. I tried to find RequestId documentation from [here](https://docs.aws.amazon.com/search/doc-search.html?searchPath=documentation-guide&searchQuery=resourceid&this_doc_product=AWS+CloudFormation&facet_doc_product=AWS+CloudFormation), entering 'requested' in the search bar, which returned many items unrelated to my specific case (thanks for the attempt, Kendra :). I have also looked at the CLI docs [here](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-listing-event-history.html).

I guess I first need to know what a RequestId is capable of helping me trace. I am doing a simple debug of an instance that I already know has the wrong AMI ID for its region, but I am trying to re-familiarize myself with fixing CFN templates after being out of the loop for a few years. I'd like to know how someone else handles a CREATE_FAILED and ways to use the status reason in a verbose way. Each reason appears to be ';' separated, so even just a pointer in that direction might help weed through the mountain of information here. Thanks ahead of time- Rudy
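On the ';'-separated status reasons the poster mentions: a small local sketch of splitting such a string into individual reasons for easier reading. The sample string below is made up purely for illustration, not an actual CloudFormation message:

```python
def split_status_reason(reason: str) -> list:
    """Split a ';'-separated status reason into trimmed, non-empty parts."""
    return [part.strip() for part in reason.split(";") if part.strip()]

# Hypothetical example string, only to show the shape of the output.
sample = "first reason; second reason; third reason"
print(split_status_reason(sample))  # ['first reason', 'second reason', 'third reason']
```

The same splitting could be applied to the `ResourceStatusReason` field of each event returned by `aws cloudformation describe-stack-events`, which is where CREATE_FAILED reasons surface.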
0 answers · 0 votes · 27 views · asked 2 months ago

Configure AWS SES as relay host in aaPanel

I have a Lightsail instance with an Ubuntu 20 installation. I set up aaPanel as the main control panel. Because Lightsail instances have port 25 locked for SMTP service, I configured AWS SES as the mail provider. aaPanel has a tool to configure a relayhost using the postfix service.

First, I verified my domain with SES via TXT validation; my domain is properly configured in SES, and I made a test send, which was successful. By the way, my Lightsail instance and SES domain are in the same region (us-west-1).

Then, in my console, I entered these commands:

```
sudo postconf relayhost=in-v3.mailjet.com:2587
sudo postconf smtp_tls_security_level=encrypt
sudo postconf smtp_sasl_auth_enable=yes
sudo postconf smtp_sasl_password_maps=hash:/etc/postfix/sasl_password
sudo postconf smtp_sasl_securty_options=noanonynous
sudo vi /etc/postfix/sasl_password
```

In the vi editor, I entered this line:

```
email-smtp.us-east-1.amazonaws.com:2587 [api]:[secret]
```

Then:

```
sudo postmap /etc/postfix/sasl_password
sudo chown root:root /etc/postfix/sasl_password*
sudo chmod 600 /etc/postfix/sasl_password*
sudo systemctl restart postfix
```

In the Lightsail network section, I opened ports 2587, 25, and 465. With this, I expected to be able to send email via the relayhost on aaPanel.
When I ran a test with the mailer tool on aaPanel, this is the log:

```
Jul 22 19:25:48 softnia postfix/qmgr[13083]: E2C8F81CD7: from=<>, size=3462, nrcpt=1 (queue active)
Jul 22 19:25:48 softnia postfix/trivial-rewrite[90585]: warning: /etc/postfix/main.cf, line 75: overriding earlier entry: relayhost=email-smtp.us-east-1.amazonaws.com:2587
Jul 22 19:25:48 softnia postfix/trivial-rewrite[90585]: warning: /etc/postfix/main.cf, line 77: overriding earlier entry: smtp_sasl_password_maps=hash:/etc/postfix/sasl_password
Jul 22 19:25:48 softnia postfix/lmtp[90586]: warning: /etc/postfix/main.cf, line 75: overriding earlier entry: relayhost=email-smtp.us-east-1.amazonaws.com:2587
Jul 22 19:25:48 softnia postfix/lmtp[90586]: warning: /etc/postfix/main.cf, line 77: overriding earlier entry: smtp_sasl_password_maps=hash:/etc/postfix/sasl_password
Jul 22 19:25:48 softnia postfix/bounce[90587]: warning: /etc/postfix/main.cf, line 75: overriding earlier entry: relayhost=email-smtp.us-east-1.amazonaws.com:2587
Jul 22 19:25:48 softnia postfix/bounce[90587]: warning: /etc/postfix/main.cf, line 77: overriding earlier entry: smtp_sasl_password_maps=hash:/etc/postfix/sasl_password
Jul 22 19:25:48 softnia postfix/lmtp[90586]: E2C8F81CD7: to=<root@softnia.com>, relay=none, delay=38331, delays=38331/0.01/0/0, dsn=4.4.1, status=deferred (connect to softnia.com[private/dovecot-lmtp]: No such file or directory)
```

This is the postfix configuration file:

```
# See /usr/share/postfix/main.cf.dist for a commented, more complete version

# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 2 on
# fresh installs.
compatibility_level = 2

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = softnia.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination =
relayhost = email-smtp.us-east-1.amazonaws.com:2587
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_command = procmail -a "$EXTENSION"
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all
virtual_mailbox_domains = sqlite:/etc/postfix/sqlite_virtual_domains_maps.cf
virtual_alias_maps = sqlite:/etc/postfix/sqlite_virtual_alias_maps.cf, sqlite:/etc/postfix/sqlite_virtual_alias_domain_maps.cf, sqlite:/etc/postfix/sqlite_virtual_alias_domain_catchall_maps.cf
virtual_mailbox_maps = sqlite:/etc/postfix/sqlite_virtual_mailbox_maps.cf, sqlite:/etc/postfix/sqlite_virtual_alias_domain_mailbox_maps.cf
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
smtpd_use_tls = yes
smtp_tls_security_level = may
smtpd_tls_security_level = may
virtual_transport = lmtp:unix:private/dovecot-lmtp
smtpd_milters = inet:127.0.0.1:11332
non_smtpd_milters = inet:127.0.0.1:11332
milter_mail_macros = i {mail_addr} {client_addr} {client_name} {auth_authen}
milter_protocol = 6
milter_default_action = accept
message_size_limit = 102400000
recipient_bcc_maps = hash:/etc/postfix/recipient_bcc
sender_bcc_maps = hash:/etc/postfix/sender_bcc
recipient_bcc_maps = hash:/etc/postfix/recipient_bcc
sender_bcc_maps = hash:/etc/postfix/sender_bcc
recipient_bcc_maps = hash:/etc/postfix/recipient_bcc
sender_bcc_maps = hash:/etc/postfix/sender_bcc
recipient_bcc_maps = hash:/etc/postfix/recipient_bcc
sender_bcc_maps = hash:/etc/postfix/sender_bcc
smtpd_tls_chain_files = /www/server/panel/plugin/mail_sys/cert/softnia.com/privkey.pem,/www/server/panel/plugin/mail_sys/cert/softnia.com/fullchain.pem
tls_server_sni_maps = hash:/etc/postfix/vmail_ssl.map
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_password
smtp_sasl_securty_options = noanonynous
```

As you can see, my base domain is softnia.com, which is appropriately configured in Lightsail and SES.
1 answer · 0 votes · 28 views · asked 2 months ago