Questions tagged with AWS Command Line Interface
I am following the IoT Greengrass tutorial at https://docs.aws.amazon.com/greengrass/v2/developerguide/defer-component-updates-tutorial.html.
I am stuck on the "gdk component publish" step: I am unable to publish the HelloWorld component to the cloud service from my development computer.
Issue:
```
[2023-03-01 20:38:54] INFO - Getting project configuration from gdk-config.json
[2023-03-01 20:38:54] INFO - Found component recipe file 'recipe.json' in the project directory.
[2023-03-01 20:38:54] INFO - Found credentials in shared credentials file: ~/.aws/credentials
[2023-03-01 20:39:10] ERROR - Failed to calculate the version of component 'com.example.BatteryAwareHelloWorld' based on the configuration.
[2023-03-01 20:39:10] ERROR - Failed to publish new version of the component 'com.example.BatteryAwareHelloWorld'
=============================== ERROR ===============================
Could not publish the component due to the following error.
Failed to publish new version of component with the given configuration.
Failed to calculate the next version of the component during publish.
Error while getting the component versions of 'com.example.BatteryAwareHelloWorld' in '<<region>>' from the account '<<aws account>>' during publish.
Connection was closed before we received a valid response from endpoint URL: "https://greengrass.us-east-1.amazonaws.com/greengrass/v2/components/arn%3Aaws%3Agreengrass%3A<<region>>%3A<<aws account>>%3Acomponents%3Acom.example.BatteryAwareHelloWorld/versions".
```
The strange behavior I observed is that the `gdk component publish` command does occasionally get past this step and create the artifacts in the S3 bucket; it succeeds only about once in ten attempts, and the rest of the time I get the error above.
All the suggestions on the internet say to check the VPN connection, firewall settings, network connectivity, etc. My doubt is: how does it succeed once in a while with the same settings on my side?
Can anyone suggest a fix?
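Setting the root cause aside, one way to confirm the failure really is transient is to wrap the publish in a retry loop. A minimal sketch, assuming `gdk` is on PATH; the command, retry count, and helper names are illustrative, not part of GDK itself:

```python
import subprocess
import time

def backoff_delays(retries, base=2.0, cap=60.0):
    """Exponential backoff delays in seconds: 2, 4, 8, ... capped at `cap`."""
    return [min(base ** (i + 1), cap) for i in range(retries)]

def publish_with_retries(cmd=("gdk", "component", "publish"), retries=5):
    """Re-run the publish command until it exits 0 or retries are exhausted."""
    for delay in backoff_delays(retries):
        if subprocess.run(cmd).returncode == 0:
            return True
        time.sleep(delay)  # back off before the next attempt
    return False
```

If `publish_with_retries()` reliably succeeds within a few attempts, that points at flaky connectivity to the Greengrass endpoint rather than a configuration problem.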
I am trying to figure out where to store the software packages my pipeline produces when it finishes building (all of them are .tgz archives). Is a simple S3 bucket the solution, or is there something more tailored that allows easy integration with my pipeline?
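If a plain S3 bucket turns out to be sufficient, the pipeline step can stay very small. A sketch that derives a versioned key and uploads with the AWS CLI; the bucket name, package name, prefix, and helper names below are placeholders of my own:

```python
import subprocess

def artifact_key(name, version, prefix="releases"):
    """Deterministic, versioned S3 key, e.g. releases/myapp/1.4.2/myapp-1.4.2.tgz."""
    return f"{prefix}/{name}/{version}/{name}-{version}.tgz"

def upload_artifact(path, bucket, name, version):
    """Copy the built archive to S3 with the AWS CLI (assumes it is configured)."""
    key = artifact_key(name, version)
    subprocess.run(["aws", "s3", "cp", path, f"s3://{bucket}/{key}"], check=True)
    return key
```

A call like `upload_artifact("build/myapp-1.4.2.tgz", "my-artifact-bucket", "myapp", "1.4.2")` keeps every version addressable by a predictable key, which is usually all a downstream pipeline stage needs.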
Hope everyone is doing well!
Here is the context of the issue I'm facing: I'm working at a company that supports a really old Airflow version. Here are the details of the version and some components:
```
airflow=1.10.1=py36_0
python=3.6.2=0
botocore=1.12.226=py_0
awscli=1.16.236=py36_0
boto3=1.9.199=py_0
boto=2.49.0=py36_0
```
For the past couple of days, we have been facing an issue in a DAG that is supposed to add a step to an EMR cluster; we get the following error:
```
Traceback (most recent call last):
  File "/home/conda/.conda/envs/airflow36/lib/python3.6/site-packages/airflow/models.py", line 1659, in _run_raw_task
    result = task_copy.execute(context=context)
  File "/src/src/dags/data_conversion/operators/emr.py", line 69, in execute
    response = emr.add_job_flow_steps(JobFlowId=job_flow_id, Steps=steps)
  File "/home/conda/.conda/envs/airflow36/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/conda/.conda/envs/airflow36/lib/python3.6/site-packages/botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the AddJobFlowSteps operation: A job flow that is shutting down, terminated, or finished may not be modified.
```
And here is the part of the code affected:
```
job_flow_id = context['task_instance'].xcom_pull(task_ids=self.cluster_creator_operator_name)[0]
emr = EmrHook(aws_conn_id=self.aws_conn_id).get_conn()
logging.info('Adding steps to %s', job_flow_id)
response = emr.add_job_flow_steps(JobFlowId=job_flow_id, Steps=steps)
if not response['ResponseMetadata']['HTTPStatusCode'] == 200:
    raise AirflowException('Adding steps failed: %s' % response)
else:
    logging.info('Steps %s added to JobFlow', response['StepIds'])
    return response['StepIds']
```
Based on my research, I found a Stack Overflow post (https://stackoverflow.com/questions/64634755/mrjob-im-having-a-client-error-while-using-emr) which mentions that this botocore version is deprecated, and the GitHub repo notes that the Python version we are using is no longer supported.
Would this be the correct analysis of the issue? I also found this link (https://stackoverflow.com/questions/65595398/mrjob-in-emr-is-running-only-1-mrstep-out-of-3-mrsteps-and-cluster-is-shutting-d), which suggests keeping the EMR cluster alive, but I'm not sure that will help, because I suspect the Airflow classes invoke the botocore package and I would be unable to override that.
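For what it's worth, the ValidationException itself says the job flow is "shutting down, terminated, or finished", which points at cluster lifecycle (for example, auto-termination after the last step completes) rather than the library versions. A defensive check before calling AddJobFlowSteps might look like the sketch below; it shells out to the AWS CLI rather than the old botocore, and the function names are mine:

```python
import json
import subprocess

# States in which EMR still accepts new steps; a terminating or
# terminated cluster raises exactly the ValidationException above.
MODIFIABLE_STATES = {"STARTING", "BOOTSTRAPPING", "RUNNING", "WAITING"}

def can_add_steps(state):
    """True if a cluster in this state can still receive steps."""
    return state in MODIFIABLE_STATES

def cluster_state(cluster_id):
    """Look up the cluster state via the AWS CLI (assumes it is configured)."""
    out = subprocess.check_output(
        ["aws", "emr", "describe-cluster", "--cluster-id", cluster_id]
    )
    return json.loads(out)["Cluster"]["Status"]["State"]
```

Logging `cluster_state(job_flow_id)` just before the `add_job_flow_steps` call would tell you whether the cluster had already left a modifiable state when the step task ran.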
Thanks a lot.
Hi,
I am trying to invoke outbound campaigns using the StartOutboundVoiceContact CLI/API without Pinpoint.
```
aws connect start-outbound-voice-contact --destination-phone-number +1xxx --contact-flow-id xxx --instance-id xx --source-phone-number +1xxx --campaign-id xxxx --traffic-type CAMPAIGN --answer-machine-detection-config EnableAnswerMachineDetection=true,AwaitAnswerMachinePrompt=true
```
But the calls do not follow the campaign logic. I have configured the campaign to check that 100% of agents are available in the queue before dialing out, for both predictive and progressive modes.
As soon as I run the CLI command, the calls dial out to the destination number even if there are 0 agents logged into the queue.
It looks like the CLI/API does not follow the campaign but simply puts the calls into the contact flow as normal calls, even though I have specified the traffic type as CAMPAIGN.
Any idea what is wrong?
Hello everyone,
I am trying to figure out a way to automate access key rotation for IAM users. We have several users with their own IAM programmatic access keys, and I want to force each user to rotate their access key after 90 days. It would be nice to also have some sort of SNS topic that informs the user.
I attempted to use the ASA Key Rotation solution that AWS provides, but I kept running into CloudFormation template errors, including a malformed document and missing resources in the .py files.
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-rotate-iam-user-access-keys-at-scale-with-aws-organizations-and-aws-secrets-manager.html
Any guidance on this would be awesome.
Thank you!
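In the meantime, the age check itself is simple to script. A sketch of the 90-day test; the threshold constant and helper names are my own, and the key listing is left to `aws iam list-access-keys`:

```python
from datetime import datetime, timezone

MAX_AGE_DAYS = 90  # rotation threshold; set to whatever your policy requires

def key_age_days(create_date, now=None):
    """Age of an access key in whole days, from its CreateDate."""
    now = now or datetime.now(timezone.utc)
    return (now - create_date).days

def needs_rotation(create_date, now=None):
    """True once the key is MAX_AGE_DAYS old or older."""
    return key_age_days(create_date, now) >= MAX_AGE_DAYS
```

Feeding this the `CreateDate` values from `aws iam list-access-keys --user-name <user>` and publishing to an SNS topic when it returns True would cover the notification half; actually forcing rotation still needs the deactivation step from the ASA pattern.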
I've followed the [docs](https://docs.aws.amazon.com/ses/latest/dg/send-email-authentication-dkim-bring-your-own.html). I've tried via the console and via the API.
Both times verification has failed.
I can get the TXT record if I run `dig -t txt selector._domainkey.domain.com`. The record is hosted in GoDaddy.
How could I debug the issue further?
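One thing worth ruling out when comparing the published record against the key you gave SES: dig prints long TXT values as several quoted chunks that are a single value in DNS, so a raw string comparison can fail even when the record is correct. A small normalizer for the dig answer (the helper name is mine):

```python
def normalize_txt(answer):
    """Join dig-style quoted TXT chunks into one comparable string.

    dig prints long TXT records as several quoted segments, e.g.
    "v=DKIM1; k=rsa; p=MIIB..." "...AB", which are one value in DNS.
    """
    return "".join(p for p in (s.strip() for s in answer.split('"')) if p)
```

Comparing `normalize_txt(dig_answer)` against the expected `v=DKIM1; ... p=...` string removes quoting and chunk boundaries as a variable while you debug.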
Greetings,
I am using the AWS CLI with S3 to return the dates of files in buckets; the aim is to find old files to archive or delete.
The syntax I am using is standard CLI syntax. The files have been updated, but it returns the wrong date, **2022-11-17**, not **2023-02-01**:
```
aws s3 ls s3://bucket/Europe/ --recursive --human-readable --summarize --query "Contents[?contains(LastModified, '2022-11')].{Key: Key}" --output text | xargs -n1 -t -I 'KEY'
```
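As far as I can tell, `--query` is a JMESPath filter over JSON API responses, so it pairs naturally with the low-level `aws s3api list-objects-v2` rather than the high-level `aws s3 ls` listing. A sketch of the same filter done explicitly; the bucket and prefix are placeholders and the function names are mine:

```python
import json
import subprocess

def keys_modified_in(objects, year_month):
    """Return keys of list-objects-v2 'Contents' entries whose
    LastModified timestamp starts with the given 'YYYY-MM' prefix."""
    return [o["Key"] for o in objects
            if o["LastModified"].startswith(year_month)]

def list_objects(bucket, prefix):
    """Fetch one page of objects via the AWS CLI (assumes it is configured;
    pagination for >1000 keys is omitted in this sketch)."""
    out = subprocess.check_output(
        ["aws", "s3api", "list-objects-v2",
         "--bucket", bucket, "--prefix", prefix]
    )
    return json.loads(out).get("Contents", [])
```

`keys_modified_in(list_objects("bucket", "Europe/"), "2022-11")` would then return exactly the November 2022 keys, making it easy to see whether the dates themselves or the filter were wrong.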
It seems there is currently no way to create applications in IAM Identity Center programmatically, but is there any way that the metadata for an already created application can be fetched programmatically? I have checked the CLI, and neither `aws sso` nor `aws sso-admin` has an option for applications; the same goes for boto3.
Is this just not possible yet?
Based on https://docs.aws.amazon.com/eks/latest/userguide/deploy-collector-advanced-configuration.html
```
demo % aws eks create-addon \
--cluster-name observability \
--region us-west-2 \
--addon-name adot \
--addon-version v0.66.0-eksbuild.1 \
--configuration-values configuration-values.json
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: --configuration-values, configuration-values.json
```
The CLI is not recognizing the `--configuration-values` option.
Hi, I ran out of storage on my website, so I'm trying to upload .mp3 files to Amazon S3 and use the links on my public website. The links work, but only for a short amount of time. All of my permissions are set to public. Also, when I click the link provided, I get this error message:
```
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>KQK7ST3WTPAG6P9N</RequestId>
<HostId>v7H6Np+rtuNvDcPKIpg8r3mlWbN733GrdUZHXtXZpErRtRi3pG6R2cenN6l62h6Hb0b/aur4Ucw=</HostId>
</Error>
```
I'm wondering how to create a permanent link to the media?
Thanks.
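If the links were copied from the S3 console's "Open" button, they are pre-signed URLs and expire by design. A permanent link is the plain object URL, which only resolves once the object is genuinely public (Block Public Access disabled plus a public-read bucket policy). A sketch of the virtual-hosted-style URL shape; the bucket, key, and region below are placeholders:

```python
def public_object_url(bucket, key, region="us-east-1"):
    """Virtual-hosted-style S3 object URL. Only works if the object is
    genuinely public (Block Public Access off + public-read policy)."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
```

If `public_object_url("mybucket", "song.mp3")` still returns AccessDenied in a browser, the bucket's Block Public Access settings or policy are the place to look, not the link itself.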
If I execute the following AWS CLI command (through Ansible):
```
aws efs create-mount-target --file-system-id fs-09f8e9569cc3d5873 --subnet-id subnet-03249b119107a9ddb --security-groups sg093669eb59254ece07
```
the following error occurs:
```
An error occurred (IncorrectFileSystemLifeCycleState) when calling the CreateMountTarget operation: None
```
Why is this error occurring?
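IncorrectFileSystemLifeCycleState usually means the file system is not yet in the "available" state when the mount-target call runs, which is common in automation that calls create-mount-target right after create-file-system. A polling sketch using the AWS CLI; the attempt counts are illustrative and the function names are mine:

```python
import json
import subprocess
import time

def is_available(state):
    """EFS LifeCycleState check; mount targets require 'available'."""
    return state == "available"

def wait_for_fs(fs_id, attempts=30, delay=10):
    """Poll describe-file-systems until the file system is available."""
    for _ in range(attempts):
        out = subprocess.check_output(
            ["aws", "efs", "describe-file-systems", "--file-system-id", fs_id]
        )
        if is_available(json.loads(out)["FileSystems"][0]["LifeCycleState"]):
            return True
        time.sleep(delay)
    return False
```

Calling `wait_for_fs("fs-09f8e9569cc3d5873")` before the create-mount-target task (or an equivalent `until`/`retries` loop in Ansible) should remove the race.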
I followed this [guide](https://docs.amplify.aws/cli-legacy/graphql-transformer/searchable/#configure-environment-opensearch-instance-type) (adding the directive to schema.graphql and pushing to Amplify), but it did not add the OpenSearch resource. Any tips on how to do this?