Questions tagged with AWS Command Line Interface
How can I increase the EC2 user home directory space (/home/cloudshell-user) to 5 GB? For example, what AWS CLI command can I use, or can I do it via the EC2 console web page?
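A minimal first check, assuming this refers to the AWS CloudShell persistent home directory (the path below is the standard CloudShell default):
```
# Show current usage of the persistent CloudShell home directory:
df -h /home/cloudshell-user
```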
I have installed the latest version of the AWS CLI. When I try to open the installed CLI, a window pops up for a second and disappears instantly. I don't know what the possible reason for that is.
Here is the output when I enter the command: aws --version
aws-cli/2.11.8 Python/3.11.2 Windows/10 exe/AMD64 prompt/off
N.B.: I am using Windows 10 Pro Education (64-bit).
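A hedged guess at the symptom, in case it applies: aws.exe is a console program rather than a GUI application, so a window that flashes open and closes is what double-clicking the executable looks like. Running it from an already-open terminal keeps the output visible:
```
# Open PowerShell or Command Prompt first, then run the CLI from there:
aws --version
```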
I tried to install AWS CLI v2 on a Raspberry Pi 4 Model B+ with Raspbian GNU/Linux 10, following the steps below.
But I ran into a `/usr/local/bin/aws: No such file or directory` error when checking the AWS CLI version with the `aws --version` command.
Is it possible to install AWS CLI v2 on a Raspberry Pi 4 Model B+ with Raspbian OS for ARM64?
Install steps
```
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin
```
Error message
```
./aws/install: 78: ./aws/install: /home/pi/aws/dist/aws: not found
You can now run: /usr/local/bin/aws --version
```
```
$ aws --version
/usr/local/bin/aws: No such file or directory
```
Supplemental information follows for further analysis.
```
$ uname -a
Linux rapsberrypi4 6.1.20-v8+ #1638 SMP PREEMPT Tue Mar 21 17:16:29 GMT 2023 aarch64 GNU/Linux
$ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
```
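One hedged diagnostic, based on the fact that Raspbian 10 (buster) ships a 32-bit (armhf) userland even when the kernel reports aarch64: a "not found" error for a binary that exists on disk usually means the dynamic loader for that binary's architecture is missing. These commands show whether the userland actually matches the aarch64 installer:
```
# "armhf" here means a 32-bit userland, which cannot execute the aarch64 build:
dpkg --print-architecture
# Shows which architecture the extracted installer binary was compiled for:
file ./aws/dist/aws
```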
Can someone please help me fetch the AWS RDS snapshots that are older than 1 month using an AWS CLI command?
Regards,
kalyan varma
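A minimal sketch of one way to do this, assuming a hard-coded cutoff date for illustration (`describe-db-snapshots` accepts a JMESPath `--query` filter, and ISO timestamps compare correctly as strings):
```
# List snapshots created on or before the cutoff date (replace the date
# with "one month ago" for your run):
aws rds describe-db-snapshots \
  --query "DBSnapshots[?SnapshotCreateTime<='2023-02-21'].{ID:DBSnapshotIdentifier,Created:SnapshotCreateTime}" \
  --output table
```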
Below is a sample JavaScript SDK v3 Athena query that uses a prepared statement and parameters that are passed to the query.
```
const { AthenaClient } = require("@aws-sdk/client-athena");
const REGION = 'us-east-1';
const athenaClient = new AthenaClient({region: REGION});
module.exports = {athenaClient};
```
```
// Commands and enums used below come from the same Athena client package;
// the client module path is assumed for illustration.
const { StartQueryExecutionCommand, GetQueryExecutionCommand,
        GetQueryResultsCommand, QueryExecutionState } = require("@aws-sdk/client-athena");
const { setTimeout } = require("timers/promises"); // promise-based setTimeout used for polling
const { athenaClient } = require("./athenaClient"); // the module exported above (path assumed)

const tableName = 'employees';
const sqlString = "SELECT firstname, lastname, state FROM " + tableName + " WHERE " +
  "zipcode = ? AND " +
  "companyname = ?";
const queryExecutionInput = {
  QueryString: sqlString,
  QueryExecutionContext: {
    Database: 'sample-employee',
    Catalog: 'awscatalogname'
  },
  ResultConfiguration: {
    OutputLocation: 's3://athena-query-bucket'
  },
  WorkGroup: 'primary',
  ExecutionParameters: ["12345", "Test 1"]
};

// Excerpt from an async class method; `this.config` and `this.waitForQueryExecution`
// are defined on the class. StartQueryExecution returns { QueryExecutionId }.
const { QueryExecutionId } = await athenaClient.send(new StartQueryExecutionCommand(queryExecutionInput));
const command = new GetQueryExecutionCommand({ QueryExecutionId });
const response = await athenaClient.send(command);
const state = response.QueryExecution?.Status?.State;
if (state === QueryExecutionState.QUEUED || state === QueryExecutionState.RUNNING) {
  await setTimeout(this.config.pollInterval); // wait for pollInterval before polling again
  return this.waitForQueryExecution(QueryExecutionId);
} else if (state === QueryExecutionState.SUCCEEDED) {
  const resultParams = { QueryExecutionId: response.QueryExecution.QueryExecutionId, MaxResults: this.config.maxResults };
  const getQueryResultsCommand = new GetQueryResultsCommand(resultParams);
  const resp = await athenaClient.send(getQueryResultsCommand);
  console.log("GetQueryResultsCommand : ", resp.ResultSet.ResultSetMetadata.ColumnInfo);
  console.log("GetQueryResultsCommand : ", resp.ResultSet.Rows);
} else if (state === QueryExecutionState.FAILED) {
  throw new Error(`Query failed: ${response.QueryExecution?.Status?.StateChangeReason}`);
} else if (state === QueryExecutionState.CANCELLED) {
  throw new Error("Query was cancelled");
}
```
This table has about 50 records that match this query. When the query is run this is what is returned for all 50 records.
```
{
  "ResultSetMetadata": {
    "Rows": [
      {
        "Data": [
          { "VarCharValue": "firstname" },
          { "VarCharValue": "lastname" },
          { "VarCharValue": "state" }
        ]
      }
    ]
  }
}
```
Only the column names are listed but no data from these columns.
I see the exact same issue when I try it using the CLI as well:
```
aws athena start-query-execution \
  --query-string "SELECT firstname, lastname, state FROM employees WHERE zipcode = CAST(? as varchar) AND companyname = CAST(? as varchar)" \
  --query-execution-context "Database"="sample-employee" \
  --result-configuration "OutputLocation"="s3://athena-query-bucket/" \
  --execution-parameters "12345" "Test 1"

aws athena get-query-execution --query-execution-id "<query-execution-id>"
aws athena get-query-results --query-execution-id "<query-execution-id>"
```
FYI: ColumnInfo in the ResultSetMetadata object has been removed to keep the JSON simple.
```
{
  "ResultSetMetadata": {
    "Rows": [
      {
        "Data": [
          { "VarCharValue": "firstname" },
          { "VarCharValue": "lastname" },
          { "VarCharValue": "state" }
        ]
      }
    ]
  }
}
```
So, I'm not exactly sure what I might be doing wrong. Any help/pointers on this would be great. We are currently running Athena engine version 2.
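One hedged thing worth ruling out, since it matches the symptom: for SELECT queries, GetQueryResults returns the column headers as the very first row, and only `MaxResults` rows per call, so a small `MaxResults` (or reading just the first page) can leave nothing but the header. Paging past it from the CLI looks like:
```
# First page; for SELECT queries the first Row is the header:
aws athena get-query-results --query-execution-id "<query-execution-id>" --max-results 100
# Follow NextToken from the previous response for the remaining rows:
aws athena get-query-results --query-execution-id "<query-execution-id>" --next-token "<NextToken>"
```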
I am trying to deploy a landing zone from my CLI using the LZA CloudFormation template and the AWS GitHub repo: https://github.com/awslabs/landing-zone-accelerator-on-aws#aws-acceleratorconfig, and I am stuck at a parameter field that I do not know what it refers to: `AcceleratorQualifier,ParameterValue=<Accelerator_Qualifier>` in the command below. I would appreciate it if someone could explain what value I need to input there. Many thanks.
```
aws cloudformation create-stack --stack-name AWSAccelerator-InstallerStack --template-body file://cdk.out/AWSAccelerator-InstallerStack.template.json \
--parameters ParameterKey=RepositoryName,ParameterValue=<Repository_Name> \
ParameterKey=RepositoryBranchName,ParameterValue=<Branch_Name> \
ParameterKey=AcceleratorQualifier,ParameterValue=<Accelerator_Qualifier> \
ParameterKey=ManagementAccountId,ParameterValue=<Management_Id> \
ParameterKey=ManagementAccountEmail,ParameterValue=<Management_Email> \
ParameterKey=ManagementAccountRoleName,ParameterValue= \
ParameterKey=LogArchiveAccountEmail,ParameterValue=<LogArchive_Email> \
ParameterKey=AuditAccountEmail,ParameterValue=<Audit_Email> \
ParameterKey=EnableApprovalStage,ParameterValue=Yes \
ParameterKey=ApprovalStageNotifyEmailList,ParameterValue=comma-delimited-notify-emails \
ParameterKey=ControlTowerEnabled,ParameterValue=Yes \
--capabilities CAPABILITY_IAM
```
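A hedged way to see how the template itself describes that parameter, assuming `jq` is installed and the synthesized template sits at the path used in the command:
```
# Print the AcceleratorQualifier parameter definition (including its
# Description, if the template provides one):
jq '.Parameters.AcceleratorQualifier' cdk.out/AWSAccelerator-InstallerStack.template.json
```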
I am trying to get an SSL certificate with Let's Encrypt and nginx. First, I added EPEL using the commands:
```
$ wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
$ sudo rpm -ihv --nodeps ./epel-release-latest-8.noarch.rpm
```
and it was added with no problem. Then I ran:
```
$ sudo yum install python3-certbot-nginx
```
and got the error message:
```
Problem: package certbot-1.22.0-1.el8.noarch requires python3-certbot = 1.22.0-1.el8, but none of the providers can be installed
- conflicting requests
- nothing provides python3.6dist(setuptools) >= 39.0.1 needed by python3-certbot-1.22.0-1.el8.noarch
- nothing provides python3.6dist(cryptography) >= 2.5.0 needed by python3-certbot-1.22.0-1.el8.noarch
- nothing provides python3.6dist(configobj) >= 5.0.6 needed by python3-certbot-1.22.0-1.el8.noarch
- nothing provides python3.6dist(distro) >= 1.0.1 needed by python3-certbot-1.22.0-1.el8.noarch
- nothing provides /usr/bin/python3.6 needed by python3-certbot-1.22.0-1.el8.noarch
- nothing provides python3.6dist(pytz) needed by python3-certbot-1.22.0-1.el8.noarch
- nothing provides python(abi) = 3.6 needed by python3-certbot-1.22.0-1.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages)
```
I also tried
```
sudo dnf install python3-certbot-nginx
```
I learned I may need CodeReady Builder but haven't been able to install it. How can I get it? If that is not the issue, what am I doing wrong and how can I resolve it?
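For context, the EPEL 8 documentation does list CodeReady Builder (or its CentOS-family equivalent, PowerTools/CRB) as a prerequisite. A hedged sketch of enabling it, with the exact repo name depending on the platform:
```
# On RHEL 8, via subscription-manager:
sudo subscription-manager repos --enable codeready-builder-for-rhel-8-$(arch)-rpms
# On CentOS/Alma/Rocky 8, the equivalent repo is enabled with:
sudo dnf config-manager --set-enabled powertools
```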
I launch an instance with
- The latest AL2023 image
- SG with all open outbound traffic
- Using IAM role with policy that has 2 actions: "s3:ListAllMyBuckets" and "cognito-idp:ListUserPools"
- Default VPC created in my account
When I connect to the instance, running `aws s3api list-buckets --region eu-central-1` works fine. However, when I run `aws cognito-idp list-user-pools --max-results 1 --region eu-central-1`, it never returns.
Note: I have also tried `sqs list-queues` and `sns list-topics` (adding the permissions to the policy), and they all work fine; it's just Cognito.
Running with `--debug` I see it gets stuck at
```
MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): cognito-idp.eu-central-1.amazonaws.com:443
```
But if I grab all the headers that the debug option exposes, build the corresponding curl command (below), and run it within the instance, it does work.
```
curl https://cognito-idp.eu-central-1.amazonaws.com -X POST -d '{"MaxResults": 1}' \
-H 'X-Amz-Target: x' -H 'Content-Type: x' -H 'User-Agent: x' -H 'X-Amz-Date: x' -H 'X-Amz-Security-Token: x' -H 'Authorization: x' -H 'Content-Length: x'
```
Please, I'm going crazy: what is going on? The instance has access to Cognito, since the curl command works, but the CLI gets stuck calling the endpoint. Why is the CLI not able to make the request?
```
aws --version
aws-cli/2.9.19 Python/3.9.16 Linux/6.1.19-30.43.amzn2023.x86_64 source/x86_64.amzn.2023 prompt/off
```
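A few hedged checks that can narrow down where the hang happens, since curl and the CLI can differ in proxy configuration and in how they reach the endpoint:
```
# Does the endpoint resolve oddly from inside the instance?
getent hosts cognito-idp.eu-central-1.amazonaws.com
# Is a proxy configured for the CLI but not for curl? The CLI honors these:
env | grep -i proxy
# Force a bounded attempt for comparison with the CLI's hang:
curl -sv --max-time 10 https://cognito-idp.eu-central-1.amazonaws.com/ -o /dev/null
```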
When working with WAFV2 and making a call to GetWebACL, the CustomResponse configuration is missing from the response.
If this configuration is subsequently used in a call to UpdateWebACL, the CustomResponse is lost.
This appears to be a serious bug that could cause undetected loss of configuration, as the responses from both API calls are successful.
The API documentation states:
> To modify a web ACL, do the following:
> 1) Retrieve it by calling GetWebACL
> 2) Update its settings as needed
> 3) Provide the complete web ACL specification to UpdateWebACL
https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-wafv2-2019-07-29.html#updatewebacl
For example, a WAFV2 rule with the following configuration:
```
{
  "Name": "RateLimit-3000",
  "Priority": 8,
  "Statement": {
    "RateBasedStatement": {
      "Limit": 3000,
      "AggregateKeyType": "IP"
    }
  },
  "Action": {
    "Block": {
      "CustomResponse": {
        "ResponseCode": 429,
        "CustomResponseBodyKey": "TooManyRequests"
      }
    }
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "RateLimit-3000"
  }
}
```
Is returned from an API call to GetWebACL as this:
```
{
  "Name": "RateLimit-3000",
  "Priority": 8,
  "VisibilityConfig": {
    "MetricName": "RateLimit-3000",
    "CloudWatchMetricsEnabled": true,
    "SampledRequestsEnabled": true
  },
  "Action": {
    "Block": {}
  },
  "Statement": {
    "RateBasedStatement": {
      "AggregateKeyType": "IP",
      "Limit": 3000
    }
  }
}
```
If that configuration is then passed back to an API call to UpdateWebACL, the CustomResponse in the Block Action is removed.
Is this a known bug, or is there another way to correctly update a WebACL without loss of configuration?
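For anyone trying to reproduce this, a hedged sketch of the round-trip using the wafv2 CLI operations (the names, ids, and file paths are placeholders):
```
# Fetch the current web ACL; the response includes a LockToken needed for updates:
aws wafv2 get-web-acl --name <acl-name> --scope REGIONAL --id <acl-id> > acl.json
# Push the (edited) configuration back; update-web-acl requires the full spec:
aws wafv2 update-web-acl --name <acl-name> --scope REGIONAL --id <acl-id> \
  --lock-token "$(jq -r '.LockToken' acl.json)" \
  --default-action file://default-action.json \
  --visibility-config file://visibility-config.json \
  --rules file://rules.json
```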
Hello guys, I'm writing in the hope of getting help with a problem I'm facing. Basically, I created a Spring Boot REST API for my app and used AWS Elastic Beanstalk to deploy it. Now, when I try to upload a file to my S3 bucket through the REST API, I get an error saying that the body of my request is too large, even for some image files no larger than 1 MB. How can I solve this issue?
Here's the error part of the logs of the app:
2023/03/21 05:12:26 [error] 2736#2736: *56 client intended to send too large body: 3527163 bytes, client: ..., server: , request: "POST /mobile/create_post HTTP/1.1", host: "..."
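A hedged sketch of the usual remedy: the nginx reverse proxy in front of the app defaults to a 1 MB `client_max_body_size`, which produces exactly this "client intended to send too large body" log line. On Amazon Linux 2 Elastic Beanstalk platforms it can be raised with a config file bundled in the application source (the 20M value is an arbitrary example):
```
# .platform/nginx/conf.d/client_max_body_size.conf
client_max_body_size 20M;
```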
I am using AWS Transcribe in the following format:
```
aws transcribe start-transcription-job --language-code en-US --media-format wav --media MediaFileUri=s3://my-bucket/my-audio-file.wav --output-bucket-name my-output-bucket
```
And in my output files, I am seeing that any number that is spoken is transcribed as digits. So, for example, "I just spent fifty dollars" is transcribed as "I just spent 50 dollars".
Is there a way to transcribe numbers in their written form and not as digits?
Hello, I see where AWS GovCloud mentions that endpoints are FIPS compliant, but it never mentions validated. So I was looking for confirmation that, just like in AWS commercial regions, in order to use FIPS validated endpoints I would need to call them specifically: add them to code, or otherwise use environment variables and the like for the AWS CLI or SDK.
I ask this question because in the past some people have argued that endpoints in GovCloud are FIPS by default and we don't need to specify them. This is probably a confusion of compliant versus validated, but I believe that for FIPS validated endpoints we do still need to specify them explicitly.
https://aws.amazon.com/compliance/fips/
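For reference, a hedged example of the two explicit mechanisms the question alludes to, as supported by AWS CLI v2 (the GovCloud endpoint shown is illustrative):
```
# Per-command endpoint override:
aws s3api list-buckets --endpoint-url https://s3-fips.us-gov-west-1.amazonaws.com
# Or opt in globally so the CLI/SDK picks FIPS endpoints where available:
export AWS_USE_FIPS_ENDPOINT=true
```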