All Questions
Sort by most recent
  • 1
  • 90 / page


We have 2 volumes that we can't detach or delete

We have 2 volumes (vol-63046619 and vol-2c076556) which need to be deleted. However, we can't delete them. They are not attached to any EC2 instances. Below are the commands we tried.

```
$ aws ec2 describe-volumes --region us-east-1 --volume-id vol-63046619
{
    "Volumes": [
        {
            "AvailabilityZone": "us-east-1d",
            "Attachments": [],
            "Tags": [
                {
                    "Value": "",
                    "Key": "Name"
                }
            ],
            "Encrypted": false,
            "VolumeType": "standard",
            "VolumeId": "vol-63046619",
            "State": "in-use",
            "SnapshotId": "snap-xxxxxxx",
            "CreateTime": "2012-10-01T20:29:01.000Z",
            "MultiAttachEnabled": false,
            "Size": 8
        }
    ]
}
$ aws ec2 delete-volume --region us-east-1 --volume-id vol-63046619
An error occurred (IncorrectState) when calling the DeleteVolume operation: The volume 'vol-63046619' is 'in-use'
$ aws ec2 detach-volume --region us-east-1 --volume-id vol-63046619
An error occurred (IncorrectState) when calling the DetachVolume operation: Volume 'vol-63046619' is in the 'available' state.
$ aws ec2 describe-volumes --region us-east-1 --volume-id vol-2c076556
{
    "Volumes": [
        {
            "AvailabilityZone": "us-east-1d",
            "Attachments": [],
            "Tags": [
                {
                    "Value": "xxxxxxxxxxxxx",
                    "Key": "aws:cloudformation:stack-name"
                },
                {
                    "Value": "",
                    "Key": "Name"
                },
                {
                    "Value": "xxxxxxxx",
                    "Key": "aws:cloudformation:logical-id"
                },
                {
                    "Value": "arn:aws:cloudformation:us-east-1:xxxxxxxxxxxxx:stack/xxxxxxxxxx/xxxxxx-xxxx-xxxx-xxxx-xxxx",
                    "Key": "aws:cloudformation:stack-id"
                }
            ],
            "Encrypted": false,
            "VolumeType": "standard",
            "VolumeId": "vol-2c076556",
            "State": "in-use",
            "SnapshotId": "",
            "CreateTime": "2012-10-01T20:28:41.000Z",
            "MultiAttachEnabled": false,
            "Size": 5
        }
    ]
}
$ aws ec2 delete-volume --region us-east-1 --volume-id vol-2c076556
An error occurred (IncorrectState) when calling the DeleteVolume operation: The volume 'vol-2c076556' is 'in-use'
$ aws ec2 detach-volume --region us-east-1 --volume-id vol-2c076556
An error occurred (IncorrectState) when calling the DetachVolume operation: Volume 'vol-2c076556' is in the 'available' state.
```

We also tried detach and force detach from the console, but it just gets stuck and doesn't help in this case.
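For the record, a minimal boto3 sketch of the same sequence (describe, force-detach only if an attachment is actually listed, then delete). If `Attachments` stays empty while `State` stays `in-use`, as shown above, the API will keep returning `IncorrectState` and the volume state likely has to be corrected on the AWS side (support case):

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

for volume_id in ("vol-63046619", "vol-2c076556"):
    volume = ec2.describe_volumes(VolumeIds=[volume_id])["Volumes"][0]
    print(volume_id, volume["State"], volume["Attachments"])

    try:
        if volume["Attachments"]:
            # API equivalent of the console's "force detach".
            ec2.detach_volume(VolumeId=volume_id, Force=True)
            ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
        ec2.delete_volume(VolumeId=volume_id)
    except ClientError as err:
        # With an empty Attachments list but State still "in-use", this keeps
        # failing with IncorrectState, which points to backend state drift.
        print(err)
```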
1 answer · 0 votes · 14 views · asked 6 hours ago

Cluster created with ParallelCluster will not run jobs

UPDATE: I answered this question for myself. I re-created the AMI, but manually this time (following these docs: https://docs.aws.amazon.com/parallelcluster/latest/ug/pcluster.update-cluster-v3.html#modify-an-aws-parallelcluster-ami), and it worked. Odd, because the documentation cautions against this, but it worked better than creating the AMI using pcluster. I can't delete the question, so here it is for the record.

I created a Slurm cluster using AWS ParallelCluster (the `pcluster` tool). Creation works fine and I can ssh to the head node. But when I submit jobs they do not run.

Using `srun`:

```
$ srun echo hello world
srun: error: Node failure on queue1-dy-t2micro-1
srun: Force Terminated job 1
srun: error: Job allocation 1 has been revoked
```

Using `sbatch`:

```
$ sbatch t.sh
Submitted batch job 2
$ squeue
JOBID PARTITION NAME   USER   ST TIME NODES NODELIST(REASON)
    2    queue1 t.sh   ubuntu CF 0:02     1 queue1-dy-t2micro-2
```

Above it looks like it is going to start a job on host `queue1-dy-t2micro-2`, but that host never comes up, or at least does not stay up, and after a little bit I see this:

```
$ squeue
JOBID PARTITION NAME   USER   ST TIME NODES NODELIST(REASON)
    2    queue1 t.sh   ubuntu PD 0:00     1 (BeginTime)
```

And then the job is never run. Anyone know what is going on? I did use a custom AMI which I also built with pcluster, but I am not sure if that is the issue, because the head node comes up just fine and it is using the same AMI.
0 answers · 0 votes · 2 views · asked 6 hours ago

RDS Custom for Oracle: OS disk full

Hello. We are currently using AWS RDS Custom for Oracle, and the OS disk has a root partition of only 10 GB even though a 42 GB volume is allocated. To add a cherry on top, there is a 16 GB swap area allocated in between, which makes it harder to expand into the remaining disk space. I could barely get out of the OS disk-full condition by deleting old log files in /var/log.

```
[root@ip- /]# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
devtmpfs                         7.6G     0  7.6G   0% /dev
tmpfs                             16G  7.5G  7.8G  49% /dev/shm
tmpfs                            7.7G  785M  6.9G  11% /run
tmpfs                            7.7G     0  7.7G   0% /sys/fs/cgroup
/dev/nvme0n1p1                   9.8G  7.9G  1.8G  83% /
/dev/nvme1n1                      25G   13G   11G  54% /rdsdbbin
/dev/mapper/dbdata01-lvdbdata01  296G   25G  271G   9% /rdsdbdata
tmpfs                            1.6G     0  1.6G   0% /run/user/61001
tmpfs                            1.6G     0  1.6G   0% /run/user/61005

[root@ip- aws]# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1                   259:0    0   42G  0 disk
├─nvme0n1p1               259:1    0   10G  0 part /
├─nvme0n1p128             259:3    0    1M  0 part
└─nvme0n1p127             259:2    0   16G  0 part [SWAP]
nvme3n1                   259:6    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme2n1                   259:5    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme5n1                   259:8    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme1n1                   259:4    0   25G  0 disk /rdsdbbin
nvme4n1                   259:7    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata

# du -sh ./*/
185M    ./bin/
143M    ./boot/
7.5G    ./dev/
43M     ./etc/
124K    ./home/
1.1G    ./lib/
195M    ./lib64/
16K     ./lost+found/
4.0K    ./media/
4.0K    ./mnt/
3.4G    ./opt/
0       ./proc/
13G     ./rdsdbbin/
25G     ./rdsdbdata/
13M     ./root/
785M    ./run/
46M     ./sbin/
4.0K    ./srv/
0       ./sys/
72K     ./tmp/
465M    ./usr/
2.5G    ./var/
```

What I'm planning to do is allocate a new swap volume, switch it on, remove the old swap, and expand the original root partition as far as I can.

1) Could this harm any monitoring task in AWS for RDS Custom?
2) Could some good soul at AWS look into the automation scripts for RDS Custom, put swap on a separate volume, and allocate the 42 GB volume fully to the OS? Otherwise, OS updates that need more disk space are likely to run into trouble.
0 answers · 0 votes · 8 views · asked 7 hours ago

Lambda function as a container image: how to find your handler URI

Hello, I have followed all of the tutorials on how to build an AWS Lambda function as a container image, and I am also using the AWS SAM SDK. What I don't understand is how to figure out the endpoint URL mapping from within my image to the Lambda function.

For example, my Docker image uses the AWS Python 3.9 base image, where I install some other packages and my Python requirements, and my handler is defined as:

```
summarizer_function_lambda.postHandler
```

The Python file being copied into the image has the same name as above, but without the `.postHandler`.

My AWS SAM template has:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: AWS Lambda dist-bart-summarizer function
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  DistBartSum:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      FunctionName: DistBartSum
      ImageUri: <my-image-url>
      PackageType: Image
      Events:
        SummarizerFunction:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /postHandler
            Method: POST
```

So what is my actual URI path for the POST call, either locally or once deployed on Lambda? When I try a curl command I get `{"message": "Internal server error"}`:

```
curl -XPOST "https://<my-aws-uri>/Prod/postHandler/" -d '{"content": "Test data.\r\n"}'
```

So I guess my question is: how do you "map" your handler definitions from within a container all the way to the endpoint URI?
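For what it's worth, the handler string and the URL are mapped independently: `Path: /postHandler` plus `Method: POST` is what API Gateway exposes (locally `sam local start-api` serves it at `http://127.0.0.1:3000/postHandler`; once deployed it is `https://<api-id>.execute-api.<region>.amazonaws.com/Prod/postHandler`), and Lambda then hands the request event to whatever function the string `summarizer_function_lambda.postHandler` names inside the image. A minimal sketch of what that handler module might look like (the body parsing and response shape here are assumptions, not the poster's code):

```python
# summarizer_function_lambda.py -- hypothetical sketch, not the actual code.
import json

def postHandler(event, context):
    # With an Api event, API Gateway's proxy integration delivers the raw
    # request: the path that was hit is in event["path"], the method in
    # event["httpMethod"], the POST payload in event["body"].
    payload = json.loads(event.get("body") or "{}")
    content = payload.get("content", "")

    # The function must return this proxy-response shape; if it raises or
    # returns something else, API Gateway replies with
    # {"message": "Internal server error"}.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"summary": content[:100]}),
    }
```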
0 answers · 0 votes · 10 views · asked 7 hours ago

Athena Error: Permission Denied on S3 Path.

I am trying to execute Athena queries from a Lambda function but I am getting this error:

`Athena Query Failed to run with Error Message: Permission denied on S3 path: s3://bkt_logs/apis/2020/12/16/14`

The bucket `bkt_logs` is the bucket used by AWS Glue crawlers to crawl through all the sub-folders and populate the Athena table that I am querying. Also, `bkt_logs` is an encrypted bucket.

These are the policies that I have assigned to the Lambda:

```
[
    {
        "Action": [
            "s3:Get*",
            "s3:List*",
            "s3:PutObject",
            "s3:DeleteObject"
        ],
        "Resource": "arn:aws:s3:::athena-query-results/*",
        "Effect": "Allow",
        "Sid": "AllowS3AccessToSaveAndReadQueryResults"
    },
    {
        "Action": [
            "s3:*"
        ],
        "Resource": "arn:aws:s3:::bkt_logs/*",
        "Effect": "Allow",
        "Sid": "AllowS3AccessForGlueToReadLogs"
    },
    {
        "Action": [
            "athena:GetQueryExecution",
            "athena:StartQueryExecution",
            "athena:StopQueryExecution",
            "athena:GetWorkGroup",
            "athena:GetDatabase",
            "athena:BatchGetQueryExecution",
            "athena:GetQueryResults",
            "athena:GetQueryResultsStream",
            "athena:GetTableMetadata"
        ],
        "Resource": [
            "*"
        ],
        "Effect": "Allow",
        "Sid": "AllowAthenaAccess"
    },
    {
        "Action": [
            "glue:GetTable",
            "glue:GetDatabase",
            "glue:GetPartitions"
        ],
        "Resource": [
            "*"
        ],
        "Effect": "Allow",
        "Sid": "AllowGlueAccess"
    },
    {
        "Action": [
            "kms:CreateGrant",
            "kms:DescribeKey",
            "kms:Decrypt"
        ],
        "Resource": [
            "*"
        ],
        "Effect": "Allow",
        "Sid": "AllowKMSAccess"
    }
]
```

What seems to be wrong here? What should I do to resolve this issue?
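One thing that may be worth checking (an assumption, since this Athena error can have several causes): the statements above grant object-level access on `arn:aws:s3:::bkt_logs/*` and `arn:aws:s3:::athena-query-results/*`, but bucket-level actions such as `s3:ListBucket` and `s3:GetBucketLocation` are evaluated against the bucket ARN itself, so as written they are effectively not granted. A sketch of adding them with boto3 (the role and policy names are placeholders for the Lambda's actual execution role):

```python
import json
import boto3

# Placeholder: substitute the Lambda's real execution role name.
ROLE_NAME = "my-athena-lambda-role"

# Bucket-level actions must target the bucket ARN, not "<bucket>/*".
bucket_level_statement = {
    "Sid": "AllowBucketLevelAccess",
    "Effect": "Allow",
    "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
    "Resource": [
        "arn:aws:s3:::bkt_logs",
        "arn:aws:s3:::athena-query-results",
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="athena-bucket-level-access",
    PolicyDocument=json.dumps(
        {"Version": "2012-10-17", "Statement": [bucket_level_statement]}
    ),
)
```

If the error persists after that, the KMS key policy on the encrypted bucket's key is the other usual suspect, since `kms:Decrypt` must be allowed by the key policy as well as the IAM policy.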
1 answer · 0 votes · 19 views · asked 12 hours ago

Django App in ECS Container Cannot Connect to S3 in Gov Cloud

I have a container running in an EC2 instance on ECS. The container is hosting a django based application that utilizes S3 and RDS for its file storage and db needs respectively. I have appropriately configured my VPC, Subnets, VPC endpoints, Internet Gateway, roles, security groups, and other parameters such that I am able to host the site, connect to the RDS instance, and I can even access the site.

The issue is with the connection to S3. When I try to run the command `python manage.py collectstatic --no-input`, which should upload/update any new/modified files to S3 as part of the application set up, the program hangs and will not continue. No files are transferred to the already set up S3 bucket.

**Details of the set up:**

All of the below is hosted on AWS Gov Cloud

**VPC and Subnets**

* 1 VPC located in Gov Cloud East with 2 availability zones (AZ) and one private and public subnet in each AZ (4 total subnets)
* The 3 default routing tables (1 for each private subnet, and 1 for the two public subnets together)
* DNS hostnames and DNS resolution are both enabled

**VPC Endpoints**

All endpoints have the "vpce-sg" security group attached and are associated to the above vpc

* s3 gateway endpoint (set up to use the two private subnet routing tables)
* ecr-api interface endpoint
* ecr-dkr interface endpoint
* ecs-agent interface endpoint
* ecs interface endpoint
* ecs-telemetry interface endpoint
* logs interface endpoint
* rds interface endpoint

**Security Groups**

* Elastic Load Balancer Security Group (elb-sg)
  * Used for the elastic load balancer
  * Only allows inbound traffic from my local IP
  * No outbound restrictions
* ECS Security Group (ecs-sg)
  * Used for the EC2 instance in ECS
  * Allows all traffic from the elb-sg
  * Allows http:80, https:443 from vpce-sg for s3
  * Allows postgresql:5432 from vpce-sg for rds
  * No outbound restrictions
* VPC Endpoints Security Group (vpce-sg)
  * Used for all vpc endpoints
  * Allows http:80, https:443 from ecs-sg for s3
  * Allows postgresql:5432 from ecs-sg for rds
  * No outbound restrictions

**Elastic Load Balancer**

* Set up to use an Amazon Certificate https connection with a domain managed by GoDaddy since Gov Cloud route53 does not allow public hosted zones
* Listener on http permanently redirects to https

**Roles**

* ecsInstanceRole (Used for the EC2 instance on ECS)
  * Attached policies: AmazonS3FullAccess, AmazonEC2ContainerServiceforEC2Role, AmazonRDSFullAccess
  * Trust relationships: ec2.amazonaws.com
* ecsTaskExecutionRole (Used for executionRole in task definition)
  * Attached policies: AmazonECSTaskExecutionRolePolicy
  * Trust relationships: ec2.amazonaws.com, ecs-tasks.amazonaws.com
* ecsRunTaskRole (Used for taskRole in task definition)
  * Attached policies: AmazonS3FullAccess, CloudWatchLogsFullAccess, AmazonRDSFullAccess
  * Trust relationships: ec2.amazonaws.com, ecs-tasks.amazonaws.com

**S3 Bucket**

* Standard bucket set up in the same Gov Cloud region as everything else

**Trouble Shooting**

If I bypass the connection to s3 the application successfully launches and I can connect to the website, but since static files are supposed to be hosted on s3 there is less formatting and images are missing.

Using a bastion instance I was able to ssh into the EC2 instance running the container and successfully test my connection to s3 from there using `aws s3 ls s3://BUCKET_NAME`

If I connect to a shell within the application container itself and I try to connect to the bucket using...
```
s3 = boto3.resource('s3')
bucket = s3.Bucket(BUCKET_NAME)
s3.meta.client.head_bucket(Bucket=bucket.name)
```

I receive a timeout error...

```
File "/.venv/lib/python3.9/site-packages/urllib3/connection.py", line 179, in _new_conn
    raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<botocore.awsrequest.AWSHTTPSConnection object at 0x7f3da4467190>, 'Connection to BUCKET_NAME.s3.amazonaws.com timed out. (connect timeout=60)')
...
File "/.venv/lib/python3.9/site-packages/botocore/httpsession.py", line 418, in send
    raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://BUCKET_NAME.s3.amazonaws.com/"
```

Based on [this article](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#vpc-endpoints-policies-s3) I think this may have something to do with the fact that I am using the GoDaddy DNS servers, which may be preventing proper URL resolution for S3.

> If you're using the Amazon DNS servers, you must enable both DNS hostnames and DNS resolution for your VPC. If you're using your own DNS server, ensure that requests to Amazon S3 resolve correctly to the IP addresses maintained by AWS.

I am unsure of how to ensure that requests to Amazon S3 resolve correctly to the IP addresses maintained by AWS. Perhaps I need to set up another private DNS on route53? I have tried a very similar set up for this application in AWS non-Gov Cloud using route53 public DNS instead of GoDaddy and there is no issue connecting to S3.

Please let me know if there is any other information I can provide to help.
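One detail that may be worth ruling out (an observation, not a confirmed fix): the timeout above is against `BUCKET_NAME.s3.amazonaws.com`, which is the commercial-partition S3 endpoint, while a GovCloud bucket behind an S3 gateway endpoint is reached via the regional endpoint (for example `BUCKET_NAME.s3.us-gov-east-1.amazonaws.com`). Boto3 builds that hostname from the client's region, so pinning the region inside the container is a quick test. A minimal sketch, assuming `us-gov-east-1` and keeping the question's placeholder bucket name:

```python
import boto3
from botocore.config import Config

# Pin the client to the GovCloud region so boto3 builds the regional endpoint
# that the S3 gateway endpoint's prefix list actually covers. Region and
# bucket name are assumptions/placeholders.
session = boto3.session.Session(region_name="us-gov-east-1")
s3 = session.resource(
    "s3",
    config=Config(connect_timeout=5, retries={"max_attempts": 1}),
)
s3.meta.client.head_bucket(Bucket="BUCKET_NAME")
print("reached S3 through the regional endpoint")
```

If this succeeds where the default client hangs, the corresponding django-storages change would be setting the region (it exposes `AWS_S3_REGION_NAME`, and `AWS_S3_ENDPOINT_URL` if an explicit endpoint is preferred).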
1 answer · 0 votes · 20 views · asked 12 hours ago

Problem uploading media to AWS S3 with Django Storages / Boto3 (from a website on Lambda)

Hi all! I have a Django website which is deployed on AWS Lambda. All the static/media is stored in the S3 bucket. I managed to serve static from S3 and it works fine; however, when trying to upload media through admin (I was trying to add an article with a pic attached to it), I get a message "Endpoint request timed out". Here is my AWS and storage configuration:

**ukraine101.aws.utils.py**

```
from storages.backends.s3boto3 import S3Boto3Storage

StaticRootS3BotoStorage = lambda: S3Boto3Storage(location='static')
MediaRootS3BotoStorage = lambda: S3Boto3Storage(location='media')
```

**settings.py**

```
STATICFILES_DIRS = [BASE_DIR / "static"]
STATIC_URL = 'https://<my-bucket-name>.s3.amazonaws.com/'
MEDIA_URL = 'https://<my-bucket-name>.s3.amazonaws.com/media/'
MEDIA_ROOT = MEDIA_URL
DEFAULT_FILE_STORAGE = 'ukraine101.aws.utils.MediaRootS3BotoStorage'
STATICFILES_STORAGE = 'ukraine101.aws.utils.StaticRootS3BotoStorage'
AWS_STORAGE_BUCKET_NAME = '<my-bucket-name>'
AWS_S3_REGION_NAME = 'us-east-1'
AWS_ACCESS_KEY_ID = '<my-key-i-dont-show>'
AWS_SECRET_ACCESS_KEY = '<my-secret-key-i-dont-show>'
AWS_S3_SIGNATURE_VERSION = 's3v4'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
AWS_S3_VERIFY = True
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATICFILES_LOCATION = 'static'
```

**My Article model:**

```
class Article(models.Model):
    title = models.CharField(max_length=250, )
    summary = models.TextField(blank=False, null=False, )
    image = models.ImageField(blank=False, null=False, upload_to='articles/', )
    text = RichTextField(blank=False, null=False, )
    category = models.ForeignKey(Category, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
    featured = models.BooleanField(default=False)
    date_created = models.DateField(auto_now_add=True)
    slug = AutoSlugField(populate_from='title')
    related_book = models.ForeignKey(Book, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)

    def get_absolute_url(self):
        return reverse("articles:article-detail", kwargs={"slug": self.slug})

    def get_comments(self):
        return Comment.objects.filter(article=self.id)

    author = models.ForeignKey(User, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
```

**AWS bucket policy:**

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:GetObjectAcl"
            ],
            "Resource": "arn:aws:s3:::<my-bucket-name>/*"
        }
    ]
}
```

**CORS:**

```
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "POST", "PUT", "HEAD"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
```

**User permissions policies (there are two attached):**

Policy 1:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets"],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads",
                "s3:ListBucketVersions"
            ],
            "Resource": "arn:aws:s3:::<my-bucket-name>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:*Object*",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::<my-bucket-name>/*"
        }
    ]
}
```

Policy 2:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "s3-object-lambda:*"
            ],
            "Resource": [
                "arn:aws:s3:::<my-bucket-name>",
                "arn:aws:s3:::<my-bucket-name>/*"
            ]
        }
    ]
}
```

Please, if someone knows what can be wrong and why this timeout is happening, help me.
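As a point of reference: "Endpoint request timed out" is the message API Gateway returns when the integration exceeds its roughly 29-second limit, so the admin upload is most likely stalling somewhere between the Lambda and S3 rather than failing on permissions outright. A small check that can help split the problem (a sketch to run locally or inside the function with the same credentials; the bucket and key are placeholders):

```python
import io
import boto3
from botocore.config import Config

# Try a direct upload with short timeouts, bypassing Django entirely.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    config=Config(connect_timeout=5, read_timeout=10, retries={"max_attempts": 1}),
)
s3.upload_fileobj(io.BytesIO(b"hello"), "<my-bucket-name>", "media/upload-test.txt")
print(s3.head_object(Bucket="<my-bucket-name>", Key="media/upload-test.txt")["ContentLength"])
```

If this direct upload works with the same credentials, the usual remaining suspects are the Lambda's networking (for example a VPC-attached function without a NAT gateway or S3 endpoint) or an upload that simply takes longer than the gateway allows, rather than the Django storage settings themselves.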
0 answers · 0 votes · 5 views · asked 14 hours ago

Invalid security token error when executing nested step function on Step Functions Local

Are nested step functions supported on AWS Step Functions Local? I am trying to create 2 step functions, where the outer one executes the inner one. However, when trying to execute the outer step function, I am getting an error: "The security token included in the request is invalid".

To reproduce, use the latest `amazon/aws-stepfunctions-local:1.10.1` Docker image. Launch the container with the following command:

```sh
docker run -p 8083:8083 -e AWS_DEFAULT_REGION=us-east-1 -e AWS_ACCESS_KEY_ID=TESTID -e AWS_SECRET_ACCESS_KEY=TESTKEY amazon/aws-stepfunctions-local
```

Then create a simple HelloWorld _inner_ step function in the Step Functions Local container:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 create-state-machine --definition "{\
  \"Comment\": \"A Hello World example of the Amazon States Language using a Pass state\",\
  \"StartAt\": \"HelloWorld\",\
  \"States\": {\
    \"HelloWorld\": {\
      \"Type\": \"Pass\",\
      \"End\": true\
    }\
  }}" --name "HelloWorld" --role-arn "arn:aws:iam::012345678901:role/DummyRole"
```

Then add a simple _outer_ step function that executes the HelloWorld one:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 create-state-machine --definition "{\
  \"Comment\": \"OuterTestComment\",\
  \"StartAt\": \"InnerInvoke\",\
  \"States\": {\
    \"InnerInvoke\": {\
      \"Type\": \"Task\",\
      \"Resource\": \"arn:aws:states:::states:startExecution\",\
      \"Parameters\": {\
        \"StateMachineArn\": \"arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld\"\
      },\
      \"End\": true\
    }\
  }}" --name "HelloWorldOuter" --role-arn "arn:aws:iam::012345678901:role/DummyRole"
```

Finally, start execution of the outer step function:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 start-execution --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorldOuter
```

The execution fails with the _The security token included in the request is invalid_ error in the logs:

```
arn:aws:states:us-east-1:123456789012:execution:HelloWorldOuter:b9627a1f-55ed-41a6-9702-43ffe1cacc2c : {"Type":"TaskSubmitFailed","PreviousEventId":4,"TaskSubmitFailedEventDetails":{"ResourceType":"states","Resource":"startExecution","Error":"StepFunctions.AWSStepFunctionsException","Cause":"The security token included in the request is invalid. (Service: AWSStepFunctions; Status Code: 400; Error Code: UnrecognizedClientException; Request ID: ad8a51c0-b8bf-42a0-a78d-a24fea0b7823; Proxy: null)"}}
```

Am I doing something wrong? Is any additional configuration necessary?
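A quick way to narrow this down (a diagnostic sketch, not a fix): start the inner machine directly against the local endpoint with the same dummy credentials. If that works, the failure is confined to the outer machine's `states:startExecution` service call, which the emulator appears to be sending to the real AWS endpoint (hence `UnrecognizedClientException` for the TESTID/TESTKEY credentials); the Step Functions Local configuration docs list per-service endpoint overrides, so it is worth checking whether one exists for Step Functions itself that can be pointed back at `http://localhost:8083`.

```python
import boto3

# Same dummy credentials and account ID as the docker run above.
sfn = boto3.client(
    "stepfunctions",
    endpoint_url="http://localhost:8083",
    region_name="us-east-1",
    aws_access_key_id="TESTID",
    aws_secret_access_key="TESTKEY",
)

# Start the inner machine directly; if this succeeds, the problem is only in
# the emulator's outbound startExecution call from the outer machine.
execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld"
)
print(sfn.describe_execution(executionArn=execution["executionArn"])["status"])
```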
0 answers · 0 votes · 7 views · asked a day ago

Power Users can't invite external users?

In the WorkDocs documentation, at [this link](https://docs.aws.amazon.com/workdocs/latest/adminguide/manage-sites.html#ext-invite-settings) in the section on "Security - external invitations", it claims that Power Users can be set up to invite external users. However, that option doesn't exist in the administration panel.

Our company has one administrator for WorkDocs, but could potentially have a few hundred power users. Those power users will have control over their allocated 1 TB of space (and be on their own site), and they need to be able to invite external users to view a folder. Each power user might have a hundred or so external users that need to view folders in their space.

What won't work at all is those power users having to contact the admin to send a link to every single external user they need to view their folders, because that could potentially be 20,000+ external invitations piled onto the one admin. It also won't work to make each of those power users an admin, because you'd run into the possibility that they could inadvertently create and/or invite paid users, and the cost to our company would skyrocket unnecessarily.

Bottom line, we need power users to be able to invite external users and ONLY external users--they should have ZERO ability to create or invite paid users. Those external users need to be able to view the contents of folders that the power user sets up. Can this be done?

Thank you,
-Brent
0 answers · 0 votes · 6 views · asked a day ago

Run EC2 Fleet with on-demand instances across AZs

Hello, I wanted to start an EC2 Fleet with on-demand instances only, and I wanted them to be distributed across availability zones. Unfortunately, I couldn't find a way to do that, and all the instances are always started in a single AZ. That is not a problem with spot instances, as they spawn in all the AZs. I tried different allocation strategies and priorities, but nothing helped.

I was trying to do this in AWS CDK, using both `CfnEC2Fleet` [link](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.CfnEC2Fleet.html) as well as `CfnSpotFleet` [link](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.CfnSpotFleet.html). Below is my code. Is there a way to achieve that, or do I need to use something else? Thank you.

```typescript
const spotFleet = new CfnSpotFleet(stack, 'EC2-Fleet', {
  spotFleetRequestConfigData: {
    allocationStrategy: 'lowestPrice',
    targetCapacity: 8,
    iamFleetRole: fleetRole.roleArn,
    spotMaintenanceStrategies: {
      capacityRebalance: {
        replacementStrategy: 'launch-before-terminate',
        terminationDelay: 120,
      }
    },
    onDemandTargetCapacity: 4,
    instancePoolsToUseCount: stack.availabilityZones.length,
    launchTemplateConfigs: [{
      launchTemplateSpecification: {
        launchTemplateId: launchTemplate.launchTemplateId,
        version: launchTemplate.latestVersionNumber,
      },
      overrides: privateSubnets.map(subnet => ({
        availabilityZone: subnet.subnetAvailabilityZone,
        subnetId: subnet.subnetId,
      })),
    }],
  }
});

const ec2Fleet = new CfnEC2Fleet(stack, 'EC2-EcFleet', {
  targetCapacitySpecification: {
    totalTargetCapacity: 6,
    onDemandTargetCapacity: 6,
    defaultTargetCapacityType: 'on-demand',
  },
  replaceUnhealthyInstances: true,
  onDemandOptions: {
    allocationStrategy: 'prioritized',
  },
  launchTemplateConfigs: [{
    launchTemplateSpecification: {
      launchTemplateId: launchTemplate.launchTemplateId,
      version: launchTemplate.latestVersionNumber,
    },
    overrides: privateSubnets.map(subnet => ({
      availabilityZone: subnet.subnetAvailabilityZone,
      subnetId: subnet.subnetId,
    })),
  }]
});
```

Where `launchTemplate` is an instance of [`LaunchTemplate`](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.LaunchTemplate.html) and `privateSubnets` is an array of [`Subnet`](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.Subnet.html) instances, one for each AZ.
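In case it helps, EC2 Fleet's on-demand allocation strategies (`lowest-price`/`prioritized`) pick from the override list rather than spreading capacity, so a commonly used alternative for on-demand capacity that must be balanced across AZs is an EC2 Auto Scaling group spanning one subnet per AZ, since ASGs rebalance across their AZs by design. A minimal boto3 sketch of that alternative (the CDK `AutoScalingGroup` construct is the equivalent); all names, IDs, and sizes below are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Placeholder launch template ID and subnet IDs; these correspond to the
# launchTemplate and privateSubnets objects in the question (one subnet per AZ).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="on-demand-across-azs",
    MinSize=6,
    MaxSize=6,
    DesiredCapacity=6,
    LaunchTemplate={"LaunchTemplateId": "lt-0123456789abcdef0", "Version": "$Latest"},
    # One subnet per AZ; the ASG balances instances across these AZs.
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333",
)
```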
0 answers · 0 votes · 8 views · asked a day ago

Config Advanced Query Editor - Return ConfigRuleName

I am using the AWS Config service across multiple accounts within my Organization. My goal is to write a query which will give me a full list of non-compliant resources in all regions, in all accounts. I have an aggregator which has the visibility for this task. The advanced query I am using is similar to the AWS [example in the docs:](https://docs.aws.amazon.com/config/latest/developerguide/example-query.html)

```
SELECT
  configuration.targetResourceId,
  configuration.targetResourceType,
  configuration.complianceType,
  configuration.configRuleList,
  accountId,
  awsRegion
WHERE
  configuration.configRuleList.complianceType = 'NON_COMPLIANT'
```

However, the ConfigRuleName is nested within `configuration.configRuleList`, as there could be multiple config rules (hence the list) assigned to `configuration.targetResourceId`.

How can I write a query that picks apart the JSON list returned this way? The results do not export well to CSV: a JSON object embedded in a CSV cell is unsuitable if we want to import the results into a spreadsheet for viewing. I have tried using `configuration.configRuleList.configRuleName`, but this only returns `-`, even when the list has a single object within it.

If there is a better way to create a centralised place to view all my Org's non-compliant resources, I would like to learn about it. Thanks in advance.
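As far as I can tell the advanced query language has no way to explode a list column into rows, so one workable pattern is to run the same query through the API and flatten `configRuleList` client-side, producing one CSV row per (resource, rule). A sketch with boto3; the aggregator name and output file are placeholders:

```python
import csv
import json
import boto3

QUERY = """
SELECT configuration.targetResourceId, configuration.targetResourceType,
       configuration.configRuleList, accountId, awsRegion
WHERE configuration.configRuleList.complianceType = 'NON_COMPLIANT'
"""

config = boto3.client("config")
rows, token = [], ""
while True:
    kwargs = {"ConfigurationAggregatorName": "my-org-aggregator", "Expression": QUERY}
    if token:
        kwargs["NextToken"] = token
    page = config.select_aggregate_resource_config(**kwargs)
    for result in page["Results"]:  # each result is a JSON string
        item = json.loads(result)
        for rule in item["configuration"]["configRuleList"]:
            if rule["complianceType"] == "NON_COMPLIANT":
                rows.append({
                    "accountId": item["accountId"],
                    "awsRegion": item["awsRegion"],
                    "resourceId": item["configuration"]["targetResourceId"],
                    "resourceType": item["configuration"]["targetResourceType"],
                    "configRuleName": rule["configRuleName"],
                })
    token = page.get("NextToken", "")
    if not token:
        break

if rows:
    with open("non_compliant.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```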
0 answers · 0 votes · 5 views · asked a day ago

Lambda function throwing: TooManyRequestsException, Rate exceeded

When the Lambda function is invoked, occasionally I see the following error for the function, even though there is no load running and not many Lambda functions are running. The throttling and quota limits are set to the defaults in the Mumbai region, and this error is observed even when no load is running. How do I determine which configuration needs to be increased to address this problem?

```
2022-05-17T10:01:13.555Z 84379818-c8b8-44a3-b353-2c9f7f8f5e48 ERROR Invoke Error
{
    "errorType": "TooManyRequestsException",
    "errorMessage": "Rate exceeded",
    "code": "TooManyRequestsException",
    "message": "Rate exceeded",
    "time": "2022-05-17T10:01:13.553Z",
    "requestId": "c3dc9f1b-d7c3-40d5-bec7-78e19dc2e033",
    "statusCode": 400,
    "retryable": true,
    "stack": [
        "TooManyRequestsException: Rate exceeded",
        "    at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:52:27)",
        "    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:106:20)",
        "    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:78:10)",
        "    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:686:14)",
        "    at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)",
        "    at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)",
        "    at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10",
        "    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)",
        "    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:688:12)",
        "    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:116:18)"
    ]
}
```
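For what it's worth, the stack trace points at `/var/runtime/node_modules/aws-sdk/...`, i.e. the 400 is returned to an AWS SDK call made inside the function (a downstream service API throttling that call), not by Lambda's own invoke limit. So the first step is usually identifying which client call is failing and giving that client retries with backoff. Sketched here with boto3 for illustration (the service name is a placeholder; the Node.js aws-sdk has equivalent retry options such as `maxRetries`):

```python
import boto3
from botocore.config import Config

# "adaptive" retry mode backs off on throttling errors and also rate-limits
# client-side to stay under the downstream API's request rate.
# "dynamodb" is only a placeholder for whichever service call is throttled.
throttling_safe = Config(retries={"max_attempts": 10, "mode": "adaptive"})
client = boto3.client("dynamodb", config=throttling_safe)
```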
1 answer · 0 votes · 16 views · asked 2 days ago

Rekognition: error when trying to detect faces with s3 object name containing a colon (:)

Rekognition actually works fine, but when I use a filename containing a colon (:) for the S3Object, I get an error. This is very problematic for me because all my files already have colons and I can't change their names.

If I use this, it works fine:

```
{
  "Image": {
    "S3Object": {
      "Bucket": "console-sample-images",
      "Name": "skateboard.jpg"
    }
  }
}
```

But if I use a name with a colon like this, it gives me an error:

```
{
  "Image": {
    "S3Object": {
      "Bucket": "console-sample-images",
      "Name": "skate:board.jpg"
    }
  }
}
```

Error output:

```
{"name":"Error","content":"{\"__type\":\"InvalidS3ObjectException\",\"Code\":\"InvalidS3ObjectException\",\"Message\":\"Unable to get object metadata from S3. Check object key, region and/or access permissions.\"}","message":"faultCode:Server.Error.Request faultString:'null' faultDetail:'null'","rootCause":{"errorID":2032,"target":{"bytesLoaded":174,"dataFormat":"text","bytesTotal":174,"data":"{\"__type\":\"InvalidS3ObjectException\",\"Code\":\"InvalidS3ObjectException\",\"Message\":\"Unable to get object metadata from S3. Check object key, region and/or access permissions.\"}"},"text":"Error #2032: Stream Error. URL: https://rekognition.eu-west-1.amazonaws.com","currentTarget":{"bytesLoaded":174,"dataFormat":"text","bytesTotal":174,"data":"{\"__type\":\"InvalidS3ObjectException\",\"Code\":\"InvalidS3ObjectException\",\"Message\":\"Unable to get object metadata from S3. Check object key, region and/or access permissions.\"}"},"type":"ioError","bubbles":false,"eventPhase":2,"cancelable":false},"errorID":0,"faultCode":"Server.Error.Request","faultDetail":null,"faultString":""}
```

Is there a workaround for this problem (encoding the ':' a certain way)? Thank you for your help.
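Colons are allowed in S3 object keys (though S3 lists them among characters that may need special handling), and the error payload above (faultCode/ioError fields) looks like it comes from a Flex/ActionScript HTTP layer, so one possibility is that the client is encoding the key (e.g. as `%3A`) or otherwise altering it before it reaches Rekognition, rather than the key itself being the problem. As a comparison point, a sketch of the same call made directly with boto3, passing the key exactly as stored (bucket, key, and region taken from the question):

```python
import boto3

# Pass the key verbatim, colon included, with no URL encoding.
rekognition = boto3.client("rekognition", region_name="eu-west-1")
response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "console-sample-images", "Name": "skate:board.jpg"}},
    Attributes=["DEFAULT"],
)
print(len(response["FaceDetails"]), "face(s) detected")
```

If this direct call succeeds, the fix belongs in how the original client builds the request; if it fails the same way, the key string being sent probably doesn't match the stored object exactly (InvalidS3ObjectException is also raised for key/region mismatches).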
1 answer · 0 votes · 7 views · asked 2 days ago