Step fails when it is Poll action status for completion. Script execution times out. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.
I want to run an SSM Automation document. This document copies S3 bucket objects to another S3 bucket, but that takes 1 hour. After 15 minutes I received this message: "Step fails when it is Poll action status for completion. Script execution times out. Please refer to Automation Service Troubleshooting Guide for more diagnosis details." I set "timeoutSeconds: 30000", but it doesn't help. I also tried "timeoutSeconds: 300000" together with isCritical: false, maxAttempts: 3, etc. (with different parameters).
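For reference, this is roughly how I start and poll the execution from boto3 (a minimal sketch; the document name and parameters are placeholders, not my real ones):

```python
import time

import boto3

ssm = boto3.client("ssm")

# Start the Automation document (name and parameters are placeholders).
execution_id = ssm.start_automation_execution(
    DocumentName="CopyS3BucketObjects",
    Parameters={
        "SourceBucket": ["source-bucket"],
        "DestinationBucket": ["destination-bucket"],
    },
)["AutomationExecutionId"]

# Poll until the execution reaches a terminal state.
while True:
    status = ssm.get_automation_execution(
        AutomationExecutionId=execution_id
    )["AutomationExecution"]["AutomationExecutionStatus"]
    if status in ("Success", "Failed", "TimedOut", "Cancelled"):
        break
    time.sleep(30)

print(status)
```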
Amplify DataStore: deleting a specific list of data
My app is a social media clone. I want the following logic for my home feed:

- The Amplify API fetches a list of 10 items through pagination.
- These 10 items are shown to the user in the feed.
- When the user reaches the 7th item on screen, I fetch the next 10 items through an API call and save them to the local DataStore.
- When the user reaches the 10th (last) item, I show the next 10 items from the local DataStore and clear those 10 items from local storage.
- Then I fetch the next 10 items, and so on.
- If the user closes the app immediately, the 10 items remain in local storage.
- The next time the user opens the app, I show the data from local storage without buffering.

My issue is: if a user likes a post or makes a comment, I want to save that to the DataStore for a short time and, when a sync event occurs, update that like or comment in DynamoDB.

1. How do I delete specific data, such as a list of posts, when some event occurs?
2. Can I upload a new post from the DataStore? How?
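To make the flow concrete, here is a sketch of the buffering logic in Python (purely illustrative; `fetch_page` stands in for the paginated Amplify API call and `local_store` for DataStore):

```python
class FeedBuffer:
    PAGE_SIZE = 10
    PREFETCH_INDEX = 6  # the 7th item on screen (0-based index)

    def __init__(self, fetch_page):
        self.fetch_page = fetch_page      # stands in for the paginated API call
        self.local_store = []             # stands in for the local DataStore
        self.visible = self.fetch_page()  # the 10 items currently in the feed

    def on_item_viewed(self, index):
        # User reached the 7th item: prefetch the next page into the local store.
        if index == self.PREFETCH_INDEX and not self.local_store:
            self.local_store = self.fetch_page()
        # User reached the 10th item: show the buffered page and clear the store.
        elif index == self.PAGE_SIZE - 1 and self.local_store:
            self.visible = self.local_store
            self.local_store = []


# Example with four fake pages of posts.
pages = iter([[f"post-{n + i}" for i in range(10)] for n in range(0, 40, 10)])
feed = FeedBuffer(lambda: next(pages))

feed.on_item_viewed(6)  # prefetches post-10..post-19 into local_store
feed.on_item_viewed(9)  # swaps them into the visible feed
print(feed.visible[0])  # "post-10"
```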
What happens to EFS-based PVs when a node crashes?
I have some applications that use dynamically provisioned PVs backed by EFS (EFS CSI dynamic provisioning). I have an EKS cluster with managed node groups in different AZs. My question: what happens to these PVs when a node crashes for some reason and the pods are restarted on other nodes? Will EKS or k8s automatically remount these EFS-based PVs to the proper pods?
Attachment order for EBS volumes as /dev/nvme devices
Hello, we started seeing (from what I can find, our old instances from months ago don't exhibit this behavior) that the order of attached EBS volumes changes after the first reboot.

For example, we attach (using the AWS console) vol-011117cfde1966e5f as /dev/sdf and vol-0222290fbbd8a3b79 as /dev/sdg, and they immediately show up as /dev/nvme1n1 and /dev/nvme2n1. After a reboot they change order: vol-011117cfde1966e5f becomes /dev/nvme2n1 and vol-0222290fbbd8a3b79 becomes /dev/nvme1n1. This order then stays permanent no matter how many times you reboot again. In the console, vol-0111* is still listed first as sdf and vol-0222* second as sdg.

I'm seeing this behavior on CentOS 7.9, RockyLinux and AlmaLinux 8.6, and RockyLinux 9.0, so it doesn't seem to be specific to any operating system. I tested with t3a and m6i instance types.

I am aware that we can mount filesystems using UUIDs to ensure consistent ordering. I also know that https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html says "The block device driver can assign NVMe device names in a different order than you specified for the volumes in the block device mapping." The question is whether it's expected behavior that this order changes after the first reboot only, and then doesn't change again?
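In case it helps anyone reproduce this, here is a small sketch of how I map the NVMe device names back to EBS volume IDs, assuming the distribution's udev rules create the usual /dev/disk/by-id symlinks for EBS NVMe devices (they did on every OS I tested, but verify on yours):

```python
import os

BY_ID = "/dev/disk/by-id"
PREFIX = "nvme-Amazon_Elastic_Block_Store_"

# udev builds these symlinks from the NVMe model ("Amazon Elastic Block
# Store") and serial, which is the volume ID without its dash,
# e.g. "vol011117cfde1966e5f".
mapping = {}
for name in os.listdir(BY_ID):
    # Skip partitions and anything that isn't an EBS NVMe symlink.
    if not name.startswith(PREFIX) or "-part" in name:
        continue
    serial = name[len(PREFIX):].split("_")[0]  # drop any duplicate-link suffix
    volume_id = "vol-" + serial[len("vol"):]   # restore the dash
    device = os.path.realpath(os.path.join(BY_ID, name))  # e.g. /dev/nvme2n1
    mapping[volume_id] = device

for volume_id, device in sorted(mapping.items()):
    print(volume_id, "->", device)
```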
head_object request on S3 object after restore still shows old storage class
I'm trying to dynamically determine the storage class of an object that was restored from GLACIER to STANDARD, but when I make this request with boto3's head_object, I still keep getting the old storage class of the object. I've verified from the URL that the restore completed (version: null).
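The call I'm making looks roughly like this (simplified; bucket and key are placeholders). Note that for a temporarily restored object, the restore state is exposed in the separate Restore field rather than in StorageClass:

```python
import boto3

s3 = boto3.client("s3")

# Bucket and key are placeholders.
resp = s3.head_object(Bucket="my-bucket", Key="path/to/object")

# A restored Glacier object still reports its original storage class;
# the temporary restored copy shows up in the Restore field instead.
print(resp.get("StorageClass"))  # e.g. "GLACIER"
print(resp.get("Restore"))       # e.g. 'ongoing-request="false", expiry-date="..."'
```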
Understanding RDS PIOPS, EBS IO Balance (%), and EBS Byte Balance (%)
I have a PostgreSQL RDS instance of type r5.xlarge with 500 GB of gp2 SSD storage. As I understand it, with 500 GB of gp2 I get a baseline of 3 × 500 = 1,500 IOPS. Now I need to increase it to 2,500 IOPS. What should I do? As the documentation says, I have two options (please correct me if I'm wrong):

1. Increase the DB storage size to ~850 GB (3 × 850 ≈ 2,500 IOPS).
2. Change the disk type to io1 and set PIOPS = 2500.

500 GB of gp2 costs $115 per month. With option 1, I'd pay $195 per month. With option 2, I'd pay $115 + $500 (0.2 × 2500) = $615 per month. I know that io1 provides more throughput and a higher SLA, but do I really need to use io1 + PIOPS? In which cases should I use it (assume that I just need a 99% SLA)?

One more question: assume I have RDS with 1,000 GB of gp2, so the baseline is 3,000 IOPS. What happens if I change it to io1 and set PIOPS to 1000? What is the baseline IO of my RDS then: 3,000, 1,000, or 3,000 + 1,000?

I saw EBS IO Balance (%) and EBS Byte Balance (%) in the CloudWatch metrics. As I understand it, these are my remaining burst balances of I/O and throughput, but how do I know their absolute values (so I can count how much IO balance is remaining)? Say I have RDS with 1,000 GB of gp2; as I understand from the documentation, I get 3,000 IOPS, and if my RDS uses < 3,000 IOPS, the unused I/O accrues as credits to my balance. But what is the maximum balance I can accrue? I couldn't find documentation on that. Is there any way to monitor how RDS consumes my IO (independently of AWS)? Thank you so much.
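A minimal boto3 sketch of pulling these two balance metrics from CloudWatch, assuming a DB instance identifier of "my-db" (a placeholder):

```python
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch")

# The DB instance identifier is a placeholder.
for metric in ("EBSIOBalance%", "EBSByteBalance%"):
    resp = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db"}],
        StartTime=datetime.now(timezone.utc) - timedelta(hours=3),
        EndTime=datetime.now(timezone.utc),
        Period=300,
        Statistics=["Minimum"],
    )
    # Print the remaining burst balance (as a percentage) over time.
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(metric, point["Timestamp"], point["Minimum"])
```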
MGN for RDM and direct-attach disks (iSCSI, NBD)
I need to migrate multiple VMs with RDM and direct-attach disks (iSCSI, NBD). From what I've read, MGN agentless replication doesn't support migrating VMs with independent disks, Raw Device Mappings (RDM), or direct-attach disks (iSCSI, NBD). Does agent-based replication support migrating VMs with RDM or direct-attach disks (iSCSI, NBD)?
Unable to ingest data into an S3 bucket through Pentaho ETL
We are facing a "Pipe closed" error while ingesting data into an S3 bucket through Pentaho ETL. This particular error occurs when we run multiple flows of the same type in parallel at the same time. The data size is no more than 500 MB.
When I tried to use the SDK to complete an S3 data migration, I encountered a strange error, but the data was migrated
I migrated data from Huawei Cloud to the S3 bucket, but the job status is "failed". I checked the S3 bucket and the data files have been migrated. ![Enter image description here](/media/postImages/original/IMOqEYP0vJSIK-88IZPfZR5g)