How long does it take CloudTrail to create an insights event?
I enabled CloudTrail Insights one month ago, but I can't find any Insights events. How long does it take CloudTrail to create an Insights event for unusual activity?
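One thing worth ruling out first is whether Insights is actually enabled on the trail in question. A command sketch (the trail name is a placeholder); note that Insights events only appear after CloudTrail has established a baseline of normal write-API activity and then detects a deviation from it, so a quiet account may legitimately produce none:

```shell
# Sketch: "my-trail" is a placeholder; use your trail's name.
# Confirm that Insights selectors are configured on the trail.
aws cloudtrail get-insight-selectors --trail-name my-trail

# List any Insights events recorded so far.
aws cloudtrail lookup-events --event-category insight --max-results 10
```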
Auto Scaling scale-in policy timing
I have an Auto Scaling group with a target-tracking policy that scales out after my instance hits 75% CPU. But I want to know how long it will take to scale in and terminate the instance the group launched. Right now it takes about 25 minutes, which is a lot. Can I set a custom time for the instance to be terminated during scale-in? Thanks
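For context, target tracking creates its scale-in alarm with a deliberately conservative evaluation window (around 15 one-minute datapoints), and that window isn't directly configurable on the policy itself. One common workaround is to add a step-scaling policy for the scale-in side, driven by your own CloudWatch alarm whose evaluation period you control. A command sketch, assuming a group named `my-asg` (all names and thresholds here are placeholders):

```shell
# Sketch: step-scaling policy that removes one instance when its alarm fires.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name fast-scale-in \
  --policy-type StepScaling \
  --adjustment-type ChangeInCapacity \
  --step-adjustments MetricIntervalUpperBound=0,ScalingAdjustment=-1

# Alarm that fires after 5 one-minute periods below 25% CPU,
# instead of the ~15 minutes target tracking waits by default.
aws cloudwatch put-metric-alarm \
  --alarm-name my-asg-low-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=my-asg \
  --statistic Average --period 60 --evaluation-periods 5 \
  --threshold 25 --comparison-operator LessThanThreshold \
  --alarm-actions <PolicyARN-returned-by-put-scaling-policy>
```

The trade-off is that a shorter window makes scale-in more aggressive, so a brief dip in load can terminate instances you still need.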
Failed to pull data from Salesforce to RDS Postgres via AppFlow
I have set up an AppFlow flow to integrate AWS RDS Postgres with Salesforce. I can successfully connect to Salesforce and pull data from Salesforce into S3 files. However, when I try to integrate Salesforce with RDS Postgres, I get an "Error while performing write operation" error. I am new to AWS; can someone help me figure out what the potential cause could be? Thanks.
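The generic "write operation" error usually has a more specific message attached to the individual flow run. A command sketch for pulling it up (the flow name is a placeholder); common underlying causes include a field/type mismatch between the Salesforce object and the Postgres table, or the connector not being able to reach the RDS instance over the network:

```shell
# Sketch: "sf-to-rds" is a placeholder flow name.
# The execution records often carry the real error detail.
aws appflow describe-flow-execution-records --flow-name sf-to-rds --max-results 5
```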
CloudFormation: accidentally changed bucket name
Hi, I have a CloudFormation template which creates a couple of resources: an IAM user, an access key, roles, and an S3 bucket. I re-used the same template to create another bucket and ran the command from the CLI. However, I neglected to update the `--stack-name` parameter, so the update was applied to the existing stack rather than a new one. The update had a new value for the bucket name, so it created a new bucket and then attempted to delete the old bucket, which contains data and therefore could not be deleted.

I now cannot roll back to a state where the stack uses the existing bucket name, so I can no longer manage the original stack.

I have done a test run by creating a bucket and then importing it into a test stack. The import works, but once imported it does not allow me to make any changes to the imported bucket. A current setting on the bucket is:

```
ObjectLockEnabled: true
VersioningConfiguration:
  Status: Enabled
```

If I include this option in the CloudFormation template, it requires replacement, yet the setting is already in place; I just need the template to match what exists. Any advice?
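For reference, a minimal sketch of what the bucket resource might look like for an import (the logical ID and bucket name are placeholders). During a resource import, the template must describe the live configuration exactly; `ObjectLockEnabled` is immutable after creation, which is why declaring a different value later forces replacement. Adding `DeletionPolicy: Retain` also protects the data bucket from any future accidental stack update that tries to delete it:

```yaml
# Sketch: "DataBucket" and "my-original-bucket" are placeholders.
Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: my-original-bucket
      ObjectLockEnabled: true
      VersioningConfiguration:
        Status: Enabled
```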
What is the role of these Aurora 3 users (AWS_COMPREHEND_ACCESS, AWS_LAMBDA_ACCESS, AWS_LOAD_S3_ACCESS, AWS_SAGEMAKER_ACCESS, AWS_SELECT_S3_ACCESS), and how do I change their plugin?
I want to change the authentication plugin of these five Aurora 3 MySQL users from `mysql_native_password` to `sha256_password`, but I can't:

1. AWS_COMPREHEND_ACCESS
2. AWS_LAMBDA_ACCESS
3. AWS_LOAD_S3_ACCESS
4. AWS_SAGEMAKER_ACCESS
5. AWS_SELECT_S3_ACCESS

I have two questions about this issue:

1. What is the role of these five users?
2. How can I change their plugin from `mysql_native_password` to `sha256_password`?

Thank you.
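For background: these are reserved accounts that Aurora itself creates for its service integrations (Comprehend, Lambda, LOAD/SELECT FROM S3, SageMaker), which is why altering them is blocked. A sketch for inspecting them from the standard `mysql` client (the endpoint and admin user are placeholders):

```shell
# Sketch: the endpoint hostname and "admin" user are placeholders.
# Lists the reserved integration users and their auth plugin; because Aurora
# manages these accounts, ALTER USER against them is typically rejected.
mysql -h my-cluster.example-endpoint.rds.amazonaws.com -u admin -p \
  -e "SELECT user, host, plugin FROM mysql.user WHERE user LIKE 'AWS\\_%';"
```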
Upgrade Elasticsearch cluster size and use 2 subnet IDs
Hi, we are using HashiCorp Terraform to maintain our AWS resources. Currently we have an Elasticsearch (OpenSearch domain) cluster:

- a single subnet
- version 5.3 (really old)
- instance count = 1
- instance type = m4.large.elasticsearch
- volume_size = 512
- use EBS = true
- EBS volume type = gp2
- zone awareness = false (if providing one subnet), true (if providing 2 subnets)

If we upgraded to 2 instances with a larger instance type, such as m5.large.elasticsearch, could you please confirm:

- During the upgrade (I believe it is a blue/green upgrade), we can still call the API to insert documents, and the inserts made during the upgrade will still be available after the upgrade. Am I correct?
- Within the same subnet, upgrading to more nodes will rebalance some indexes and documents onto the 2nd node, which improves performance due to load balancing. Am I correct? Or will the 2nd node actually contain only replica shards from the 1st node, meaning queries will improve but inserts will not?
- If we used 2 subnets and 2 nodes, will each subnet get one node? If so, will one become the full replica of the other (containing only replica shards to ensure failover), so query/insert/update performance will not improve?
- If upgrading to 2 nodes, will the EBS volume size be shared by these 2 nodes (each gets 256, or just one shared EBS volume between them), or does each node get 512?
- Does version 5.3 even support dedicated master nodes?
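Since the resources are managed with Terraform, a sketch of the two-node, two-subnet configuration being asked about (resource names, domain name, and subnet references are all placeholders). One point it illustrates: `volume_size` in `ebs_options` is per data node, not a shared pool:

```hcl
# Sketch: names and subnet references are placeholders.
resource "aws_elasticsearch_domain" "example" {
  domain_name           = "my-domain"
  elasticsearch_version = "5.3"

  cluster_config {
    instance_count         = 2
    instance_type          = "m5.large.elasticsearch"
    zone_awareness_enabled = true # required when spreading nodes over 2 subnets
  }

  ebs_options {
    ebs_enabled = true
    volume_type = "gp2"
    volume_size = 512 # per node: each of the 2 nodes gets its own 512 GiB volume
  }

  vpc_options {
    subnet_ids = [aws_subnet.a.id, aws_subnet.b.id]
  }
}
```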
Why does AWS WAF (AWS-AWSManagedRulesAmazonIpReputationList) blacklist Google IPs?
![Enter image description here](/media/postImages/original/IMUR-t-R0gRUKjIlP2KxpRVQ) The IP shown in the screenshot is blacklisted by AWS WAF, and this IP is used by Google for indexing. Will this affect the SEO of my website?
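Before allow-listing anything, it's worth verifying that the flagged IP really belongs to Googlebot rather than a spoofed crawler. Google's documented check is a reverse DNS lookup followed by a forward lookup of the returned name. A sketch (the IP and hostname below are illustrative examples, not the one from the screenshot):

```shell
# Sketch: 66.249.66.1 is an example; use the IP from your WAF log.
# A genuine crawler IP reverse-resolves to *.googlebot.com or *.google.com ...
host 66.249.66.1
# ... and the returned hostname must forward-resolve back to the same IP.
host crawl-66-249-66-1.googlebot.com
```

If the IP verifies as Googlebot and the reputation rule is blocking it, repeated crawl failures can indeed affect indexing, so a scoped rule override or allow rule for verified crawler traffic may be warranted.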
Firehose delivery stream not writing to S3
I created a data stream connected to API Gateway, and a Firehose delivery stream, connected to the data stream, that writes to an S3 bucket. Nothing is being written to S3, even though the data is successfully received by the data stream. There are no error logs, and the dashboards show that no records are written to S3. I need help troubleshooting, and I would appreciate any pointers. Thanks, Haripriya
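Two things worth checking first: whether records are reaching Firehose at all (as opposed to only the Kinesis data stream), and what the destination/buffering configuration looks like. With default buffering, objects can take up to 300 seconds (or 5 MiB) before anything lands in S3; and if the delivery role lacks S3 permissions, failures may be silent unless error logging is enabled. A command sketch, assuming a delivery stream named `my-stream` (names and times are placeholders):

```shell
# Sketch: "my-stream" and the time window are placeholders.
# Inspect the S3 destination config, IAM role, and buffering hints.
aws firehose describe-delivery-stream --delivery-stream-name my-stream \
  --query 'DeliveryStreamDescription.Destinations'

# Confirm records are actually arriving at Firehose itself.
aws cloudwatch get-metric-statistics \
  --namespace AWS/Firehose --metric-name IncomingRecords \
  --dimensions Name=DeliveryStreamName,Value=my-stream \
  --start-time 2022-12-07T00:00:00Z --end-time 2022-12-07T01:00:00Z \
  --period 300 --statistics Sum
```

If `IncomingRecords` is zero, the problem is on the data-stream-to-Firehose side (source configuration or permissions) rather than the S3 write.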
How can I work around spontaneous NVML mismatch errors in the AWS ECS GPU image?
We're running g4dn.xlarge instances in a few ECS clusters for some ML services, and use the AWS-provided GPU-optimized ECS AMI (https://us-west-2.console.aws.amazon.com/ec2/v2/home?region=us-west-2#Images:visibility=public-images;imageId=ami-07dd70259efc9d59b). This morning at around 7-8am PST (12/7/2022), newly-provisioned container instances stopped being able to register with our ECS clusters. After some poking around on the boxes and reading /var/log/ecs/ecs-init.log, it turned out that we were getting errors in NVML that prevented the ECS init routine from completing:

```
[ERROR] Nvidia GPU Manager: setup failed: error initializing nvidia nvml: nvml: Driver/library version mismatch
```

This is the same AMI as some older instances in the cluster that started up fine. We noticed the issue simultaneously across 4 different clusters. Manually unloading and reloading the nvidia kernel modules on individual hosts resolved the mismatch and allowed ECS init to complete (and the instances to become available for task allocation):

```
[ec2-user@- ~]$ lsmod | grep nvidia
nvidia_drm             61440  0
nvidia_modeset       1200128  1 nvidia_drm
nvidia_uvm           1142784  0
nvidia              35459072  2 nvidia_uvm,nvidia_modeset
drm_kms_helper        184320  1 nvidia_drm
drm                   421888  4 drm_kms_helper,nvidia,nvidia_drm
i2c_core               77824  3 drm_kms_helper,nvidia,drm
[ec2-user@- ~]$ sudo rmmod nvidia_uvm
[ec2-user@- ~]$ sudo rmmod nvidia_drm
[ec2-user@- ~]$ sudo rmmod nvidia_modeset
[ec2-user@- ~]$ sudo rmmod nvidia
[ec2-user@- ~]$ nvidia-smi
```

This seems a bit bonkers, as it's a regression in the absence of a new AMI or any changes to our application or AWS resources. What causes this spontaneous mismatch, and how can we work around it in an automated fashion?
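A "Driver/library version mismatch" with no AMI change usually points to a driver package being updated at launch (e.g. by a package update step in user data or an automatic update) while the kernel still has the older modules loaded. Until the root cause is pinned down, the manual fix above can be automated in user data so it runs before the instance tries to register. A sketch under that assumption (module names match the `lsmod` output above; service names may differ per AMI version):

```shell
#!/bin/bash
# Sketch for EC2 user data: if NVML disagrees with the loaded kernel modules
# at boot, unload and reload the nvidia modules, then restart the ECS agent.
if ! nvidia-smi > /dev/null 2>&1; then
  # Unload in dependency order; ignore modules that are not loaded.
  rmmod nvidia_uvm nvidia_drm nvidia_modeset nvidia || true
  # nvidia-smi reloads the modules against the currently installed driver.
  nvidia-smi
  systemctl restart ecs
fi
```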
Medium EC2 instance keeps going down
Hi, my medium EC2 instance keeps going down. The server does not pass its status check, and the website being hosted can no longer load; it only times out. It doesn't make sense, and it is only fixed once I get in there and manually reboot it. Why does the server keep going down? Is there a technical fault with the hardware or software?
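While diagnosing the root cause (a failing system status check suggests the underlying host, while a failing instance status check points at the OS, often memory exhaustion on smaller instance types), an alarm can at least automate the recovery instead of a manual reboot. A command sketch with placeholder instance ID and region:

```shell
# Sketch: the instance ID and region are placeholders.
# Automatically recover the instance when the system status check fails.
aws cloudwatch put-metric-alarm \
  --alarm-name ec2-auto-recover \
  --namespace AWS/EC2 --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Maximum --period 60 --evaluation-periods 2 \
  --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:recover
```

Note the recover action only applies to system-level failures; if it's the instance status check that fails, checking memory and the system log after a crash is the next step.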