Questions tagged with Backup & Recovery
Hi, We have SQL Server 2017 Enterprise running on EC2 instances (i3.4xlarge) in Ireland. We use LiteSpeed to take full and log backups to S3 in the same region. This is a very large database. Through LiteSpeed we apply maximum compression and stripe the full backup across 7 files, each ~430 GB. On the SQL Server side, full backups are always scheduled to run on the AG secondary replica, so that replica handles only the backup workload. A full backup usually completes in ~16 to 18 hours, but sometimes it takes 30 to 40 hours, and in the worst cases 60 to 75 hours. For the slow runs we currently have no clue where things are actually slowing down. One thing I notice in the LiteSpeed backup summary: whenever backup throughput drops, the backup time increases proportionally. How would I go about troubleshooting this to find the root cause, and what actions should I take so that the full backup always completes in under 20 hours? Thanks in advance.
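One way to start narrowing this down is to compare instance-level throughput during a fast run versus a slow run, since a drop in S3 upload bandwidth would show up as reduced `NetworkOut` on the backup replica. A minimal sketch, assuming the eu-west-1 region; the instance ID and time window below are placeholders:

```shell
# Sketch: pull hourly average network egress for the AG secondary replica
# across a backup window, to check whether S3 upload bandwidth dropped.
# Instance ID and the start/end times are placeholders.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name NetworkOut \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2023-03-01T00:00:00Z \
  --end-time 2023-03-02T00:00:00Z \
  --period 3600 \
  --statistics Average \
  --region eu-west-1
```

Running the same query for a slow window and a fast window makes it easier to tell whether the bottleneck is network egress to S3 or something on the database side (e.g. read throughput or CPU during compression).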
We have implemented AWS Backup to back up the VMs in our SDDC (VMware Cloud on AWS). There are several hundred VMs to back up, and we have 4 Backup gateway appliances installed. Per the documentation, AWS Backup gateway allows 4 concurrent VM backups per gateway. We tried increasing the backup window, and backups are still failing. Questions: is there a way to monitor the Backup gateway appliance and the time taken per VM and per backup task? And how do we estimate the number of Backup gateway appliances needed to fit the backup window?
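For per-VM timing, one starting point is the AWS Backup job list itself, which records start and completion timestamps per resource. A sketch, not a full monitoring solution:

```shell
# Sketch: list completed AWS Backup jobs with their start and completion
# times so per-VM duration can be computed offline (e.g. in a spreadsheet).
aws backup list-backup-jobs \
  --by-state COMPLETED \
  --query 'BackupJobs[].{Resource:ResourceArn,Started:CreationDate,Finished:CompletionDate}' \
  --output table
```

Summing durations and dividing by the 4-concurrent-jobs-per-gateway limit gives a rough lower bound on how many gateways a given backup window requires.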
Hi, I am using AWS Backup to create a snapshot of my Aurora PostgreSQL cluster and instance. In the resource assignment I added the ARN of the cluster and the ARN of the instance, but AWS Backup only backs up the cluster, so after restoring I have one cluster with 0 instances. Why does AWS Backup not back up my RDS instance? Thanks
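Aurora storage belongs to the cluster, so an Aurora recovery point restores the cluster only; instances are compute endpoints that have to be re-created and attached afterwards. A minimal sketch of that second step; the identifiers and instance class are placeholders:

```shell
# Sketch: after restoring the Aurora cluster from the recovery point,
# attach a new DB instance to it. Identifiers and class are placeholders.
aws rds create-db-instance \
  --db-instance-identifier restored-instance-1 \
  --db-cluster-identifier restored-cluster \
  --engine aurora-postgresql \
  --db-instance-class db.r5.large
```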
Starting from 2023-03-01 the `mongodump` command fails against DocumentDB 4.0 instances. Before that date the `mongodump` command worked just fine. Error trace: **Failed: error checking for AtlasProxy: Unknown admin command atlasVersion** I've used mongodb-org-tools=4.4.15 and mongodb-org-tools=5.0.15 to run `mongodump`, with the same result. Command:
```
mongodump --ssl \
  --host="$DB_HOST:$DB_PORT" \
  --db="$DB_DATABASE" \
  --username="$DB_USERNAME" \
  --password="$DB_PASSWORD" \
  --archive=./production-mongodump.gz \
  --numParallelCollections=4 \
  --authenticationDatabase="$DB_AUTHENTICATION_DATABASE" \
  --authenticationMechanism=SCRAM-SHA-1 \
  --sslCAFile=./rds-combined-ca-bundle.pem \
  --gzip
```
Any idea what this could be?
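The `mongodb-org-tools` metapackage pulls in the separate `mongodb-database-tools` package, and a newer tools release added an `atlasVersion` probe that DocumentDB does not implement. One commonly reported workaround is pinning the database tools to an earlier release; the exact version below is an assumption and may need adjusting:

```shell
# Workaround sketch: pin mongodb-database-tools to a release that predates
# the atlasVersion probe (100.6.1 here is an assumption, not a confirmed fix).
sudo apt-get install -y --allow-downgrades mongodb-database-tools=100.6.1
mongodump --version   # confirm the pinned tools version is the one in use
```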
I have 3 websites hosted with AWS, but I cannot see how to back them up (I need a remote copy of each, e.g. on my Google Drive). I've tried installing a plugin (UpdraftPlus), but got the error "Installation failed, could not create directory." If there's an easy way of fixing this, could you please explain it to me like I was 5 years old (or maybe a little bit older)?
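That "could not create directory" error usually means the web server user cannot write to the WordPress content directory. A minimal sketch, assuming a typical Linux host where the site lives under `/var/www/html` and the web server runs as `www-data` (both the path and the user name are assumptions; on Bitnami images the user is often `daemon`):

```shell
# Sketch: give the web server user write access to wp-content so plugins
# such as UpdraftPlus can create their working directories.
# Path and user/group below are assumptions; adjust for your host.
WP_CONTENT=/var/www/html/wp-content
sudo chown -R www-data:www-data "$WP_CONTENT"
sudo chmod -R 775 "$WP_CONTENT"
```

After that, retrying the plugin install from the WordPress admin should be able to create its directory.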
Suppose a MySQL RDS instance was created with engine version 5.7.33 and a backup of it was taken today, with plans to retain the backup for seven years. After seven years, if AWS no longer supports version 5.7.33 and the user wants to restore the snapshot, can they do so? Additionally, will the data be recoverable even if the database engine version is no longer supported? Furthermore, if restoring from the snapshot is not possible, what are some alternative methods for storing backups of an RDS instance for long-term retention?
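On the alternative-methods part of the question, one engine-version-independent option is exporting the snapshot's data to S3, where it is stored as Parquet files that remain readable without any running MySQL engine. A sketch; every identifier, ARN, and bucket name below is a placeholder:

```shell
# Sketch: export an RDS snapshot's data to S3 as Parquet for long-term
# retention independent of engine-version support.
# All identifiers, ARNs, bucket and key names are placeholders.
aws rds start-export-task \
  --export-task-identifier mysql-57-archive-2024 \
  --source-arn arn:aws:rds:eu-west-1:123456789012:snapshot:mydb-final \
  --s3-bucket-name my-longterm-backups \
  --iam-role-arn arn:aws:iam::123456789012:role/rds-s3-export \
  --kms-key-id arn:aws:kms:eu-west-1:123456789012:key/EXAMPLE-KEY-ID
```

A logical dump (e.g. `mysqldump` to S3/Glacier) is another engine-independent fallback for seven-year retention.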
I accidentally deleted the /var folder, so I basically lost everything, including databases and the folders under /var/www/html. Is there any way to recover them? Is there any paid support that can help? Please advise.
I am currently using the AWS Lightsail service and I would like to set up auto-snapshots, but I only want them to run on Mondays, Wednesdays, and Fridays. In the picture below I can only set a daily schedule. Can someone please walk me through the steps to achieve this? Any help would be greatly appreciated. ![aws](/media/postImages/original/IMRfqCVPesQdG0A4a1qDdiFg)
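Since the built-in auto-snapshot setting only offers a daily cadence, one workaround is to disable it and drive manual snapshots from a scheduler. A sketch, assuming a cron-capable host with the AWS CLI; the instance name and region are placeholders:

```shell
# Sketch: take a Lightsail snapshot from cron on Mon/Wed/Fri at 02:00 UTC.
# Example crontab entry (the script path is a placeholder):
#   0 2 * * 1,3,5 /usr/local/bin/lightsail-snapshot.sh
aws lightsail create-instance-snapshot \
  --instance-name my-lightsail-instance \
  --instance-snapshot-name "my-lightsail-instance-$(date +%Y%m%d)" \
  --region eu-west-1
```

The same schedule could equally be expressed as an EventBridge cron rule invoking a small Lambda function, which avoids needing an always-on host.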
How can I export and import EC2 Launch Templates, including version history and launch scripts (advanced details)? Exporting to another account/region is not sufficient; I need a way to get them off AWS and store them locally. Thanks!
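There is no single "export launch template" call, but describing a template's versions returns the full `LaunchTemplateData` for each version (including user data scripts) as JSON that can be kept locally. A sketch; the template ID and file names are placeholders:

```shell
# Sketch: dump every version of a launch template, including the embedded
# user data, to a local JSON file. Template ID and file name are placeholders.
aws ec2 describe-launch-template-versions \
  --launch-template-id lt-0123456789abcdef0 \
  --output json > lt-0123456789abcdef0-all-versions.json
```

To re-import, the `LaunchTemplateData` object of a saved version can be fed to `aws ec2 create-launch-template` (and subsequent versions to `create-launch-template-version`) via `--launch-template-data file://…`; version numbers are reassigned on import rather than preserved.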
Today, while trying to access my table through the API, I accidentally ran the initialization function, which overwrote my data table with a blank one. We had not set up automatic backups or taken a snapshot of the table. I found the event in the event stream, and I am wondering if there is any way we can recover the data. It happened 20 minutes ago, and we have been trying to figure out a way.
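If this is a DynamoDB table (an assumption; the question doesn't name the service), then without on-demand backups or point-in-time recovery enabled before the incident there is no server-side copy to restore from. For the future, PITR allows restoring to any second in the preceding 35 days. A sketch; the table name is a placeholder:

```shell
# Sketch: enable point-in-time recovery on a DynamoDB table so accidental
# overwrites can be rolled back later. Table name is a placeholder.
aws dynamodb update-continuous-backups \
  --table-name my-table \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true
```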
Is there a way to set up a lifecycle policy using the EBS-backed AMI policy type and share that EBS-backed AMI across accounts through AWS Data Lifecycle Manager? I see cross-account sharing in the guides for EBS snapshot policies, but I do not see anything about cross-account sharing automation in the EBS-backed AMI policy documentation (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ami-policy.html). I know how to share an AMI manually across accounts. If there is no way to do this through Data Lifecycle Manager, could somebody describe another way to approach the problem (e.g. a Lambda function that finds the AMI that is created on a weekly basis and shares it across accounts)?
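The Lambda approach mentioned at the end can be sketched in CLI form: find the newest self-owned AMI matching some marker and add a launch permission for the target account. The tag filter and account ID below are assumptions, not values DLM is guaranteed to use:

```shell
# Sketch: locate the most recent self-owned AMI carrying an assumed tag
# and share it with a target account. Tag key and account ID are placeholders.
AMI_ID=$(aws ec2 describe-images \
  --owners self \
  --filters Name=tag-key,Values=dlm:managed \
  --query 'sort_by(Images,&CreationDate)[-1].ImageId' \
  --output text)
aws ec2 modify-image-attribute \
  --image-id "$AMI_ID" \
  --launch-permission "Add=[{UserId=222222222222}]"
```

Note that sharing the AMI alone is not enough if its snapshots are encrypted; the backing snapshots and KMS key also need to be shared with the target account.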
Hello, I have a question related to EC2 Storage Gateway HA that was discussed here https://repost.aws/questions/QU7uSNAm4qR1C1VKircva_NQ/aws-storage-gateway-ec-2-high-availability and here https://repost.aws/questions/QUHkuVnkdPT7WsGWmECm14TA/ec-2-storage-gateway-high-availability-configuration
1. For the file gateway: if the EC2 instance is destroyed, will I still have access to the files stored in S3?
2. What happens if the entire AZ is gone? Can I recover the EC2 instance in another AZ?
3. If the answer to 1 is NO: is it possible to back up or snapshot the Storage Gateway config/cache?